- Timestamp: Apr 12, 2017, 10:15:09 PM
- Branches: ADT, aaron-thesis, arm-eh, ast-experimental, cleanup-dtors, deferred_resn, demangler, enum, forall-pointer-decay, jacob/cs343-translation, jenkins-sandbox, master, new-ast, new-ast-unique-expr, new-env, no_list, persistent-indexer, pthread-emulation, qualifiedEnum, resolv-new, with_gc
- Children: 50b7e8c
- Parents: ff178ee
- Files: 1 edited
doc/generic_types/generic_types.tex
rff178ee r549950c 87 87 88 88 % inline code @...@ 89 \lstMakeShortInline@ 89 \lstMakeShortInline@% 90 90 91 91 % ACM Information … … 151 151 The C programming language is a foundational technology for modern computing with millions of lines of code implementing everything from commercial operating-systems to hobby projects. This installation base and the programmers producing it represent a massive software-engineering investment spanning decades and likely to continue for decades more. 152 152 The \citet{TIOBE} ranks the top 5 most popular programming languages as: Java 16\%, \Textbf{C 7\%}, \Textbf{\CC 5\%}, \CS 4\%, Python 4\% = 36\%, where the next 50 languages are less than 3\% each with a long tail. The top 3 rankings over the past 30 years are: 153 \lstDeleteShortInline@ 153 \lstDeleteShortInline@% 154 154 \begin{center} 155 155 \setlength{\tabcolsep}{10pt} … … 164 164 \end{tabular} 165 165 \end{center} 166 \lstMakeShortInline@ 166 \lstMakeShortInline@% 167 167 Love it or hate it, C is extremely popular, highly used, and one of the few system's languages. 168 168 In many cases, \CC is often used solely as a better C. … … 251 251 252 252 Finally, \CFA allows variable overloading: 253 \lstDeleteShortInline@ 253 \lstDeleteShortInline@% 254 254 \par\smallskip 255 255 \begin{tabular}{@{}l@{\hspace{\parindent}}|@{\hspace{\parindent}}l@{}} … … 266 266 \end{lstlisting} 267 267 \end{tabular} 268 \lstMakeShortInline@269 268 \smallskip\par\noindent 269 \lstMakeShortInline@% 270 270 Hence, the single name @MAX@ replaces all the C type-specific names: @SHRT_MAX@, @INT_MAX@, @DBL_MAX@. 271 271 As well, restricted constant overloading is allowed for the values @0@ and @1@, which have special status in C, \eg the value @0@ is both an integer and a pointer literal, so its meaning depends on context. … … 273 273 \begin{lstlisting} 274 274 int x; 275 if (x) // if (x != 0) 276 x++; // x += 1; 275 if (x) x++ $\C{// if (x != 0) x += 1;}$ 277 276 \end{lstlisting} 278 277 Every if statement in C compares the condition with @0@, and every increment and decrement operator is semantically equivalent to adding or subtracting the value @1@ and storing the result. … … 377 376 forall( otype T ) T value( pair( const char *, T ) p ) { return p.second; } 378 377 forall( dtype F, otype T ) T value_p( pair( F *, T * ) p ) { return *p.second; } 379 380 378 pair( const char *, int ) p = { "magic", 42 }; 381 379 int magic = value( p ); … … 493 491 scalar(metres) marathon = half_marathon + half_marathon; 494 492 scalar(litres) two_pools = swimming_pool + swimming_pool; 495 marathon + swimming_pool; $\C{// compilation ERROR}$493 marathon + swimming_pool; $\C{// compilation ERROR}$ 496 494 \end{lstlisting} 497 495 @scalar@ is a dtype-static type, so all uses have a single structure definition, containing @unsigned long@, and can share the same implementations of common functions like @?+?@. … … 499 497 However, the \CFA type-checker ensures matching types are used by all calls to @?+?@, preventing nonsensical computations like adding a length to a volume. 500 498 499 501 500 \section{Tuples} 502 501 \label{sec:tuples} 503 502 504 The @pair(R, S)@ generic type used as an example in the previous section can be considered a special case of a more general \emph{tuple} data structure. The authors have implemented tuples in \CFA, with a design particularly motivated by two use cases: \emph{multiple-return-value functions} and \emph{variadic functions}. 505 506 In standard C, functions can return at most one value. 
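For concreteness, the two workarounds described next can be sketched as follows; the type and function names (@quotrem_t@, @divmod_agg@, @divmod_out@) are illustrative only and are not part of C or of the paper:
\begin{lstlisting}
typedef struct { int quo, rem; } quotrem_t;   // aggregation: a named record type must be invented
quotrem_t divmod_agg( int num, int den ) {
	return (quotrem_t){ num / den, num % den };
}
void divmod_out( int num, int den, int * quo, int * rem ) {   // aliasing: extra out-parameters
	*quo = num / den;  *rem = num % den;   // caller must allocate storage for quo and rem
}
\end{lstlisting}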
This restriction results in code that emulates functions with multiple return values by \emph{aggregation} or by \emph{aliasing}. In the former situation, the function designer creates a record type that combines all of the return values into a single type. Unfortunately, the designer must come up with a name for the return type and for each of its fields. Unnecessary naming is a common programming language issue, introducing verbosity and a complication of the user's mental model. As such, this technique is effective when used sparingly, but can quickly get out of hand if many functions need to return different combinations of types. In the latter approach, the designer simulates multiple return values by passing the additional return values as pointer parameters. The pointer parameters are assigned inside of the function body to emulate a return. Using this approach, the caller is directly responsible for allocating storage for the additional temporary return values. This responsibility complicates the call site with a sequence of variable declarations leading up to the call. Also, while a disciplined use of @const@ can give clues about whether a pointer parameter is going to be used as an out parameter, it is not immediately obvious from only the function signature whether the callee expects such a parameter to be initialized before the call. Furthermore, while many C functions that accept pointers are designed so that it is safe to pass @NULL@ as a parameter, there are many C functions that are not null-safe. On a related note, C does not provide a standard mechanism to state that a parameter is going to be used as an additional return value, which makes the job of ensuring that a value is returned more difficult for the compiler. 507 508 C does provide a mechanism for variadic functions through manipulation of @va_list@ objects, but it is notoriously type-unsafe. A variadic function is one that contains at least one parameter, followed by @...@ as the last token in the parameter list. In particular, some form of \emph{argument descriptor} is needed to inform the function of the number of arguments and their types, commonly a format string or counter parameter. It is important to note that both of these mechanisms are inherently redundant, because they require the user to specify information that the compiler knows explicitly. This required repetition is error prone, because it is easy for the user to add or remove arguments without updating the argument descriptor. In addition, C requires the programmer to hard code all of the possible expected types. As a result, it is cumbersome to write a variadic function that is open to extension. For example, consider a simple function that sums $N$ @int@s: 509 \begin{lstlisting} 510 int sum(int N, ...) { 511 va_list args; 512 va_start(args, N); // must manually specify last non-variadic argument 513 int ret = 0; 514 while(N) { 515 ret += va_arg(args, int); // must specify type 516 N--; 517 } 518 va_end(args); 519 return ret; 520 } 521 522 sum(3, 10, 20, 30); // must keep initial counter argument in sync 523 \end{lstlisting} 524 525 The @va_list@ type is a special C data type that abstracts variadic argument manipulation. The @va_start@ macro initializes a @va_list@, given the last named parameter. Each use of the @va_arg@ macro allows access to the next variadic argument, given a type. 
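For reference, a self-contained version of the @sum@ example is sketched below, together with a mismatched call that anticipates the safety problem discussed next; only the required header include and the final call are additions to the paper's example:
\begin{lstlisting}
#include <stdarg.h>   // required for va_list, va_start, va_arg, va_end
int sum( int N, ... ) {
	va_list args;
	va_start( args, N );   // N is the last named parameter
	int ret = 0;
	while ( N-- ) ret += va_arg( args, int );   // element type restated manually
	va_end( args );
	return ret;
}
sum( 3, 10, 20, 30 );     // 60
sum( 3, 10, 20, 30.0 );   // compiles, but undefined behaviour: a double is read as an int
\end{lstlisting}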
Since the function signature does not provide any information on what types can be passed to a variadic function, the compiler does not perform any error checks on a variadic call. As such, it is possible to pass any value to the @sum@ function, including pointers, floating-point numbers, and structures. In the case where the provided type is not compatible with the argument's actual type after default argument promotions, or if too many arguments are accessed, the behaviour is undefined~\citep{C11}. Furthermore, there is no way to perform the necessary error checks in the @sum@ function at run-time, since type information is not carried into the function body. Since they rely on programmer convention rather than compile-time checks, variadic functions are inherently unsafe. 526 527 In practice, compilers can provide warnings to help mitigate some of the problems. For example, GCC provides the @format@ attribute to specify that a function uses a format string, which allows the compiler to perform some checks related to the standard format specifiers. Unfortunately, this attribute does not permit extensions to the format string syntax, so a programmer cannot extend it to warn for mismatches with custom types. 503 In many languages, functions can return at most one value; 504 however, many operations have multiple outcomes, some exceptional. 505 Consider C's @div@ and @remquo@ functions, which return the quotient and remainder for a division of integer and floating-point values, respectively. 506 \begin{lstlisting} 507 typedef struct { int quo, rem; } div_t; 508 div_t div( int num, int den ); 509 double remquo( double num, double den, int * quo ); 510 div_t qr = div( 13, 5 ); $\C{// return quotient/remainder aggregate}$ 511 int q; 512 double r = remquo( 13.5, 5.2, &q ); $\C{// return return remainder, alias quotient}$ 513 \end{lstlisting} 514 @div@ aggregates the quotient/remainder in a structure, while @remquo@ aliases a parameter to an argument. 515 Both approaches are awkward. 516 Alternatively, a programming language can directly support returning multiple values, \eg in \CFA: 517 \begin{lstlisting} 518 [ int, int ] div( int num, int den ); 519 [ double, double ] div( double num, double den ); 520 int q, r; 521 double q, r; 522 [ q, r ] = div( 13, 5 ); $\C{// select appropriate div and q, r}$ 523 [ q, r ] = div( 13.5, 5.2 ); 524 \end{lstlisting} 525 Clearly, this approach is straightforward to understand and use; 526 therefore, why do most programming language not support this obvious feature or provide it awkwardly? 527 The answer is that there are complex consequences that cascade through multiple aspects of the language, especially the type-system. 528 This section show these consequences and how \CFA deals with them. 528 529 529 530 530 531 \subsection{Tuple Expressions} 531 532 532 The tuple extensions in \CFA can express multiple return values and variadic function parameters in an efficient and type-safe manner. \CFA introduces \emph{tuple expressions} and \emph{tuple types}. A tuple expression is an expression producing a fixed-size, ordered list of values of heterogeneous types. The type of a tuple expression is the tuple of the subexpression types, or a \emph{tuple type}. In \CFA, a tuple expression is denoted by a comma-separated list of expressions enclosed in square brackets. For example, the expression @[5, 'x', 10.5]@ has type @[int, char, double]@. The previous expression has three \emph{components}. 
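A few further tuple expressions and their types, as an informal sketch (the nested component is shown with its type before any flattening, which is described later):
\begin{lstlisting}
[ 5, 'x', 10.5 ];         // type [int, char, double]
[ 5, [ 'x', 10.5 ] ];     // nested tuple component, type [int, [char, double]]
\end{lstlisting}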
Each component in a tuple expression can be any \CFA expression, including another tuple expression. The order of evaluation of the components in a tuple expression is unspecified, to allow a compiler the greatest flexibility for program optimization. It is, however, guaranteed that each component of a tuple expression is evaluated for side-effects, even if the result is not used. Multiple-return-value functions can equivalently be called \emph{tuple-returning functions}. 533 534 \CFA allows declaration of \emph{tuple variables}, variables of tuple type. For example: 535 \begin{lstlisting} 536 [int, char] most_frequent(const char * ); 537 538 const char* str = "hello, world!"; 539 [int, char] freq = most_frequent(str); 540 printf("%s -- %d %c\n", str, freq); 541 \end{lstlisting} 542 In this example, the type of the @freq@ and the return type of @most_frequent@ are both tuple types. Also of note is how the tuple expression @freq@ is implicitly flattened into separate @int@ and @char@ arguments to @printf@; this code snippet could have been shortened by replacing the last two lines with @printf("%s -- %d %c\n", str, most_frequent(str));@ using exactly the same mechanism. 543 544 In addition to variables of tuple type, it is also possible to have pointers to tuples, and arrays of tuples. Tuple types can be composed of any types, except for array types, since arrays are not of fixed size, which makes tuple assignment difficult when a tuple contains an array. 545 \begin{lstlisting} 546 [double, int] di; 547 [double, int] * pdi 548 [double, int] adi[10]; 549 \end{lstlisting} 550 This example declares a variable of type @[double, int]@, a variable of type pointer to @[double, int]@, and an array of ten @[double, int]@. 533 The addition of multiple-return-value (MRV) functions are useless without a syntax for accepting multiple values at the call-site. 534 The simplest mechanism for capturing the return values is variable assignment, allowing the valuse to be retrieved directly. 535 As such, \CFA allows assigning multiple values from a function into multiple variables, using a square-bracketed list of lvalue expressions (as above), called a \emph{tuple}. 536 537 However, functions also use \emph{composition} (nested calls), with the direct consequence that MRV functions must also support composition to be orthogonal with single-returning-value (SRV) functions, \eg: 538 \begin{lstlisting} 539 printf( "%d %d\n", div( 13, 5 ) ); $\C{// return values seperated into arguments}$ 540 \end{lstlisting} 541 Here, the values returned by @div@ are composed with the call to @printf@. 542 However, the \CFA type-system must support significantly more complex composition: 543 \begin{lstlisting} 544 [ int, int ] foo$\(_1\)$( int ); 545 [ double ] foo$\(_2\)$( int ); 546 void bar( int, double, double ); 547 bar( foo( 3 ), foo( 3 ) ); 548 \end{lstlisting} 549 The type-resolver only has the tuple return-types to resolve the call to @bar@ as the parameters are identical, which involves unifying the possible @foo@ functions with @bar@'s parameter list. 550 No combination of @foo@s are an exact match with @bar@'s parameters, so the resolver applies C conversions. 
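Writing @foo1@ and @foo2@ for the two @foo@ overloads, the two plausible resolutions and their flattened argument types can be sketched as follows (informal annotation, not compiler output):
\begin{lstlisting}
bar( foo1( 3 ), foo2( 3 ) );   // flattens to (int, int, double): one safe int-to-double conversion
bar( foo2( 3 ), foo1( 3 ) );   // flattens to (double, int, int): one unsafe double-to-int and two safe conversions
\end{lstlisting}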
551 The minimal cost is @bar( foo@$_1$@( 3 ), foo@$_2$@( 3 ) )@, giving (@int@, {\color{green}@int@}, @double@) to (@int@, {\color{green}@double@}, @double@) with one {\color{green}safe} (widening) conversion from @int@ to @double@ versus ({\color{red}@double@}, {\color{green}@int@}, {\color{green}@int@}) to ({\color{red}@int@}, {\color{green}@double@}, {\color{green}@double@}) with one {\color{red}unsafe} (narrowing) conversion from @double@ to @int@ and two safe conversions. 552 553 554 \subsection{Tuple Variables} 555 556 An important observation from function composition is that new variable names are not required to initialize parameters from an MRV function. 557 As a consequence, \CFA allows declaration of \emph{tuple variables} that can be initialized from an MRV function, \eg: 558 \begin{lstlisting} 559 [ int, int ] qr = div( 13, 5 ); $\C{// tuple-variable declaration and initialization}$ 560 [ double, double ] qr = div( 13.5, 5.2 ); 561 \end{lstlisting} 562 where the tuple variable-name serves the same purpose as the parameter name(s). 563 Tuple variables can be composed of any types, except for array types, since array sizes are generally unknown. 564 565 One way to access the tuple-variable components is with assignment or composition: 566 \begin{lstlisting} 567 [ q, r ] = qr; $\C{// access tuple-variable components}$ 568 printf( "%d %d\n", qr ); 569 \end{lstlisting} 570 \CFA also supports \emph{tuple indexing} to access single components of a typle expression: 571 \begin{lstlisting} 572 [int, int] * p = &qr; $\C{// tuple pointer}$ 573 int rem = qr.1; $\C{// access remainder}$ 574 int quo = div( 13, 5 ).0; $\C{// access quotient}$ 575 p->0 = 5; $\C{// change quotient}$ 576 bar( qr.1, qr ); $\C{// pass remainder and quotient/remainder}$ 577 rem = [42, div( 13, 5 )].0.1; $\C{// access 2nd component of 1st component of tuple expression}$ 578 \end{lstlisting} 579 551 580 552 581 \subsection{Flattening and Restructuring} 553 582 554 In function call contexts, tuples support implicit flattening and restructuring conversions. Tuple flattening recursively expands a tuple into the list of its basic components. Tuple structuring packages a list of expressions into a value of tuple type. 555 \begin{lstlisting} 556 int f(int, int); 557 int g([int, int]); 558 int h(int, [int, int]); 583 In function call contexts, tuples support implicit flattening and restructuring conversions. 584 Tuple flattening recursively expands a tuple into the list of its basic components. 585 Tuple structuring packages a list of expressions into a value of tuple type, \eg: 586 \lstDeleteShortInline@% 587 \par\smallskip 588 \begin{tabular}{@{}l@{\hspace{\parindent}}|@{\hspace{\parindent}}l@{}} 589 \begin{lstlisting} 590 int f( int, int ); 591 int g( [int, int] ); 592 int h( int, [int, int] ); 559 593 [int, int] x; 594 \end{lstlisting} 595 & 596 \begin{lstlisting} 560 597 int y; 561 562 f(x); // flatten 563 g(y, 10); // structure 564 h(x, y); // flatten & structure 565 \end{lstlisting} 566 In \CFA, each of these calls is valid. In the call to @f@, @x@ is implicitly flattened so that the components of @x@ are passed as the two arguments to @f@. For the call to @g@, the values @y@ and @10@ are structured into a single argument of type @[int, int]@ to match the type of the parameter of @g@. 
Finally, in the call to @h@, @y@ is flattened to yield an argument list of length 3, of which the first component of @x@ is passed as the first parameter of @h@, and the second component of @x@ and @y@ are structured into the second argument of type @[int, int]@. The flexible structure of tuples permits a simple and expressive function call syntax to work seamlessly with both single- and multiple-return-value functions, and with any number of arguments of arbitrarily complex structure. 567 568 % In {K-W C} \citep{Buhr94a,Till89}, a precursor to \CFA, there were 4 tuple coercions: opening, closing, flattening, and structuring. Opening coerces a tuple value into a tuple of values, while closing converts a tuple of values into a single tuple value. Flattening coerces a nested tuple into a flat tuple, \ie it takes a tuple with tuple components and expands it into a tuple with only non-tuple components. Structuring moves in the opposite direction, \ie it takes a flat tuple value and provides structure by introducing nested tuple components. 569 570 In \CFA, the design has been simplified to require only the two conversions previously described, which trigger only in function call and return situations. Specifically, the expression resolution algorithm examines all of the possible alternatives for an expression to determine the best match. In resolving a function call expression, each combination of function value and list of argument alternatives is examined. Given a particular argument list and function value, the list of argument alternatives is flattened to produce a list of non-tuple valued expressions. Then the flattened list of expressions is compared with each value in the function's parameter list. If the parameter's type is not a tuple type, then the current argument value is unified with the parameter type, and on success the next argument and parameter are examined. If the parameter's type is a tuple type, then the structuring conversion takes effect, recursively applying the parameter matching algorithm using the tuple's component types as the parameter list types. Assuming a successful unification, eventually the algorithm gets to the end of the tuple type, which causes all of the matching expressions to be consumed and structured into a tuple expression. For example, in 571 \begin{lstlisting} 572 int f(int, [double, int]); 573 f([5, 10.2], 4); 574 \end{lstlisting} 575 There is only a single definition of @f@, and 3 arguments with only single interpretations. First, the argument alternative list @[5, 10.2], 4@ is flattened to produce the argument list @5, 10.2, 4@. Next, the parameter matching algorithm begins, with $P =~$@int@ and $A =~$@int@, which unifies exactly. Moving to the next parameter and argument, $P =~$@[double, int]@ and $A =~$@double@. This time, the parameter is a tuple type, so the algorithm applies recursively with $P' =~$@double@ and $A =~$@double@, which unifies exactly. Then $P' =~$@int@ and $A =~$@double@, which again unifies exactly. At this point, the end of $P'$ has been reached, so the arguments @10.2, 4@ are structured into the tuple expression @[10.2, 4]@. Finally, the end of the parameter list $P$ has also been reached, so the final expression is @f(5, [10.2, 4])@. 
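The net effect can be pictured as a rewriting of the original call (a sketch only; the compiler performs this restructuring during expression resolution rather than as a source-to-source transformation):
\begin{lstlisting}
f( [5, 10.2], 4 );    // original call; arguments flatten to 5, 10.2, 4
f( 5, [10.2, 4] );    // resolved form; trailing arguments structured to match the [double, int] parameter
\end{lstlisting}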
598 f( x ); $\C[1in]{// flatten}$ 599 g( y, 10 ); $\C{// structure}$ 600 h( x, y ); $\C{// flatten and structure}\CRT{}$ 601 \end{lstlisting} 602 \end{tabular} 603 \smallskip\par\noindent 604 \lstMakeShortInline@% 605 In the call to @f@, @x@ is implicitly flattened so the components of @x@ are passed as the two arguments. 606 In the call to @g@, the values @y@ and @10@ are structured into a single argument of type @[int, int]@ to match the parameter type of @g@. 607 Finally, in the call to @h@, @x@ is flattened to yield an argument list of length 3, of which the first component of @x@ is passed as the first parameter of @h@, and the second component of @x@ and @y@ are structured into the second argument of type @[int, int]@. 608 The flexible structure of tuples permits a simple and expressive function call syntax to work seamlessly with both SRF and MRF, and with any number of arguments of arbitrarily complex structure. 609 610 611 \subsection{Tuple Assignment} 612 613 An assignment where the left side is a tuple type is called \emph{tuple assignment}. 614 There are two kinds of tuple assignment depending on whether the right side of the assignment operator has a tuple type or a non-tuple type, called \emph{multiple} and \emph{mass assignment}, respectively. 615 \lstDeleteShortInline@% 616 \par\smallskip 617 \begin{tabular}{@{}l@{\hspace{\parindent}}|@{\hspace{\parindent}}l@{}} 618 \begin{lstlisting} 619 int x = 10; 620 double y = 3.5; 621 [int, double] z; 622 623 \end{lstlisting} 624 & 625 \begin{lstlisting} 626 z = [x, y]; $\C[1in]{// multiple assignment}$ 627 [x, y] = z; $\C{// multiple assignment}$ 628 z = 10; $\C{// mass assignment}$ 629 [y, x] = 3.14; $\C{// mass assignment}\CRT{}$ 630 \end{lstlisting} 631 \end{tabular} 632 \smallskip\par\noindent 633 \lstMakeShortInline@% 634 Both kinds of tuple assignment have parallel semantics, so that each value on the left and right side is evaluated before any assignments occur. 635 As a result, it is possible to swap the values in two variables without explicitly creating any temporary variables or calling a function, \eg, @[x, y] = [y, x]@. 636 This semantics means mass assignment differs from C cascading assignment (\eg @a=b=c@) in that conversions are applied in each individual assignment, which prevents data loss from the chain of conversions that can happen during a cascading assignment. 637 For example, @[y, x] = 3.14@ performs the assignments @y = 3.14@ and @x = 3.14@, yielding @y == 3.14@ and @x == 3@; 638 whereas C cascading assignment @y = x = 3.14@ performs the assignments @x = 3.14@ and @y = x@, yielding @3@ in @y@ and @x@. 639 Finally, tuple assignment is an expression where the result type is the type of the left-hand side of the assignment, just like all other assignment expressions in C. 640 This example shows multiple, mass, and cascading assignment used in one expression: 641 \begin{lstlisting} 642 int a, b; 643 double c, d; 644 [void] f( [int, int] ); 645 f( [c, a] = [b, d] = 1.5 ); $\C{// assignments in parameter list}$ 646 \end{lstlisting} 647 576 648 577 649 \subsection{Member Access} 578 650 579 At times, it is desirable to access a single component of a tuple-valued expression without creating unnecessary temporary variables to assign to. Given a tuple-valued expression @e@ and a compile-time constant integer $i$ where $0 \leq i < n$, where $n$ is the number of components in @e@, @e.i@ accesses the $i$\textsuperscript{th} component of @e@. 
For example, 580 \begin{lstlisting} 581 [int, double] x; 582 [char *, int] f(); 583 void g(double, int); 584 [int, double] * p; 585 586 int y = x.0; // access int component of x 587 y = f().1; // access int component of f 588 p->0 = 5; // access int component of tuple pointed-to by p 589 g(x.1, x.0); // rearrange x to pass to g 590 double z = [x, f()].0.1; // access second component of first component of tuple expression 591 \end{lstlisting} 592 As seen above, tuple-index expressions can occur on any tuple-typed expression, including tuple-returning functions, square-bracketed tuple expressions, and other tuple-index expressions, provided the retrieved component is also a tuple. This feature was proposed for {K-W C}, but never implemented~\citep[p.~45]{Till89}. 593 594 It is possible to access multiple fields from a single expression using a \emph{member-access tuple expression}. The result is a single tuple expression whose type is the tuple of the types of the members. For example, 651 It is also possible to access multiple fields from a single expression using a \emph{member-access}. 652 The result is a single tuple-valued expression whose type is the tuple of the types of the members, \eg: 595 653 \begin{lstlisting} 596 654 struct S { int x; double y; char * z; } s; 597 s.[x, y, z]; 598 \end{lstlisting} 599 Here, the type of @s.[x, y, z]@ is @[int, double, char *]@. A member tuple expression has the form @a.[x, y, z];@ where @a@ is an expression with type @T@, where @T@ supports member access expressions, and @x, y, z@ are all members of @T@ with types @T$_x$@, @T$_y$@, and @T$_z$@ respectively. Then the type of @a.[x, y, z]@ is @[T$_x$, T$_y$, T$_z$]@. 600 601 Since tuple index expressions are a form of member-access expression, it is possible to use tuple-index expressions in conjunction with member tuple expressions to manually restructure a tuple (\eg rearrange components, drop components, duplicate components, etc.): 655 s.[x, y, z] = 0; 656 \end{lstlisting} 657 Here, the mass assignment sets all members of @s@ to zero. 658 Since tuple-index expressions are a form of member-access expression, it is possible to use tuple-index expressions in conjunction with member tuple expressions to manually restructure a tuple (\eg rearrange, drop, and duplicate components). 659 \lstDeleteShortInline@% 660 \par\smallskip 661 \begin{tabular}{@{}l@{\hspace{\parindent}}|@{\hspace{\parindent}}l@{}} 602 662 \begin{lstlisting} 603 663 [int, int, long, double] x; 604 void f(double, long); 605 606 f(x.[0, 3]); // f(x.0, x.3) 607 x.[0, 1] = x.[1, 0]; // [x.0, x.1] = [x.1, x.0] 608 [long, int, long] y = x.[2, 0, 2]; 609 \end{lstlisting} 610 611 It is possible for a member tuple expression to contain other member access expressions: 664 void f( double, long ); 665 666 \end{lstlisting} 667 & 668 \begin{lstlisting} 669 x.[0, 1] = x.[1, 0]; $\C[1in]{// rearrange: [x.0, x.1] = [x.1, x.0]}$ 670 f( x.[0, 3] ); $\C{// drop: f(x.0, x.3)}\CRT{}$ 671 [int, int, int] y = x.[2, 0, 2]; // duplicate: [y.0, y.1, y.2] = [x.2, x.0. x.2] 672 \end{lstlisting} 673 \end{tabular} 674 \smallskip\par\noindent 675 \lstMakeShortInline@% 676 It is also possible for a member access to contain other member accesses, \eg: 612 677 \begin{lstlisting} 613 678 struct A { double i; int j; }; … … 616 681 v.[x, y.[i, j], z.k]; 617 682 \end{lstlisting} 618 This expression is equivalent to @[v.x, [v.y.i, v.y.j], v.z.k]@. 
That is, the aggregate expression is effectively distributed across the tuple, which allows simple and easy access to multiple components in an aggregate, without repetition. It is guaranteed that the aggregate expression to the left of the @.@ in a member tuple expression is evaluated exactly once. As such, it is safe to use member tuple expressions on the result of a side-effecting function. 619 620 \subsection{Tuple Assignment} 621 622 In addition to tuple-index expressions, individual components of tuples can be accessed by a \emph{destructuring assignment} which has a tuple expression with lvalue components on its left-hand side. More generally, an assignment where the left-hand side of the assignment operator has a tuple type is called \emph{tuple assignment}. There are two kinds of tuple assignment depending on whether the right-hand side of the assignment operator has a tuple type or a non-tuple type, called \emph{multiple assignment} and \emph{mass assignment}, respectively. 623 \begin{lstlisting} 624 int x; 625 double y; 626 [int, double] z; 627 [y, x] = 3.14; // mass assignment 628 [x, y] = z; // multiple assignment 629 z = 10; // mass assignment 630 z = [x, y]; // multiple assignment 631 \end{lstlisting} 632 Let $L_i$ for $i$ in $[0, n)$ represent each component of the flattened left-hand side, $R_i$ represent each component of the flattened right-hand side of a multiple assignment, and $R$ represent the right-hand side of a mass assignment. 633 634 For a multiple assignment to be valid, both tuples must have the same number of elements when flattened. Multiple assignment assigns $R_i$ to $L_i$ for each $i$. 635 That is, @?=?(&$L_i$, $R_i$)@ must be a well-typed expression. In the previous example, @[x, y] = z@, @z@ is flattened into @z.0, z.1@, and the assignments @x = z.0@ and @y = z.1@ are executed. 636 637 A mass assignment assigns the value $R$ to each $L_i$. For a mass assignment to be valid, @?=?(&$L_i$, $R$)@ must be a well-typed expression. This rule differs from C cascading assignment (\eg @a=b=c@) in that conversions are applied to $R$ in each individual assignment, which prevents data loss from the chain of conversions that can happen during a cascading assignment. For example, @[y, x] = 3.14@ performs the assignments @y = 3.14@ and @x = 3.14@, which results in the value @3.14@ in @y@ and the value @3@ in @x@. On the other hand, the C cascading assignment @y = x = 3.14@ performs the assignments @x = 3.14@ and @y = x@, which results in the value @3@ in @x@, and as a result the value @3@ in @y@ as well. 638 639 Both kinds of tuple assignment have parallel semantics, such that each value on the left side and right side is evaluated \emph{before} any assignments occur. As a result, it is possible to swap the values in two variables without explicitly creating any temporary variables or calling a function: 640 \begin{lstlisting} 641 int x = 10, y = 20; 642 [x, y] = [y, x]; 643 \end{lstlisting} 644 After executing this code, @x@ has the value @20@ and @y@ has the value @10@. 645 646 Tuple assignment is an expression where the result type is the type of the left-hand side of the assignment, just like all other assignment expressions in C. This definition allows cascading tuple assignment and use of tuple assignment in other expression contexts, an occasionally useful idiom to keep code succinct and reduce repetition. 647 % In \CFA, tuple assignment is an expression where the result type is the type of the left-hand side of the assignment, as in normal assignment. 
That is, a tuple assignment produces the value of the left-hand side after assignment. These semantics allow cascading tuple assignment to work out naturally in any context where a tuple is permitted. These semantics are a change from the original tuple design in {K-W C}~\citep{Till89}, wherein tuple assignment was a statement that allows cascading assignments as a special case. This decision was made in an attempt to fix what was seen as a problem with assignment, wherein it can be used in many different locations, such as in function-call argument position. While permitting assignment as an expression does introduce the potential for subtle complexities, it is impossible to remove assignment expressions from \CFA without affecting backwards compatibility with C. Furthermore, there are situations where permitting assignment as an expression improves readability by keeping code succinct and reducing repetition, and complicating the definition of tuple assignment puts a greater cognitive burden on the user. In another language, tuple assignment as a statement could be reasonable, but it would be inconsistent for tuple assignment to be the only kind of assignment in \CFA that is not an expression. 648 683 684 685 \begin{comment} 649 686 \subsection{Casting} 650 687 … … 686 723 687 724 Note that a cast is not a function call in \CFA, so flattening and structuring conversions do not occur for cast expressions\footnote{User-defined conversions have been considered, but for compatibility with C and the existing use of casts as type ascription, any future design for such conversions would require more precise matching of types than allowed for function arguments and parameters.}. As such, (4) is invalid because the cast target type contains 4 components, while the source type contains only 3. Similarly, (5) is invalid because the cast @([int, int, int])(g().1)@ is invalid. That is, it is invalid to cast @[int, int]@ to @[int, int, int]@. 725 \end{comment} 726 688 727 689 728 \subsection{Polymorphism} 690 729 691 Tuples also integrate with \CFA polymorphism as a special sort of generic type. Due to the implicit flattening and structuring conversions involved in argument passing, @otype@ and @dtype@ parameters are restricted to matching only with non-tuple types. 692 \begin{lstlisting} 693 forall(otype T, dtype U) 694 void f(T x, U * y); 695 730 Tuples also integrate with \CFA polymorphism as a kind of generic type. 731 Due to the implicit flattening and structuring conversions involved in argument passing, @otype@ and @dtype@ parameters are restricted to matching only with non-tuple types, \eg: 732 \begin{lstlisting} 733 forall(otype T, dtype U) void f( T x, U * y ); 696 734 f([5, "hello"]); 697 735 \end{lstlisting} 698 In this example, @[5, "hello"]@ is flattened, so that the argument list appears as @5, "hello"@. The argument matching algorithm binds @T@ to @int@ and @U@ to @const char*@, and calls the function as normal. 699 700 Tuples, however, may contain polymorphic components. For example, a plus operator can be written to add two triples of a type together. 701 \begin{lstlisting} 702 forall(otype T | { T ?+?(T, T); }) 703 [T, T, T] ?+?([T, T, T] x, [T, T, T] y) { 704 return [x.0+y.0, x.1+y.1, x.2+y.2]; 736 where @[5, "hello"]@ is flattened, giving argument list @5, "hello"@, and @T@ binds to @int@ and @U@ binds to @const char*@. 737 Tuples, however, may contain polymorphic components. 738 For example, a plus operator can be written to add two triples together. 
739 \begin{lstlisting} 740 forall(otype T | { T ?+?( T, T ); }) [T, T, T] ?+?( [T, T, T] x, [T, T, T] y ) { 741 return [x.0 + y.0, x.1 + y.1, x.2 + y.2]; 705 742 } 706 743 [int, int, int] x; … … 709 746 \end{lstlisting} 710 747 711 Flattening and restructuring conversions are also applied to tuple types in polymorphic type assertions. Previously in \CFA, it has been assumed that assertion arguments must match the parameter type exactly, modulo polymorphic specialization (\ie no implicit conversions are applied to assertion arguments). In the example below: 712 \begin{lstlisting} 713 int f([int, double], double); 714 forall(otype T, otype U | { T f(T, U, U); }) 715 void g(T, U); 716 g(5, 10.21); 717 \end{lstlisting} 718 If assertion arguments must match exactly, then the call to @g@ cannot be resolved, since the expected type of @f@ is flat, while the only @f@ in scope requires a tuple type. Since tuples are fluid, this requirement reduces the usability of tuples in polymorphic code. To ease this pain point, function parameter and return lists are flattened for the purposes of type unification, which allows the previous example to pass expression resolution. 719 720 This relaxation is made possible by extending the existing thunk generation scheme, as described by \citet{Bilson03}. Now, whenever a candidate's parameter structure does not exactly match the formal parameter's structure, a thunk is generated to specialize calls to the actual function: 721 \begin{lstlisting} 722 int _thunk(int _p0, double _p1, double _p2) { 723 return f([_p0, _p1], _p2); 724 } 725 \end{lstlisting} 726 Essentially, this thunk provides flattening and structuring conversions to inferred functions, improving the compatibility of tuples and polymorphism. These thunks take advantage of GCC C nested functions to produce closures that have the usual function pointer signature. 748 Flattening and restructuring conversions are also applied to tuple types in polymorphic type assertions. 749 \begin{lstlisting} 750 int f( [int, double], double ); 751 forall(otype T, otype U | { T f( T, U, U ); }) 752 void g( T, U ); 753 g( 5, 10.21 ); 754 \end{lstlisting} 755 Hence, function parameter and return lists are flattened for the purposes of type unification allowing the example to pass expression resolution. 756 This relaxation is possible by extending the thunk scheme described by \citet{Bilson03}. 757 Whenever a candidate's parameter structure does not exactly match the formal parameter's structure, a thunk is generated to specialize calls to the actual function: 758 \begin{lstlisting} 759 int _thunk( int _p0, double _p1, double _p2 ) { 760 return f( [_p0, _p1], _p2 ); 761 } 762 \end{lstlisting} 763 so the thunk provides flattening and structuring conversions to inferred functions, improving the compatibility of tuples and polymorphism. 764 These thunks take advantage of GCC C nested-functions to produce closures that have the usual function pointer signature. 765 727 766 728 767 \subsection{Variadic Tuples}