source: doc/working/resolver_design.md @ 3d1e617

# Thoughts on Resolver Design #
2
3## Conversions ##
4C's implicit "usual arithmetic conversions" define a structure among the
5built-in types consisting of _unsafe_ narrowing conversions and a hierarchy of
6_safe_ widening conversions.
7There is also a set of _explicit_ conversions that are only allowed through a
8cast expression.
9Based on Glen's notes on conversions [1], I propose that safe and unsafe
10conversions be expressed as constructor variants, though I make explicit
11(cast) conversions a constructor variant as well rather than a dedicated
12operator.
13Throughout this article, I will use the following operator names for
14constructors and conversion functions from `From` to `To`:
15
16        void ?{} ( To*, To );            // copy constructor
17        void ?{} ( To*, From );          // explicit constructor
18        void ?{explicit} ( To*, From );  // explicit cast conversion
19        void ?{safe} ( To*, From );      // implicit safe conversion
20        void ?{unsafe} ( To*, From );    // implicit unsafe conversion
21
22[1] http://plg.uwaterloo.ca/~cforall/Conversions/index.html
23
24Glen's design made no distinction between constructors and unsafe implicit
25conversions; this is elegant, but interacts poorly with tuples.
26Essentially, without making this distinction, a constructor like the following
27would add an interpretation of any two `int`s as a `Coord`, needlessly
28multiplying the space of possible interpretations of all functions:
29
30        void ?{}( Coord *this, int x, int y );
31
32That said, it would certainly be possible to make a multiple-argument implicit
33conversion, as below, though the argument above suggests this option should be
34used infrequently:
35
36        void ?{unsafe}( Coord *this, int x, int y );
37
38An alternate possibility would be to only count two-arg constructors
39`void ?{} ( To*, From )` as unsafe conversions; under this semantics, safe and 
40explicit conversions should also have a compiler-enforced restriction to
41ensure that they are two-arg functions (this restriction may be valuable
42regardless).
43
44### Constructor Idiom ###
45Basing our notion of conversions off otherwise normal Cforall functions means
46that we can use the full range of Cforall features for conversions, including
47polymorphism.
48Glen [1] defines a _constructor idiom_ that can be used to create chains of
49safe conversions without duplicating code; given a type `Safe` which members
50of another type `From` can be directly converted to, the constructor idiom
51allows us to write a conversion for any type `To` which `Safe` converts to:
52
53        forall(otype To | { void ?{safe}( To*, Safe ) })
54        void ?{safe}( To *this, From that ) {
55                Safe tmp = /* some expression involving that */;
56                *this = tmp; // uses assertion parameter
57        }
58
59This idiom can also be used with only minor variations for a parallel set of
60unsafe conversions.
61
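For instance, a sketch of the parallel unsafe chain, under the same assumptions
about the `From` and `Safe` types as the safe version above, might look like:

        forall(otype To | { void ?{unsafe}( To*, Safe ) })
        void ?{unsafe}( To *this, From that ) {
                Safe tmp = /* some expression involving that */;
                *this = tmp; // uses assertion parameter
        }
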
62What selective non-use of the constructor idiom gives us is the ability to
63define a conversion that may only be the *last* conversion in a chain of such.
64Constructing a conversion graph able to unambiguously represent the full
65hierarchy of implicit conversions in C is provably impossible using only
66single-step conversions with no additional information (see Appendix B), but
67this mechanism is sufficiently powerful (see [1], though the design there has
68some minor bugs; the general idea is to use the constructor idiom to define
69two chains of conversions, one among the signed integral types, another among
70the unsigned, and to use monomorphic conversions to allow conversions between
71signed and unsigned integer types).   
72
73### Implementation Details ###
74It is desirable to have a system which can be efficiently implemented, yet
75also to have one which has sufficient power to distinguish between functions
76on all possible axes of polymorphism.
77This ordering may be a partial order, which may complicate implementation
78somewhat; in this case it may be desirable to store the set of implementations
79for a given function as the directed acyclic graph (DAG) representing the
80order.
81
82## Conversion Costs ##
83Each possible resolution of an expression has a _cost_ consisting of four
84integer components: _unsafe_ conversion cost, _polymorphic_ specialization
85cost, _safe_ conversion cost, and a _count_ of conversions.
86These components form a lexically-ordered tuple which can be summed
87element-wise; summation starts at `(0, 0, 0, 0)`.
88
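For example, a resolution using one unsafe conversion `(1, 0, 0, 1)` and one
safe conversion `(0, 0, 1, 1)` has total cost `(1, 0, 1, 2)`; because the
components are compared lexically from the left, this total is more expensive
than any resolution using only safe conversions, no matter how many.
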
89**TODO** Costs of T, T*, lvalue T, rvalue T conversions (if applicable)
90
91### Lvalue Conversions ###
92**TODO** Finish me
93
94#### NOTES
95* C standard 6.3.2.1
96* pointer_like_generators.md
97
98### Qualifier Conversions ###
99**TODO** Finish me
100
101#### NOTES
102C standard 6.3.2.3.2: We can add any qualifier to the pointed-to-type of a
103pointer.
104* Glen thinks this means that we should make the default assignment operator
105  `?=?(T volatile restrict *this, T that)`, but I'm not sure I like the
106  implications for the actual implementation of forcing `this` to be
107  volatile
108* I want to consider whether this property should generalize to other
109  parameterized types (e.g. `lvalue T`, `box(T)`)
110
111C standard 6.3.2.1.1: "modifiable lvalues" recursively exclude structs with
112const-qualified fields
113
114C standard 6.3.2.1.2: Using lvalues as rvalues implicitly strips qualifiers
115
116C standard 6.2.4.26:
117
118C standard 6.7.3: **TODO**
119
120### Conversion Operator Costs ###
121Copy constructors, safe conversions, and unsafe conversions all have an
122associated conversion cost, calculated according to the algorithm below:
123
1. Monomorphic copy constructors have a conversion cost of `(0, 0, 0, 0)`
2. Monomorphic safe conversions have a conversion cost of `(0, 0, 1, 1)`
3. Monomorphic unsafe conversions have a conversion cost of `(1, 0, 0, 1)`
4. Polymorphic conversion operators (or copy constructors) have a conversion
   cost of `(0, 1, 0, 1)` plus the conversion cost of their monomorphic
   equivalent and the sum of the conversion costs of all conversion operators
   passed as assertion parameters, but where the fourth "count" element of the
   cost tuple is fixed to `1`.
132
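As a worked example, assuming the chained constructor idiom from earlier and
assuming its assertion is satisfied by a monomorphic safe conversion: the
polymorphic safe conversion would cost `(0, 1, 0, 1)` plus `(0, 0, 1, 1)` for
its monomorphic equivalent plus `(0, 0, 1, 1)` for the assertion parameter,
giving `(0, 1, 2, 1)` once the final "count" element is fixed to `1`.
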
133**TODO** Polymorphism cost may need to be reconsidered in the light of the
134thoughts on polymorphism below.
135**TODO** You basically just want path-length in the conversion graph implied
136by the set of conversions; the only tricky question is whether or not you can
137account for "mixed" safe and unsafe conversions used to satisfy polymorphic
138constraints, whether a polymorphic conversion should cost more than a
139monomorphic one, and whether to account for non-conversion constraints in the
140polymorphism cost
141
142### Argument-Parameter Matching ###
Given a function `f` with a parameter list (after tuple flattening)
144`(T1 t1, T2 t2, ... Tn tn)`, and a function application
145`f(<e1>, <e2>, ... <em>)`, the cost of matching each argument to the
146appropriate parameter is calculated according to the algorithm below:
147
148Given a parameter `t` of type `T` and an expression `<e>` from these lists,
149`<e>` will have a set of interpretations of types `E1, E2, ... Ek` with
150associated costs `(u1, p1, s1, c1), (u2, p2, s2, c2), ... (uk, pk, sk, ck)`.
151(If any `Ei` is a tuple type, replace it with its first flattened element for
152the purposes of this section.)
153
The cost of matching the interpretation of `<e>` with type `Ei` to `t` with
155type `T` is the sum of the interpretation cost `(ui, pi, si, ci)` and the
156conversion operator cost from `Ei` to `T`.
157
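For example, assuming the usual monomorphic safe conversion from `char` to
`int` is declared, matching the argument `'x'` to a parameter of type `int`
costs the interpretation cost of `'x'` as `char`, `(0, 0, 0, 0)`, plus the
safe conversion cost `(0, 0, 1, 1)`, for a total of `(0, 0, 1, 1)`.
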
158### Object Initialization ###
159The cost to initialize an object is calculated very similarly to
160argument-parameter matching, with a few modifications.
161Firstly, explicit constructors are included in the set of available
162conversions, with conversion cost `(0, 0, 0, 1)` plus associated polymorphic
163conversion costs (if applicable) and the _interpretation cost_ of the
164constructor, the sum of the argument-parameter matching costs for its
165parameters.
166Also, ties in overall cost (interpretation cost plus conversion cost) are
167broken by lowest conversion cost (i.e. of alternatives with the same overall
168cost, copy constructors are preferred to other explicit constructors,
169explicit constructors are preferred to safe conversions, which are preferred
170to unsafe conversions).
171An object initialization is properly typed if it has exactly one min-cost
172interpretation.
173
174### Explicit Casts ###
175Explicit casts are handled similarly to object initialization.
176Copy constructors and other explicit constructors are not included in the set
177of possible conversions, though interpreting a cast as type ascription
178(`(T)e`, meaning the interpretation of `e` as type `T`) has conversion cost
179`(0, 0, 0, 0)`.
180Explicit conversion operators are also included in the set of possible
181conversions, with cost `(0, 0, 0, 1)` plus whatever polymorphic conversion
182costs are invoked.
183Unlike for explicit constructors and other functions, implicit conversions are
184never applied to the argument or return type of an explicit cast operator, so
185that the cast may be used more effectively as a method for the user programmer
186to guide type resolution.
187
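As a small illustration (the declarations here are hypothetical, and rely on
the variable-overloading rules described under Expression Costs below), a cast
on an overloaded variable can act as pure ascription:

        int x = 42;
        double x = 3.14;

        (int)x;  // type ascription: selects the int interpretation of x
                 // at conversion cost (0, 0, 0, 0)
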
188## Trait Satisfaction ##
189A _trait_ consists of a list of _type variables_ along with a (possibly empty)
190set of _assertions_ on those variables.
191Assertions can take two forms, _variable assertions_ and the more common
192_function assertions_, as in the following example:
193
194        trait a_trait(otype T, otype S) {
195                T a_variable_assertion;
196                S* another_variable_assertion;
197                S a_function_assertion( T* );
198        };
199
200Variable assertions enforce that a variable with the given name and type
201exists (the type is generally one of the type variables, or derived from one),
202while a function assertion enforces that a function with a
203_compatible signature_ to the provided function exists.
204
To test whether some list of types _satisfies_ the trait, the types are first _bound_
206to the type variables, and then declarations to satisfy each assertion are
207sought out.
208Variable assertions require an exact match, because they are passed as object
209pointers, and there is no mechanism to employ conversion functions, while
210function assertions only require a function that can be wrapped to a
211compatible type; for example, the declarations below satisfy
212`a_trait(int, short)`:
213
214        int a_variable_assertion;
215        short* another_variable_assertion;
216        char a_function_assertion( void* );
217        // int* may be implicitly converted to void*, and char to short, so the
218        // above works
219
Cforall polymorphic functions have a _constraining trait_, denoted as follows:
221
222        forall(otype A, otype B | some_trait(A, B))
223
The trait may be anonymous, with the same syntax as a trait declaration, and
multiple traits may be unioned together using `|` or `,`.
226
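For example, a constraint list that unions a named trait with an anonymous one
(the `g` assertion here is purely illustrative) could be written:

        forall(otype A, otype B | some_trait(A, B) | { void g(A); })
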
227**TODO** Consider including field assertions in the list of constraint types,
228also associated types and the appropriate matching type assertion.
229
230## Polymorphism Costs ##
231The type resolver should prefer functions that are "less polymorphic" to
232functions that are "more polymorphic".
233Determining how to order functions by degree of polymorphism is somewhat less
234straightforward, though, as there are multiple axes of polymorphism and it is
235not always clear how they compose.
236The natural order for degree of polymorphism is a partial order, and this
237section includes some open questions on whether it is desirable or feasible to
238develop a tie-breaking strategy to impose a total order on the degree of
239polymorphism of functions.
240Helpfully, though, the degree of polymorphism is a property of functions
241rather than function calls, so any complicated graph structure or calculation
242representing a (partial) order over function degree of polymorphism can be
243calculated once and cached.
244
245### Function Parameters ###
246All other things being equal, if a parameter of one function has a concrete
247type and the equivalent parameter of another function has a dynamic type, the
248first function is less polymorphic:
249
250                        void f( int, int );  // (0) least polymorphic
251        forall(otype T) void f( T, int );    // (1a) more polymorphic than (0)
252        forall(otype T) void f( int, T );    // (1b) more polymorphic than (0)
253                                             //      incomparable with (1a)
254        forall(otype T) void f( T, T );      // (2) more polymorphic than (1a/b)
255
256This should extend to parameterized types (pointers and generic types) also:
257
258        forall(otype S) struct box { S val; };
259       
260        forall(otype T) void f( T, T* );       // (3) less polymorphic than (2)
261        forall(otype T) void f( T, T** );      // (4) less polymorphic than (3)
262        forall(otype T) void f( T, box(T) );   // (5) less polymorphic than (2)
263                                               //     incomparable with (3)
264        forall(otype T) void f( T, box(T*) );  // (6) less polymorphic than (5)
265
266Every function in the group above is incomparable with (1a/b), but that's fine
267because an `int` isn't a pointer or a `box`, so the ambiguity shouldn't occur
268much in practice (unless there are safe or unsafe conversions defined between
269the possible argument types).
270
271For degree of polymorphism from arguments, I think we should not distinguish
272between different type parameters, e.g. the following should be considered
273equally polymorphic:
274
275        forall(otype T, otype S) void f( T, T, S );  // (7)
276        forall(otype T, otype S) void f( S, T, T );  // (8)
277
278However parameter lists are compared, parameters of multi-parameter generic
279types should ideally be treated as a recursive case, e.g. in the example
280below, (9) is less polymorphic than (10), which is less polymorphic than (11):
281
282        forall(otype T, otype S) struct pair { T x; S y; };
283       
284                        void f( pair(int, int) );  // (9)
285        forall(otype T) void f( pair(T, int) );    // (10)
286        forall(otype T) void f( pair(T, T) );      // (11)
287
288Parameter comparison could possibly be made somewhat cheaper at loss of some
289precision by representing each parameter as a value from the natural numbers
290plus infinity, where infinity represents a monomorphic parameter and a finite
291number counts how many levels deep the shallowest type variable is, e.g. where
292`T` is a type variable, `int` would have value infinity, `T` would have value
2930, `T*` would have value 1, `box(T)*` would have value 2, etc.
294Under this scheme, higher values represent less polymorphism.
295This makes the partial order on parameters a total order, so that many of the
296incomparable functions above compare equal, though that is perhaps a virtue.
297It also loses the ability to differentiate between some multi-parameter
298generic types, such as the parameters in (10) and (11), which would both be
299valued 1, losing the polymorphism distinction between them.
300
301A variant of the above scheme would be to fix a maximum depth of polymorphic
302type variables (16 seems like a reasonable choice) at which a parameter would
303be considered to be effectively monomorphic, and to subtract the value 
304described above from that maximum, clamping the result to a minimum of 0.
305Under this scheme, assuming a maximum value of 4, `int` has value 0, `T` has
306value 4, `T*` has value 3, `box(T)*` has value 2, and `box(T*)**` has value 0,
307the same as `int`.
308This can be quite succinctly represented, and summed without the presence of a
309single monomorphic parameter pushing the result to infinity, but does lose the
310ability to distinguish between very deeply structured polymorphic types.
311
312### Parameter Lists ###
313A partial order on function parameter lists can be produced by the
314product order of the partial orders on parameters described above.
315In more detail, this means that for two parameter lists with the same arity,
316if any pair of corresponding parameters are incomparable with respect to each
317other, the two parameter lists are incomparable; if in all pairs of
318corresponding parameters one list's parameter is always (less than or) equal
to the other list's parameter, then the first parameter list is (less than or)
320equal to the second parameter list; otherwise the lists are incomparable with
321respect to each other.
322
323How to compare parameter lists of different arity is a somewhat open question.
324A simple, but perhaps somewhat unsatisfying, solution would be just to say
325that such lists are incomparable.
The simplest approach to make them comparable is to say that, given two lists
327`(T1, T2, ... Tn)` and `(S1, S2, ... Sm)`, where `n <= m`, the parameter lists
328can be compared based on their shared prefix of `n` types.
329This approach breaks the transitivity property of the equivalence relation on
330the partial order, though, as seen below:
331
332        forall(otype T) void f( T, int );       // (1a)
333        forall(otype T) void f( T, int, int );  // (12)
334        forall(otype T) void f( T, int, T );    // (13)
335
336By this rule, (1a) is equally polymorphic to both (12) and (13), so by
337transitivity (12) and (13) should also be equally polymorphic, but that is not
338actually the case.
339
340We can fix the rule by saying that `(T1 ... Tn)` can be compared to
341`(S1 ... Sm)` by _extending_ the list of `T`s to `m` types by inserting
342notional monomorphic parameters.
343In this case, (1a) and (12) are equally polymorphic, because (1a) gets
344extended with a monomorphic type that compares equal to (12)'s third `int` 
345parameter, but (1a) is less polymorphic than (13), because its notional
346monomorphic third parameter is less polymorphic than (13)'s `T`.
347Essentially what this rule says is that any parameter list with more
348parameters is no less polymorphic than one with fewer.
349
350We could collapse this parameter list ordering to a succinct total order by
351simply taking the sum of the clamped parameter polymorphism counts, but this
352would again make most incomparable parameter lists compare equal, as well as
353having the potential for some unexpected results based on the (completely
354arbitrary) value chosen for "completely polymorphic".
355For instance, if we set 4 to be the maximum depth of polymorphism (as above),
356the following functions would be equally polymorphic, which is a somewhat
357unexpected result:
358
359        forall(otype T) void g( T, T, T, int );    // 4 + 4 + 4 + 0 = 12
360        forall(otype T) void g( T*, T*, T*, T* );  // 3 + 3 + 3 + 3 = 12
361
362These functions would also be considered equally polymorphic:
363
364        forall(otype T) void g( T, int );    // 4 + 0 = 4;
365        forall(otype T) void g( T**, T** );  // 2 + 2 = 4;
366
367This issue can be mitigated by choosing a larger maximum depth of
368polymorphism, but this scheme does have the distinct disadvantage of either
369specifying the (completely arbitrary) maximum depth as part of the language or
370allowing the compiler to refuse to accept otherwise well-typed deeply-nested
371polymorphic types. 
372
373For purposes of determining polymorphism, the list of return types of a
374function should be treated like another parameter list, and combined with the
375degree of polymorphism from the parameter list in the same way that the
376parameters in the parameter list are combined.
377For instance, in the following, (14) is less polymorphic than (15) which is
378less polymorphic than (16):
379
380        forall(otype T) int f( T );  // (14)
381        forall(otype T) T*  f( T );  // (15)
382        forall(otype T) T   f( T );  // (16)
383
384### Type Variables and Bounds ###
385Degree of polymorphism doesn't solely depend on the parameter lists, though.
386Glen's thesis (4.4.4, p.89) gives an example that shows that it also depends
387on the number of type variables as well:
388
389        forall(otype T)          void f( T, int );  // (1a) polymorphic
390        forall(otype T)          void f( T, T );    // (2)  more polymorphic
391        forall(otype T, otype S) void f( T, S );    // (17) most polymorphic
392
393Clearly the `forall` type parameter list needs to factor into calculation of
394degree of polymorphism as well, as it's the only real differentiation between
395(2) and (17).
396The simplest way to include the type parameter list would be to simply count
397the type variables and say that functions with more type variables are more
398polymorphic.
399
400However, it also seems natural that more-constrained type variables should be
401counted as "less polymorphic" than less-constrained type variables.
402This would allow our resolver to pick more specialized (and presumably more
403efficient) implementations of functions where one exists.
404For example:
405
406        forall(otype T | { void g(T); }) T f( T );  // (18) less polymorphic
407        forall(otype T)                  T f( T );  // (16) more polymorphic
408
409We could account for this by counting the number of unique constraints and
410saying that functions with more constraints are less polymorphic.
411
412That said, we do model the `forall` constraint list as a (possibly anonymous)
413_trait_, and say that each trait is a set of constraints, so we could
414presumably define a partial order over traits based on subset inclusion, and
415use this partial order instead of the weaker count of constraints to order the
416list of type parameters of a function, as below:
417
418        trait has_g(otype T) { void g(T); };
        trait has_h(otype S) { void h(S); };
        trait has_gh(otype R | has_g(R) | has_h(R)) {};
        // has_gh is equivalent to { void g(R); void h(R); }
422       
423        forall(otype T | has_gh(T)) T f( T );  // (19) least polymorphic
424        forall(otype T | has_g(T))  T f( T );  // (18) more polymorphic than (19)
425        forall(otype T | has_h(T))  T f( T );  // (18b) more polymorphic than (19)
426                                               //       incomparable with (18)
427        forall(otype T)             T f( T );  // (16) most polymorphic
428
429The tricky bit with this is figuring out how to compare the constraint
430functions for equality up to type variable renaming; I suspect there's a known
431solution, but don't know what it is (perhaps some sort of unification
432calculation, though I hope there's a more lightweight option).
433We also should be able to take advantage of the programmer-provided trait
434subset information (like the constraint on `has_gh` in the example) to more
435efficiently generate the partial-order graph for traits, which should be able
436to be cached for efficiency.
437
438Combining count of type variables with the (partial) order on the trait
439constraining those variables seems like it should be a fairly straightforward
440product ordering to me - one `forall` qualifier is (less than or) equal to
441another if it has both a (less than or) equal number of type variables and a
442(less than or) equal degree of polymorphism from its constraining trait; the
443two qualifiers are incomparable otherwise.
444If an easier-to-calculate total ordering is desired, it might be acceptable to
445use the number of type variables, with ties broken by number of constraints.
446
447Similarly, to combine the (partial) orders on parameter and return lists with
448the (partial) order on `forall` qualifiers, a product ordering seems like the
449reasonable choice, though if we wanted a total order a reasonable choice would
450be to use whatever method we use to combine parameter costs into parameter
451lists to combine the costs for the parameter and return lists, then break ties
452by the order on the `forall` qualifiers.
453
454## Expression Costs ##
455
456### Variable Expressions ###
457Variables may be overloaded; that is, there may be multiple distinct variables
458with the same name so long as each variable has a distinct type.
459The variable expression `x` has one zero-cost interpretation as type `T` for
460each variable `T x` in scope.
461
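For example, given the following (hypothetical) declarations, the expression
`x` has two zero-cost interpretations, one of type `int` and one of type
`double`:

        int x = 42;
        double x = 3.14;
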
462### Member Selection Expressions ###
463For every interpretation `I` of `e` which has a struct or union type `S`,
464`e.y` has an interpretation of type `T` for each member `T y` of `S`, with the
465same cost as `I`.
466Note that there may be more than one member of `S` with the same name, as per
467Cforall's usual overloading rules.
468The indirect member expression `e->y` is desugared to `(*e).y` and interpreted
469analogously.
470
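For instance, in the following sketch (the declarations are hypothetical, and
use the member overloading described above), `s.x` has one interpretation of
type `int` and one of type `double`, each with the same cost as the
interpretation of `s`:

        struct S { int x; double x; };  // overloaded member name
        S s;
        s.x;  // one int interpretation, one double interpretation
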
471**TODO** Consider allowing `e.y` to be interpreted as `e->y` if no
472interpretations as `e.y` exist.
473
474### Address Expressions ###
475Address expressions `&e` have an interpretation for each interpretation `I` of
476`e` that is an lvalue of type `T`, with the same cost as `I` and type `T*`.
477Lvalues result from variable expressions, member selection expressions, or
478application of functions returning an lvalue-qualified type.
479Note that the dereference operator is overloadable, so the rules for its
480resolution follow those for function application below.
481
482**TODO** Consider allowing conversion-to-lvalue so that, e.g., `&42` spawns a
483new temporary holding `42` and takes its address.
484
485### Boolean-context Expressions ###
486C has a number of "boolean contexts", where expressions are assigned a truth
487value; these include both arguments to the short-circuiting `&&` and `||` 
488operators, as well as the conditional expressions in `if` and `while` 
489statements, the middle expression in `for` statements, and the first argument
490to the `?:` ternary conditional operator.
491In all these contexts, C interprets `0` (which is both an integer and a null
492pointer literal) as false, and all other integer or pointer values as true.
493In this spirit, Cforall allows other types to be considered "truthy" if they
494support the following de-sugaring in a conditional context (see notes on
495interpretation of literal `0` below):
496
497        x => ((int)( x != 0 ))
498
499### Literal Expressions ###
Literal expressions (e.g. `42`, `'c'`, `3.14`, `"Hello, world!"`) have one
501zero-cost interpretation with the same type the expression would have in C,
502with three exceptions:
503
504Character literals like 'x' are typed as `char` in Cforall, not `int` as in C.
505This change breaks very little C code (primarily `sizeof 'x'`; the implicit
506conversion from `int` to `char` and lack of overloading handle most other
507expressions), matches the behaviour of C++, and is more compatible with
508programmer intuition.
509
510The literals `0` and `1` are also treated specially by Cforall, due to their
511potential uses in operator overloading.
512Earlier versions of Cforall allowed `0` and `1` to be variable names, allowing
513multiple interpretations of them according to the existing variable
514overloading rules, with the following declarations in the prelude:
515
516        const int 0, 1;
517        forall ( dtype DT ) const DT * const    0;
518        forall ( ftype FT ) FT * const          0;
519
520This did, however, create some backward-compatibility problems and potential
521performance issues, and works poorly for generic types. To start with, this
522(entirely legal C) code snippet doesn't compile in Cforall:
523
524        if ( 0 ) {}
525
526It desugars to `if ( (int)(0 != 0) ) {}`, and since both `int` and
`forall(dtype DT) DT*` have a `!=` operator which returns `int`, the resolver
cannot choose which `0` variable to take, because they're both exact matches.
529
530The general != computation may also be less efficient than a check for a zero
531value; take the following example of a rational type:
532
533        struct rational { int32_t num, int32_t den };
534        rational 0 = { 0, 1 };
535       
536        int ?!=? (rational a, rational b) {
537                return ((int64_t)a.num)*b.den != ((int64_t)b.num)*a.den;
538        }
539       
540        int not_zero (rational a) { return a.num != 0; }
541
542To check if two rationals are equal we need to do a pair of multiplications to
543normalize them (the casts in the example are to prevent overflow), but to
544check if a rational is non-zero we just need to check its numerator, a more
545efficient operation.
546
547Finally, though polymorphic null-pointer variables can be meaningfully
548defined, most other polymorphic variables cannot be, which makes it difficult
549to make generic types "truthy" using the existing system:
550
551        forall(otype T) struct pair { T x; T y; };
552        forall(otype T | { T 0; }) pair(T) 0 = { 0, 0 };
553
554Now, it seems natural enough to want to define the zero for this pair type as
555a pair of the zero values of its element type (if they're defined).
556The declaration of `pair(T) 0` above is actually illegal though, as there is
557no way to represent the zero values of an infinite number of types in the
558single memory location available for this polymorphic variable - the
559polymorphic null-pointer variables defined in the prelude are legal, but that
560is only because all pointers are the same size and the single zero value is a
561legal value of all pointer types simultaneously; null pointer is, however,
562somewhat unique in this respect.
563
The technical explanation for the problems with polymorphic zero is that `0`
is really an rvalue, not an lvalue - an expression, not an object.
566Drawing from this, the solution we propose is to give `0` a new built-in type,
567`_zero_t` (name open to bikeshedding), and similarly give `1` the new built-in
568type `_unit_t`.
569If the prelude defines != over `_zero_t` this solves the `if ( 0 )` problem,
570because now the unambiguous best interpretation of `0 != 0` is to read them
571both as `_zero_t` (and say that this expression is false).
572Backwards compatibility with C can be served by defining conversions in the
573prelude from `_zero_t` and `_unit_t` to `int` and the appropriate pointer
574types, as below:
575
576        // int 0;
577        forall(otype T | { void ?{safe}(T*, int); }) void ?{safe} (T*, _zero_t);
578        forall(otype T | { void ?{unsafe}(T*, int); }) void ?{unsafe} (T*, _zero_t);
579       
580        // int 1;
581        forall(otype T | { void ?{safe}(T*, int); }) void ?{safe} (T*, _unit_t);
582        forall(otype T | { void ?{unsafe}(T*, int); }) void ?{unsafe} (T*, _unit_t);
583       
584        // forall(dtype DT) const DT* 0;
585        forall(dtype DT) void ?{safe}(const DT**, _zero_t);
586        // forall(ftype FT) FT* 0;
587        forall(ftype FT) void ?{safe}(FT**, _zero_t);
588
589Further, with this change, instead of making `0` and `1` overloadable
590variables, we can instead allow user-defined constructors (or, more flexibly,
591safe conversions) from `_zero_t`, as below:
592
593        // rational 0 = { 0, 1 };
594        void ?{safe} (rational *this, _zero_t) { this->num = 0; this->den = 1; }
595
596Note that we don't need to name the `_zero_t` parameter to this constructor,
597because its only possible value is a literal zero.
598This one line allows `0` to be used anywhere a `rational` is required, as well
599as enabling the same use of rationals in boolean contexts as above (by
interpreting the `0` in the desugaring to be a rational by this conversion).
601Furthermore, while defining a conversion function from literal zero to
602`rational` makes rational a "truthy" type able to be used in a boolean
603context, we can optionally further optimize the truth decision on rationals as
604follows:
605
606        int ?!=? (rational a, _zero_t) { return a.num != 0; }
607
608This comparison function will be chosen in preference to the more general
609rational comparison function for comparisons against literal zero (like in
610boolean contexts) because it doesn't require a conversion on the `0` argument.
Functions of the form `int ?!=? (T, _zero_t)` can actually be used in general
612to make a type `T` truthy without making `0` a value which can convert to that
613type, a capability not available in the current design.
614
615This design also solves the problem of polymorphic zero for generic types, as
616in the following example:
617
618        // ERROR: forall(otype T | { T 0; }) pair(T) 0 = { 0, 0 };
619        forall(otype T | { T 0; }) void ?{safe} (pair(T) *this, _zero_t) {
620                this->x = 0; this->y = 0;
621        }
622
623The polymorphic variable declaration didn't work, but this constructor is
624perfectly legal and has the desired semantics.
625
626Similarly giving literal `1` the special type `_unit_t` allows for more
627concise and consistent specification of the increment and decrement operators,
628using the following de-sugaring:
629
630        ++i => i += 1
631        i++ => (tmp = i, i += 1, tmp)
632        --i => i -= 1
633        i-- => (tmp = i, i -= 1, tmp)
634
635In the examples above, `tmp` is a fresh temporary with its type inferred from
636the return type of `i += 1`.
637Under this proposal, defining a conversion from `_unit_t` to `T` and a
638`lvalue T ?+=? (T*, T)` provides both the pre- and post-increment operators
639for free in a consistent fashion (similarly for -= and the decrement
640operators).
641If a meaningful `1` cannot be defined for a type, both increment operators can
642still be defined with the signature `lvalue T ?+=? (T*, _unit_t)`.
643Similarly, if scalar addition can be performed on a type more efficiently than
644by repeated increment, `lvalue T ?+=? (T*, int)` will not only define the
645addition operator, it will simultaneously define consistent implementations of
646both increment operators (this can also be accomplished by defining a
647conversion from `int` to `T` and an addition operator `lvalue T ?+=?(T*, T)`).
648
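As an illustrative sketch, reusing the `rational` type from the literals
discussion above (the `_unit_t` constructor and `?+=?` operator here are
hypothetical, not part of any existing prelude), a single compound-assignment
operator plus a unit conversion would provide both increment operators:

        // rational 1 = { 1, 1 };
        void ?{safe} (rational *this, _unit_t) { this->num = 1; this->den = 1; }

        lvalue rational ?+=? (rational *a, rational b) {
                a->num = a->num*b.den + b.num*a->den;  // common-denominator addition
                a->den = a->den*b.den;
                return *a;
        }

        rational x = 0;  // uses the ?{safe}(rational*, _zero_t) conversion above
        ++x;             // desugars to x += 1, reading 1 as _unit_t and converting
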
649To allow functions of the form `lvalue T ?+=? (T*, int)` to satisfy "has an
650increment operator" assertions of the form `lvalue T ?+=? (T*, _unit_t)`,
651we also define a non-transitive unsafe conversion from `_Bool` (allowable
652values `0` and `1`) to `_unit_t` (and `_zero_t`) as follows:
653
654        void ?{unsafe} (_unit_t*, _Bool) {}
655
656As a note, the desugaring of post-increment above is possibly even more
657efficient than that of C++ - in C++, the copy to the temporary may be hidden
658in a separately-compiled module where it can't be elided in cases where it is
659not used, whereas this approach for Cforall always gives the compiler the
660opportunity to optimize out the temporary when it is not needed.
661Furthermore, one could imagine a post-increment operator that returned some
type `T2` that was implicitly convertible to `T` but less work than a full
663copy of `T` to create (this seems like an absurdly niche case) - since the
664type of `tmp` is inferred from the return type of `i += 1`, you could set up
665functions with the following signatures to enable an equivalent pattern in
666Cforall:
667
668        lvalue T2 ?+=? (T*, _unit_t); // increment operator returns T2
669        void ?{} (T2*, T);            // initialize T2 from T for use in `tmp = i`
670        void ?{safe} (T*, T2);        // allow T2 to be used as a T when needed to
671                                      // preserve expected semantics of T x = y++;
672
**TODO** Look in C spec for literal type interpretations.
674**TODO** Write up proposal for wider range of literal types, put in appendix
675
676### Initialization and Cast Expressions ###
677An initialization expression `T x = e` has one interpretation for each
interpretation `I` of `e` with type `S` which is convertible to `T`.
679The cost of the interpretation is the cost of `I` plus the conversion cost
680from `S` to `T`.
681A cast expression `(T)e` is interpreted as hoisting initialization of a
682temporary variable `T tmp = e` out of the current expression, then replacing
683`(T)e` by the new temporary `tmp`.
684
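For example, under this interpretation a cast inside a larger expression would
be resolved roughly as follows (a sketch, with the hoisted temporary shown
explicitly):

        double d = 3.14;
        int i = 2 + (int)d;
        // ... is interpreted approximately as:
        int tmp = d;      // hoisted initialization, resolved by the rules above
        int i = 2 + tmp;  // (int)d replaced by the new temporary
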
685### Assignment Expressions ###
686An assignment expression `e = f` desugars to `(?=?(&e, f), e)`, and is then
687interpreted according to the usual rules for function application and comma
688expressions.
689Operator-assignment expressions like `e += f` desugar similarly as
690`(?+=?(&e, f), e)`.
691
692### Function Application Expressions ###
693Every _compatible function_ and satisfying interpretation of its arguments and
polymorphic variable bindings produces one interpretation for the function
695application expression.
696Broadly speaking, the resolution cost of a function application is the sum of
697the cost of the interpretations of all arguments, the cost of all conversions
698to make those argument interpretations match the parameter types, and the
699binding cost of any of the function's polymorphic type parameters.
700
701**TODO** Work out binding cost in more detail.
702**TODO** Address whether "incomparably polymorphic" should be treated as
703"equally polymorphic" and be disambiguated by count of (safe) conversions.
704**TODO** Think about what polymorphic return types mean in terms of late
705binding.
706**TODO** Consider if "f is less polymorphic than g" can mean exactly "f
707specializes g"; if we don't consider the assertion parameters (except perhaps
708by count) and make polymorphic variables bind exactly (rather than after
709implicit conversions) this should actually be pre-computable.
710**TODO** Add "deletable" functions - take Thierry's suggestion that a deleted
711function declaration is costed out by the resolver in the same way that any
712other function declaration is costed; if the deleted declaration is the unique
713min-cost resolution refuse to type the expression, if it is tied for min-cost
714then take the non-deleted alternative, and of two equivalent-cost deleted
715interpretations with the same return type pick one arbitrarily rather than
716producing an ambiguous resolution.
717
718### Sizeof, Alignof & Offsetof Expressions ###
719`sizeof`, `alignof`, and `offsetof` expressions have at most a single
720interpretation, of type `size_t`.
`sizeof` and `alignof` expressions take either a type or an expression as an
argument; if the argument is a type, it must be a complete type which is not a
function type; if an expression, the expression must have a single
interpretation, the type of which conforms to the same rules.
725`offsetof` takes two arguments, a type and a member name; the type must be
726a complete structure or union type, and the second argument must name a member
727of that type.
728
729### Comma Expressions ###
730A comma expression `x, y` resolves `x` as if it had been cast to `void`, and
731then, if there is a unique interpretation `I` of `x`, has one interpretation
732for each interpretation `J` of `y` with the same type as `J` costing the sum
733of the costs of `I` and `J`.
734
735#### Compatible Functions ####
736**TODO** This subsection is very much a work in progress and has multiple open
737design questions.
738
739A _compatible function_ for an application expression is a visible function
740declaration with the same name as the application expression and parameter
741types that can be converted to from the argument types.
742Function pointers and variables of types with the `?()` function call operator
743overloaded may also serve as function declarations for purposes of
744compatibility.
745
746For monomorphic parameters of a function declaration, the declaration is a
747compatible function if there is an argument interpretation that is either an
748exact match, or has a safe or unsafe implicit conversion that can be used to
749reach the parameter type; for example:
750
751        void f(int);
752       
753        f(42);        // compatible; exact match to int type
754        f('x');       // compatible; safe conversion from char => int
755        f(3.14);      // compatible; unsafe conversion from double => int
756        f((void*)0);  // not compatible; no implicit conversion from void* => int
757
758Per Richard[*], function assertion satisfaction involves recursively searching
759for compatible functions, not an exact match on the function types (I don't
760believe the current Cforall resolver handles this properly); to extend the
761previous example:
762
763        forall(otype T | { void f(T); }) void g(T);
764       
765        g(42);    // binds T = int, takes f(int) by exact match
766        g('x');   // binds T = char, takes f(int) by conversion
767        g(3.14);  // binds T = double, takes f(int) by conversion
768
769[*] Bilson, s.2.1.3, p.26-27, "Assertion arguments are found by searching the
770accessible scopes for definitions corresponding to assertion names, and
771choosing the ones whose types correspond *most closely* to the assertion
772types." (emphasis mine)
773
774There are three approaches we could take to binding type variables: type
775variables must bind to argument types exactly, each type variable must bind
776exactly to at least one argument, or type variables may bind to any type which
777all corresponding arguments can implicitly convert to; I'll provide some
778possible motivation for each approach.
779
780There are two main arguments for the more restrictive binding schemes; the
781first is that the built-in implicit conversions in C between `void*` and `T*` 
782for any type `T` can lead to unexpectedly type-unsafe behaviour in a more
783permissive binding scheme, for example:
784
785        forall(dtype T) T* id(T *p) { return p; }
786       
787        int main() {
788                int *p = 0;
789                char *c = id(p);
790        }
791
792This code compiles in CFA today, and it works because the extra function
793wrapper `id` provides a level of indirection that allows the non-chaining
794implicit conversions from `int*` => `void*` and `void*` => `char*` to chain.
795The resolver types the last line with `T` bound to `void` as follows:
796
797        char *c = (char*)id( (void*)p );
798
799It has been suggested that making the implicit conversions to and from `void*` 
800explicit in Cforall code (as in C++) would solve this particular problem, and
801provide enough other type-safety benefits to outweigh the source-compatibility
802break with C; see Appendix D for further details.
803
804The second argument for a more constrained binding scheme is performance;
805trait assertions need to be checked after the type variables are bound, and
806adding more possible values for the type variables should correspond to a
807linear increase in runtime of the resolver per type variable.
808There are 21 built-in arithmetic types in C (ignoring qualifiers), and each of
them is implicitly convertible to any other; if we allow unrestricted binding
810of type variables, a common `int` variable (or literal) used in the position
811of a polymorphic variable parameter would cause a 20x increase in the amount
812of time needed to check trait resolution for that interpretation.
These numbers have yet to be empirically substantiated, but the theory is
814reasonable, and given that much of the impetus for re-writing the resolver is
815due to its poor performance, I think this is a compelling argument.
816
817I would also mention that a constrained binding scheme is practical; the most
818common type of assertion is a function assertion, and, as mentioned above,
those assertions can be satisfied by implicit conversion rather than an exact match.
820Thus, in the example above with `g(T)`, where the assertion is `void f(T)`,
821we first bind `T = int` or `T = char` or `T = double`, then substitute the
822binding into the assertion, yielding assertions of `void f(int)`,
823`void f(char)`, or `void f(double)`, respectively, then attempt to satisfy
824these assertions to complete the binding.
825Though in all three cases, the existing function with signature `void f(int)` 
826satisfies this assertion, the checking work cannot easily be re-used between
827variable bindings, because there may be better or worse matches depending on
828the specifics of the binding.
829
830The main argument for a more flexible binding scheme is that the binding
831abstraction can actually cause a wrapped function call that would work to
832cease to resolve, as below:
833
834        forall(otype T | { T ?+? (T, T) })
835        T add(T x, T y) { return x + y; }
836       
837        int main() {
838                int i, j = 2;
839                short r, s = 3;
840                i = add(j, s);
841                r = add(s, j);
842        }
843
844Now, C's implicit conversions mean that you can call `j + s` or `s + j`, and
845in both cases the short `s` is promoted to `int` to match `j`.
846If, on the other hand, we demand that variables exactly match type variables,
847neither call to `add` will compile, because it is impossible to simultaneously
848bind `T` to both `int` and `short` (C++ has a similar restriction on template
849variable inferencing).
850One alternative that enables this case, while still limiting the possible
851type variable bindings is to say that at least one argument must bind to its
852type parameter exactly.
853In this case, both calls to `add` would have the set `{ T = int, T = short }` 
854for candidate bindings, and would have to check both, as well as checking that
855`short` could convert to `int` or vice-versa.
856
857It is worth noting here that parameterized types generally bind their type
858parameters exactly anyway, so these "restrictive" semantics only restrict a
859small minority of calls; for instance, in the example following, there isn't a
sensible way to type the call to `ptr_add`:
861
862        forall(otype T | { T ?+?(T, T) })
        void ptr_add( T* rtn, T* x, T* y ) {
864                *rtn = *x + *y;
865        }
866       
867        int main() {
868                int i, j = 2;
869                short s = 3;
                ptr_add(&i, &j, &s); // ERROR: &s is not an int*
871        }
872
873I think there is some value in providing library authors with the
874capability to express "these two parameter types must match exactly".
875This can be done without restricting the language's expressivity, as the `add` 
876case above can be made to work under the strictest type variable binding
877semantics with any addition operator in the system by changing its signature
878as follows:
879
880        forall( otype T, otype R, otype S | { R ?+?(T, S); } )
881        R add(T x, S y) { return x + y; }
882
883Now, it is somewhat unfortunate that the most general version here is more
verbose (and thus that the path of least resistance would be more restrictive
885library code); however, the breaking case in the example code above is a bit
886odd anyway - explicitly taking two variables of distinct types and relying on
887C's implicit conversions to do the right thing is somewhat bad form,
888especially where signed/unsigned conversions are concerned.
889I think the more common case for implicit conversions is the following,
890though, where the conversion is used on a literal:
891
892        short x = 40;
893        short y = add(x, 2);
894
895One option to handle just this case would be to make literals implicitly
convertible to match polymorphic type variables, but only literals.
897The example above would actually behave slightly differently than `x + 2` in
898C, though, casting the `2` down to `short` rather than the `x` up to `int`, a
899possible demerit of this scheme.
900
901The other question to ask would be which conversions would be allowed for
902literals; it seems rather odd to allow down-casting `42ull` to `char`, when
903the programmer has explicitly specified by the suffix that it's an unsigned
904long.
905Type interpretations of literals in C are rather complex (see [1]), but one
906reasonable approach would be to say that un-suffixed integer literals could be
interpreted as any type convertible from int, "u" suffixed literals could be
interpreted as any type convertible from "unsigned int" except the signed
909integer types, and "l" or "ll" suffixed literals could only be interpreted as
910`long` or `long long`, respectively (or possibly that the "u" suffix excludes
911the signed types, while the "l" suffix excludes the types smaller than
912`long int`, as in [1]).
913Similarly, unsuffixed floating-point literals could be interpreted as `float`,
914`double` or `long double`, but "f" or "l" suffixed floating-point literals
915could only be interpreted as `float` or `long double`, respectively.
916I would like to specify that character literals can only be interpreted as
917`char`, but the wide-character variants and the C practice of typing character
literals as `int` mean that would likely break code, so character literals
919should be able to take any integer type.
920
921[1] http://en.cppreference.com/w/c/language/integer_constant
922
923With the possible exception of the `add` case above, implicit conversions to
924the function types of assertions can handle most of the expected behaviour
925from C.
926However, implicit conversions cannot be applied to match variable assertions,
927as in the following example:
928
929        forall( otype T | { int ?<?(T, T); T ?+?(T, T); T min; T max; } )
930        T clamp_sum( T x, T y ) {
931                T sum = x + y;
932                if ( sum < min ) return min;
933                if ( max < sum ) return max;
934                return sum;
935        }
936       
937        char min = 'A';
938        double max = 100.0;
939        //int val = clamp_sum( 'X', 3.14 );  // ERROR (1)
940       
        char max = 'Z';
942        char val = clamp_sum( 'X', 3.14 ); // MATCH (2)
943        double val = clamp_sum( 40.9, 19.9 ); // MAYBE (3)
944
945In this example, the call to `clamp_sum` at (1) doesn't compile, because even
946though there are compatible `min` and `max` variables of types `char` and
947`double`, they need to have the same type to match the constraint, and they
948don't.
949The (2) example does compile, but with a behaviour that might be a bit
950unexpected given the "usual arithmetic conversions", in that both values are
951narrowed to `char` to match the `min` and `max` constraints, rather than
952widened to `double` as is usual for mis-matched arguments to +.
953The (3) example is the only case discussed here that would require the most
permissive type binding semantics - here, `T` is bound to `char`, to match the
955constraints, and both the parameters are narrowed from `double` to `char` 
956before the call, which would not be allowed under either of the more
957restrictive binding semantics.
958However, the behaviour here is unexpected to the programmer, because the
return value will be `(double)'A' /* == 65.0 */` due to the conversions,
960rather than `60.8 /* == 40.9 + 19.9 */` as they might expect.
961
962Personally, I think that implicit conversions are not a particularly good
963language design, and that the use-cases for them can be better handled with
964less powerful features (e.g. more versatile rules for typing constant
965expressions).
966However, though we do need implicit conversions in monomorphic code for C
967compatibility, I'm in favour of restricting their usage in polymorphic code,
968both to give programmers some stronger tools to express their intent and to
969shrink the search space for the resolver.
970Of the possible binding semantics I've discussed, I'm in favour of forcing
971polymorphic type variables to bind exactly, though I could be talked into
972allowing literal expressions to have more flexibility in their bindings, or
973possibly loosening "type variables bind exactly" to "type variables bind
974exactly at least once"; I think the unrestricted combination of implicit
conversions and polymorphic type variable binding unnecessarily multiplies the
976space of possible function resolutions, and that the added resolution options
977are mostly unexpected and therefore confusing and not useful to user
978programmers.
979
980## Appendix A: Partial and Total Orders ##
981The `<=` relation on integers is a commonly known _total order_, and
982intuitions based on how it works generally apply well to other total orders.
983Formally, a total order is some binary relation `<=` over a set `S` such that
984for any two members `a` and `b` of `S`, `a <= b` or `b <= a` (if both, `a` and
985`b` must be equal, the _antisymmetry_ property); total orders also have a
986_transitivity_ property, that if `a <= b` and `b <= c`, then `a <= c`.
987If `a` and `b` are distinct elements and `a <= b`, we may write `a < b`.
988 
989A _partial order_ is a generalization of this concept where the `<=` relation
990is not required to be defined over all pairs of elements in `S` (though there
991is a _reflexivity_ requirement that for all `a` in `S`, `a <= a`); in other
992words, it is possible for two elements `a` and `b` of `S` to be
993_incomparable_, unable to be ordered with respect to one another (any `a` and
994`b` for which either `a <= b` or `b <= a` are called _comparable_).
995Antisymmetry and transitivity are also required for a partial order, so all
996total orders are also partial orders by definition.
997One fairly natural partial order is the "subset of" relation over sets from
998the same universe; `{ }` is a subset of both `{ 1 }` and `{ 2 }`, which are
999both subsets of `{ 1, 2 }`, but neither `{ 1 }` nor `{ 2 }` is a subset of the
1000other - they are incomparable under this relation.
1001
1002We can compose two (or more) partial orders to produce a new partial order on
1003tuples drawn from both (or all the) sets.
1004For example, given `a` and `c` from set `S` and `b` and `d` from set `R`,
where `S` and `R` both have partial orders defined on them, we can define an
ordering relation between `(a, b)` and `(c, d)`.
1007One common order is the _lexicographical order_, where `(a, b) <= (c, d)` iff
1008`a < c` or both `a = c` and `b <= d`; this can be thought of as ordering by
1009the first set and "breaking ties" by the second set.
1010Another common order is the _product order_, which can be roughly thought of
1011as "all the components are ordered the same way"; formally `(a, b) <= (c, d)` 
1012iff `a <= c` and `b <= d`.
1013One difference between the lexicographical order and the product order is that
1014in the lexicographical order if both `a` and `c` and `b` and `d` are
1015comparable then `(a, b)` and `(c, d)` will be comparable, while in the product
1016order you can have `a <= c` and `d <= b` (both comparable) which will make
1017`(a, b)` and `(c, d)` incomparable.
1018The product order, on the other hand, has the benefit of not prioritizing one
1019order over the other.
1020
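For example, comparing the integer pairs `(1, 5)` and `(2, 3)` under the usual
order on each component: the lexicographical order gives `(1, 5) <= (2, 3)` 
because `1 < 2`, while under the product order the two pairs are incomparable,
because `1 <= 2` but `3 <= 5`.
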
1021Any partial order has a natural representation as a directed acyclic graph
1022(DAG).
1023Each element `a` of the set becomes a node of the DAG, with an arc pointing to
1024its _covering_ elements, any element `b` such that `a < b` but where there is
1025no `c` such that `a < c` and `c < b`.
1026Intuitively, the covering elements are the "next ones larger", where you can't
1027fit another element between the two.
1028Under this construction, `a < b` is equivalent to "there is a path from `a` to
1029`b` in the DAG", and the lack of cycles in the directed graph is ensured by
1030the antisymmetry property of the partial order.
1031
1032Partial orders can be generalized to _preorders_ by removing the antisymmetry
1033property.
1034In a preorder the relation is generally called `<~`, and it is possible for
two distinct elements `a` and `b` to have `a <~ b` and `b <~ a` - in this case
1036we write `a ~ b`; `a <~ b` and not `a ~ b` is written `a < b`.
1037Preorders may also be represented as directed graphs, but in this case the
1038graph may contain cycles.

## Appendix B: Building a Conversion Graph from Un-annotated Single Steps ##
The short answer is that it's impossible.

The longer answer is that it has to do with what's essentially a diamond
inheritance problem.
In C, `int` converts "safely" to both `unsigned int` and `long`; both of
those convert safely to `unsigned long`, and it's possible to chain the
conversions to convert `int` to `unsigned long`.
There are two constraints here: one is that the `int` to `unsigned long`
conversion needs to cost more than the other two (because the types are not
as intuitively "close"), and the other is that the system needs a way to
choose which path to take to get to the destination type.
Now, a fairly natural solution would be to just say "C knows how to convert
from `int` to `unsigned long`, so we just put in a direct conversion and make
the compiler smart enough to figure out the costs" - given that users can
build an arbitrary graph of conversions, this needs to be handled anyway.
We can define a preorder over the types by saying that `a <~ b` if there
exists a chain of conversions from `a` to `b`.
This preorder corresponds roughly to the more usual type-theoretic concept of
subtyping ("if I can convert `a` to `b`, `a` is a more specific type than
`b`"); however, since this graph is arbitrary, it may contain cycles, so if
there is also a path to convert `b` to `a` the two types are in some sense
equivalently specific.
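
As a concrete picture of the diamond (plain C; the variable names are purely
illustrative), each initialization below is a conversion C performs
implicitly, and `int` reaches `unsigned long` either along one of the two
arms or in a single direct step:

        int           i  = 42;
        unsigned int  ui = i;   // int -> unsigned int
        long          l  = i;   // int -> long
        unsigned long ul = i;   // int -> unsigned long, via either arm or directly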

Now, to compare the cost of two conversion chains `(s, x1, x2, ... xn)` and
`(s, y1, y2, ... ym)`, we have both the length of the chains (`n` versus `m`)
and this conversion preorder over the destination types `xn` and `ym`.
We could define a preorder by taking chain length and breaking ties by the
conversion preorder, but this would lead to unexpected behaviour when closing
diamonds with an arm length longer than 1.
Consider a set of types `A`, `B1`, `B2`, `C` with the arcs `A->B1`, `B1->B2`,
`B2->C`, and `A->C`.
If we are comparing conversions from `A` to both `B2` and `C`, we expect the
conversion to `B2` to be chosen because it's the more specific type under the
conversion preorder, but since its chain length is longer than that of the
conversion to `C`, it loses and `C` is chosen.
However, taking the conversion preorder and breaking ties or ambiguities by
chain length also doesn't work, because of cases like the following example
where the transitivity property is broken and we can't find a global maximum:

        `X->Y1->Y2`, `X->Z1->Z2->Z3->W`, `X->W`

In this set of arcs, if we're comparing conversions from `X` to each of `Y2`,
`Z3` and `W`, converting to `Y2` is cheaper than converting to `Z3`, because
there are no conversions between `Y2` and `Z3`, and `Y2` has the shorter chain
length.
Also, comparing conversions from `X` to `Z3` and to `W`, we find that the
conversion to `Z3` is cheaper, because `Z3 < W` by the conversion preorder,
and so `Z3` is considered to be the nearer type.
By transitivity, then, the conversion from `X` to `Y2` should be cheaper than
the conversion from `X` to `W`, but in this case `Y2` and `W` are incomparable
by the conversion preorder, so the tie is broken by the shorter chain from `X`
to `W` in favour of `W`, contradicting the transitivity property for this
proposed order.
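
Summarizing the three pairwise comparisons above (`~>` here just marks "the
conversion chain from ... to ...", a notation not used elsewhere in this
document):

        X ~> Y2  vs  X ~> Z3 :  Y2 and Z3 incomparable, Y2 wins on chain length (2 < 3)
        X ~> Z3  vs  X ~> W  :  Z3 <~ W, so Z3 wins as the more specific target
        X ~> Y2  vs  X ~> W  :  Y2 and W incomparable, W wins on chain length (1 < 2)

The proposed order thus prefers `Y2` to `Z3` and `Z3` to `W`, yet also prefers
`W` to `Y2`, so it cannot be transitive.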

Without transitivity, we would need to compare all pairs of conversions, which
would be expensive, and possibly not yield a minimal-cost conversion even if
all pairs were comparable.
In short, this ordering is infeasible, and by extension I believe any ordering
composed solely of single-step conversions between types with no further
user-supplied information will be insufficiently powerful to express the
built-in conversions between C's types.

## Appendix C: Proposed Prelude Changes ##
**TODO** Port Glen's "Future Work" page for builtin C conversions.
**TODO** Move discussion of zero_t, unit_t here.

It may be desirable to have some polymorphic wrapper functions in the prelude
which provide consistent default implementations of various operators given a
definition of one of them.
Naturally, users could still provide a monomorphic overload if they wished to
make their own code more efficient than the polymorphic wrapper could be, but
this would minimize user effort in the common case where the user cannot write
a more efficient function, or is willing to trade some runtime efficiency for
developer time.
As an example, consider the following polymorphic defaults for `+` and `+=`:

        forall(otype T | { T ?+?(T, T); })
        lvalue T ?+=? (T *a, T b) {
                return *a = *a + b;
        }

        forall(otype T | { lvalue T ?+=? (T*, T); })
        T ?+? (T a, T b) {
                T tmp = a;
                return tmp += b;
        }

Both of these have a possibly unnecessary copy (the first in copying the
result of `*a + b` back into `*a`, the second copying `a` into `tmp`), but in
cases where these copies are unavoidable the polymorphic wrappers should be
just as performant as the monomorphic equivalents (assuming a compiler
sufficiently clever to inline the extra function call), and allow programmers
to define a set of related operators with maximal concision.
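
As a sketch of how such defaults might be used (the `Fraction` type and its
`?+?` below are hypothetical, not part of any proposed prelude change), a
user could define only the binary operator and pick up the compound
assignment from the polymorphic wrapper:

        struct Fraction { int num, den; };

        struct Fraction ?+?( struct Fraction a, struct Fraction b ) {
                struct Fraction r = { a.num * b.den + b.num * a.den, a.den * b.den };
                return r;
        }

        struct Fraction f = { 1, 2 }, g = { 1, 3 };
        f += g;  // resolves to the polymorphic ?+=? above, with T bound to struct Fraction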

**TODO** Look at what Glen and Richard have already written for this.

## Appendix D: Feasibility of Making void* Conversions Explicit ##
C allows implicit conversions between `void*` and other pointer types, as per
section 6.3.2.3, paragraph 1 of the standard.
Making these implicit conversions explicit in Cforall would provide
significant type-safety benefits, and has precedent in C++.
A weaker version of this proposal would be to allow implicit conversions to
`void*` (as a sort of "top type" for all pointer types), but to make the
unsafe conversion from `void*` back to a concrete pointer type an explicit
conversion.
However, `int *p = malloc( sizeof(int) );` and friends are hugely common in C
code, and rely on the unsafe implicit conversion from the `void*` return type
of `malloc` to the `int*` type of the variable - obviously it would be too
much of a source-compatibility break to disallow this for C code.
We do already need to wrap C code in an `extern "C"` block, though, so it is
technically feasible to make the `void*` conversions implicit in C but
explicit in Cforall.
Also, for calling C code with `void*`-based APIs, pointers-to-dtype are
calling-convention compatible with `void*`; we could read `void*` in function
signatures as essentially a fresh dtype type variable, e.g.:

        void* malloc( size_t )
                => forall(dtype T0) T0* malloc( size_t )
        void qsort( void*, size_t, size_t, int (*)( const void*, const void* ) )
                => forall(dtype T0, dtype T1, dtype T2)
                   void qsort( T0*, size_t, size_t, int (*)( const T1*, const T2* ) )
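
Under this reading (a sketch of the proposed behaviour, not how these calls
resolve today), the usual allocation idiom would type-check without an unsafe
conversion, with the dtype parameter inferred from the declaration being
initialized:

        int * p = malloc( sizeof(int) );        // T0 bound to int
        double * q = malloc( sizeof(double) );  // T0 bound to double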

This would even allow us to leverage some of Cforall's type safety to write
better declarations for legacy C API functions, like the following wrapper for
`qsort`:

        extern "C" { // turns off name-mangling so that this calls the C library
                // call-compatible type-safe qsort signature
                forall(dtype T)
                void qsort( T*, size_t, size_t, int (*)( const T*, const T* ) );

                // forbid type-unsafe C signature from resolving
                void qsort( void*, size_t, size_t, int (*)( const void*, const void* ) )
                        = delete;
        }
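
A sketch of how the wrapper above might be called (the comparator and array
names are illustrative only); with `T` bound to `int`, passing a comparator
over some other element type would now be rejected at resolution time rather
than silently accepted:

        int compare_ints( const int * a, const int * b ) {
                return (*a > *b) - (*a < *b);  // -1, 0, or 1, without overflow
        }

        int xs[] = { 3, 1, 2 };
        qsort( xs, 3, sizeof(int), compare_ints );  // T bound to int throughout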