Changeset 0720e049 for doc/proposals
- Timestamp:
- Aug 11, 2017, 10:33:37 AM (8 years ago)
- Branches:
- ADT, aaron-thesis, arm-eh, ast-experimental, cleanup-dtors, deferred_resn, demangler, enum, forall-pointer-decay, jacob/cs343-translation, jenkins-sandbox, master, new-ast, new-ast-unique-expr, new-env, no_list, persistent-indexer, pthread-emulation, qualifiedEnum, resolv-new, with_gc
- Children:
- 54cd58b0
- Parents:
- 3d4b23fa (diff), 59a75cb (diff)
Note: this is a merge changeset, the changes displayed below correspond to the merge itself.
Use the (diff) links above to see all the changes relative to each parent.
- Location:
- doc/proposals
- Files:
- 1 deleted
- 3 edited
Legend:
- Unmodified
- Added
- Removed
doc/proposals/concurrency/text/concurrency.tex
r3d4b23fa r0720e049
% ======================================================================
% ======================================================================
Several tools can be used to solve concurrency challenges. Since many of these challenges appear with the use of mutable shared state, some languages and libraries simply disallow mutable shared state (Erlang~\cite{Erlang}, Haskell~\cite{Haskell}, Akka (Scala)~\cite{Akka}). In these paradigms, interaction among concurrent objects relies on message passing~\cite{Thoth,Harmony,V-Kernel} or other paradigms that closely relate to networking concepts (channels\cit for example). However, in languages that use routine calls as their core abstraction mechanism, these approaches force a clear distinction between concurrent and non-concurrent paradigms (i.e., message passing versus routine call). This distinction in turn means that, in order to be effective, programmers need to learn two sets of design patterns. While this distinction can be hidden away in library code, effective use of the library still has to take both paradigms into account.

Approaches based on shared memory are more closely related to non-concurrent paradigms, since they often rely on basic constructs like routine calls and shared objects. At the lowest level, concurrent paradigms are implemented as atomic operations and locks. Many such mechanisms have been proposed, including semaphores~\cite{Dijkstra68b} and path expressions~\cite{Campbell74}. However, for productivity reasons it is desirable to have a higher-level construct be the core concurrency paradigm~\cite{HPP:Study}.

An approach that is worth mentioning because it is gaining in popularity is transactional memory~\cite{Dice10}[Check citation]. While this approach is even pursued by system languages like \CC\cit, the performance and feature set is currently too restrictive to be the main concurrency paradigm for a general-purpose language, which is why it was rejected as the core paradigm for concurrency in \CFA.

One of the most natural, elegant, and efficient mechanisms for synchronization and communication, especially for shared-memory systems, is the \emph{monitor}. Monitors were first proposed by Brinch Hansen~\cite{Hansen73} and later described and extended by C.A.R.~Hoare~\cite{Hoare74}. Many programming languages---e.g., Concurrent Pascal~\cite{ConcurrentPascal}, Mesa~\cite{Mesa}, Modula~\cite{Modula-2}, Turing~\cite{Turing:old}, Modula-3~\cite{Modula-3}, NeWS~\cite{NeWS}, Emerald~\cite{Emerald}, \uC~\cite{Buhr92a} and Java~\cite{Java}---provide monitors as explicit language constructs. In addition, operating-system kernels and device drivers have a monitor-like structure, although they often use lower-level primitives such as semaphores or locks to simulate monitors. For these reasons, this project proposes monitors as the core concurrency construct.
…
}
\end{cfacode}
The multi-acquisition monitor lock allows a monitor lock to be acquired by both \code{bar} or \code{baz} and acquired again in \code{foo}. In the calls to \code{bar} and \code{baz} the monitors are acquired in opposite order.

However, such use leads to the lock acquiring-order problem. In the example above, the user uses implicit ordering in the case of function \code{foo} but explicit ordering in the case of \code{bar} and \code{baz}. This subtle mistake means that calling these routines concurrently may lead to deadlock and is therefore undefined behavior.
As shown on several occasions\cit, solving this problem requires:
…
\end{tabular}
\end{center}
Notice how the counter is used without any explicit synchronisation and yet supports thread-safe semantics for both reading and writing.

% ======================================================================
…
Depending on the choice of semantics for when monitor locks are acquired, interaction between monitors and \CFA's concept of polymorphism can be complex to support. However, it is shown that entry-point locking solves most of the issues.

First of all, interaction between \code{otype} polymorphism and monitors is impossible since monitors do not support copying. Therefore, the main question is how to support \code{dtype} polymorphism. Since a monitor's main purpose is to ensure mutual exclusion when accessing shared data, this implies that mutual exclusion is only required for routines that do in fact access shared data. However, since \code{dtype} polymorphism always handles incomplete types (by definition), no \code{dtype} polymorphic routine can access shared data, since the data requires knowledge about the type. Therefore, the only concern when combining \code{dtype} polymorphism and monitors is to protect access to routines.

Before looking into complex control flow, it is important to present the difference between the two acquiring options: call-site and entry-point locking, i.e., acquiring the monitors before making a mutex routine call or as the first operation of the mutex routine call. For example:
\begin{center}
…
\end{center}

\Gls{callsite-locking} is inefficient, since any \code{dtype} routine may have to obtain some lock before calling a routine, depending on whether or not the type passed is a monitor. However, with \gls{entry-point-locking} calling a monitor routine becomes exactly the same as calling it from anywhere else.
Note that the \code{mutex} keyword relies on the resolver, which means that in cases where a generic monitor routine is actually desired, writing a mutex routine is possible with the proper trait. This is possible because monitors are designed in terms of a trait. For example:
\begin{cfacode}
//Incorrect
//T is not a monitor
forall(dtype T)
void foo(T * mutex t);

//Correct
//this function only works on monitors
//(any monitor)
forall(dtype T | is_monitor(T))
void bar(T * mutex t);
\end{cfacode}

% ======================================================================
…
In addition to mutual exclusion, the monitors at the core of \CFA's concurrency can also be used to achieve synchronisation. With monitors, this is generally achieved with internal or external scheduling as in\cit. Since internal scheduling of single monitors is mostly a solved problem, this proposal concentrates on extending internal scheduling to multiple monitors at once. Indeed, like the \gls{group-acquire} semantics, internal scheduling extends to multiple monitors at once in a way that is natural to the user but requires additional complexity on the implementation side.

First, here is a simple example of such a technique:

\begin{cfacode}
…
\end{cfacode}

There are two details to note here. First, \code{signal} is a delayed operation: it only unblocks the waiting thread when the signaller reaches the end of the critical section. This is needed to respect mutual exclusion. Second, in \CFA, a \code{condition} has no particular need to be stored inside a monitor, beyond any software-engineering reasons. Here routine \code{foo} waits for the \code{signal} from \code{bar} before making further progress, effectively ensuring a basic ordering.

An important aspect to take into account here is that \CFA does not allow barging, which means that once function \code{bar} releases the monitor, \code{foo} is guaranteed to resume immediately after (unless some other thread waited on the same condition). This guarantee offers the benefit of not having to loop around waits in order to check that a condition is still met. The main reason \CFA offers this guarantee is that users can easily introduce barging if it becomes a necessity, but adding barging prevention or barging avoidance is more involved without language support. Supporting barging prevention as well as extending internal scheduling to multiple monitors is the main source of complexity in the design of \CFA concurrency.
…
\end{pseudo}
\end{multicols}
The example shows the simple case of having two threads (one for each column) and a single monitor A. One thread acquires before waiting (atomically blocking and releasing A) and the other acquires before signalling.
There is an important thing to note in this pseudo-code: both \code{wait} and \code{signal} must be called with the proper monitor(s) already acquired. This restriction is hidden on the user side in \uC, as it is a logical requirement for barging prevention.

A direct extension of the previous example is the \gls{group-acquire} version:
…
\end{pseudo}
\end{multicols}
This version uses \gls{group-acquire} (denoted using the \& symbol), but the presence of multiple monitors does not add a particularly new meaning. Synchronization happens between the two threads in exactly the same way and order. The only difference is that mutual exclusion covers more monitors. On the implementation side, handling multiple monitors does add a degree of complexity as the next few examples demonstrate.
…
\end{center}

It is particularly important to pay attention to code sections 8 and 3, which are where the existing semantics of internal scheduling need to be extended for multiple monitors. The root of the problem is that \gls{group-acquire} is used in a context where one of the monitors is already acquired, which is why it is important to define the behaviour of the previous pseudo-code. When the signaller thread reaches the location where it should ``release A \& B'' (line 16), it must actually transfer ownership of monitor B to the waiting thread. This ownership transfer is required in order to prevent barging. Since the signalling thread still needs monitor A, simply waking up the waiting thread is not an option because it would violate mutual exclusion. There are three options:

\subsubsection{Delaying signals}
The first and most obvious solution to the problem of multi-monitor scheduling is to keep ownership of all locks until the last lock is ready to be transferred. It can be argued that that moment, when the last lock is no longer needed, is the correct time to transfer ownership because this semantics fits most closely the behaviour of single-monitor scheduling. This solution has the main benefit of transferring ownership of groups of monitors, which simplifies the semantics from multiple objects to a single group of objects, effectively making the existing single-monitor semantics viable by simply changing monitors to monitor collections.
\begin{multicols}{2}
Waiter
…
\end{pseudo}
\end{multicols}
However, this solution can become much more complicated depending on what is executed while secretly holding B (at line 10). Indeed, nothing prevents a user from signalling monitor A on a different condition variable:
\newpage
\begin{multicols}{2}
…
\end{multicols}

The goal in this solution is to avoid the need to transfer ownership of a subset of the condition monitors. However, this goal is unreachable in the previous example. Depending on the order of signals (lines 12 and 15) two cases can happen.

\paragraph{Case 1: thread 1 goes first.} In this case, the problem is that monitor A needs to be passed to thread 2 when thread 1 is done with it.
\paragraph{Case 2: thread 2 goes first.} In this case, the problem is that monitor B needs to be passed to thread 1, which can be done directly or using thread 2 as an intermediate.
\\

Note that the ordering is not determined by a race condition but by whether signalled threads are enqueued in FIFO or FILO order. However, regardless of the answer, users can move line 15 before line 11 and get the reverse effect.

In both cases, however, the threads need to be able to distinguish, on a per-monitor basis, which monitors need to be released and which need to be transferred, which means monitors cannot be handled as a single homogeneous group.

\subsubsection{Dependency graphs}
In the Listing 1 pseudo-code, there is a solution which satisfies both barging prevention and mutual exclusion. If ownership of both monitors is transferred to the waiter when the signaller releases A, and the waiter then transfers back ownership of A when it releases it, the problem is solved. Dynamically finding the correct order is therefore the second possible solution.
The problem it encounters is that it effectively boils down to resolving a dependency graph of ownership requirements. Here even the simplest of code snippets requires two transfers, and the number seems to grow in a manner closer to polynomial. For example, the following code, which is just a direct extension to three monitors, requires at least three ownership transfers and has multiple solutions:

\begin{multicols}{2}
…
\subsubsection{Partial signalling}
Finally, the solution that is chosen for \CFA is to use partial signalling. Consider the following case:

\begin{multicols}{2}
…
\end{pseudo}
\end{multicols}
The partial-signalling solution transfers ownership of monitor B at line 10 but does not wake the waiting thread, since it is still using monitor A. Only when it reaches line 11 does it actually wake up the waiting thread. This solution has the benefit that complexity is encapsulated into only two actions: passing monitors to the next owner when they should be released, and conditionally waking threads if all conditions are met. Contrary to the other solutions, this solution quickly hits an upper bound on complexity of implementation.
% ======================================================================
…
% ======================================================================
% ======================================================================
An important note is that, until now, signalling a monitor was a delayed operation. The ownership of the monitor is transferred only when the monitor would have otherwise been released, not at the point of the \code{signal} statement. However, in some cases, it may be more convenient for users to immediately transfer ownership to the thread that is waiting for cooperation, which is achieved using the \code{signal_block} routine\footnote{name to be discussed}.

Here is an example highlighting the difference in behaviour:
…
	bool inUse;
public:
	void P() {
		if(inUse) wait(c);
		inUse = true;
	}
	void V() {
		inUse = false;
		signal(c);
	}
}
…
	bool inUse;
public:
	void P() {
		if(inUse) _Accept(V);
		inUse = true;
	}
	void g() {
		inUse = false;
…
doc/proposals/concurrency/version
r3d4b23fa r0720e049
1: 0.9.119 → 0.9.122
doc/proposals/virtual.txt
r3d4b23fa r0720e049
Proposal for virtual functionality

There are two types of virtual inheritance in this proposal, relaxed
(implicit) and strict (explicit). Relaxed is the simpler case that uses the
existing trait system with the addition of trait references and vtables.
Strict adds some constraints and requires some additional notation but allows
for down-casting.

Relaxed Virtual Inheritance:

Imagine the following code :
…
void draw(line*);

While all the members of this simple UI support drawing, creating a UI that
easily supports both these UIs requires some tedious boiler-plate code:

enum type_t { text, line };
…
}

While this code will work as implemented, adding any new widgets or any new
widget behaviors requires changing existing code to add the desired
functionality. To ease the maintenance effort required, CFA introduces the
concept of trait references.

Using trait references to implement the above gives the following :

trait drawable objects[10];
fill_objects(objects);

while(running) {
	for(drawable object : objects) {
		draw(object);
	}
}
The keyword trait is optional (by the same rules as the struct keyword). This
is not currently supported in CFA, and the lookup is not possible to implement
statically. Therefore we need to add a new feature to handle having dynamic
lookups like this.

What we really want to do is express the fact that calling draw() on a trait
reference should find the underlying type of the given parameter and find how
it implements the routine, as in the example with the enumeration and union.

For instance, specifying that the drawable trait reference looks up the type
of the first argument to find the implementation would be :

trait drawable(otype T) {
…
};
This could be implied in simple cases like this one (a single parameter on the trait and a single generic parameter on the function). In more complex cases it would have to be given explicitly, or a strong convention would have to be enforced (e.g., the implementation of trait functions is always drawn from the first polymorphic parameter).

Once a function in a trait has been marked as virtual, it defines a new function that takes in that trait's reference and then dynamically calls the underlying type's implementation. Hence a trait reference becomes a kind of abstract type: it cannot be directly instantiated, but it can still be used.

One of the limitations of this design is that it does not support double dispatch, which concretely means traits cannot have routines with more than one virtual parameter. The program must have a single table to look the function up in. Using trait references with traits that have more than one parameter is also restricted, and initially forbidden; see the extension below.

Extension: Multi-parameter Virtual Traits:

This implementation can be extended to traits with multiple parameters if one parameter is called out as being the virtual one. For example:

trait iterator(otype T, dtype Item) {
    Maybe(Item) next(virtual T *);
}

iterator(int) generators[10];

This creates a collection of iterators that produce integers, regardless of how those iterators are implemented. This may require a note that this trait is virtual on T and not Item, but noting it on the functions may be enough.


Strict Virtual Inheritance:

One powerful feature relaxed virtual does not support is the idea of down-casting. Once something has been converted into a trait reference, there is very little we can do to recover any of the type information; only the trait's required function implementations are kept.
To allow down-casting, strict virtual requires that all traits and structures involved be organized into a tree. Each trait or struct must have a unique position in this tree (no multiple inheritance).

This is declared as follows:

trait error(otype T) virtual {
    const char * msg(T *);
}

trait io_error(otype T) virtual error {
    FILE * src(T *);
}

struct eof_error virtual io_error {
    FILE * fd;
};

So the trait error is the head of a new tree and io_error is a child of it.

The parent trait is also implicitly part of the assertions of its children, so all children implement the same operations as the parent. By the unique path down the tree, we can also uniquely order them so that a prefix of a child's vtable has the same format as its parent's.

This gives us an important extra feature: runtime checking of the parent-child relationship, with an operation like C++'s dynamic_cast. This allows checked conversions from trait references to more particular references, which succeed if the underlying type is, or is a child of, the new trait type.

Extension: Multiple Parents

Although each trait/struct must have a unique position in each tree, it could have positions in multiple trees. All this requires is the ability to give multiple parents, as here:

trait region(otype T) virtual drawable, collider;

The restriction is that the parents must come from different trees. This object (and all of its children) can be cast to either tree. This is handled by generating a separate vtable for each tree the structure is in.

Extension: Multi-parameter Strict Virtual

If a trait has multiple parameters, then one must be called out as the one we generate separate vtables for, as in:

trait example(otype T, otype U) virtual(T) ...
This can generate a separate vtable for each U for which all the T+U implementations are provided. These are then separate nodes in the tree (or the roots of different trees), as if each was created individually. Providing a single unique instance of these nodes would be the most difficult aspect of this extension, possibly intractable, though with sufficient hoisting and link-once duplication it may be possible.

Example:

trait argument(otype T) virtual {
    char short_name(virtual T *);
    bool is_set(virtual T *);
};

trait value_argument(otype T, otype U) virtual(T) argument {
    U get_value(virtual T *);
};

Extension: Structural Inheritance

Currently traits must be the internal nodes and structs the leaf nodes. Structs could be made internal nodes as well, in which case the child structs would likely structurally inherit the fields of their parents.


Storing the Virtual Lookup Table (vtable):

We have so far been silent on how the vtable is created, stored and accessed.

Creation happens at compile time. Function pointers are found by using the same best-match rules as elsewhere (additional rules for defaults from the parent may or may not be required). For strict virtual this must happen at global scope, forbidding static functions, to ensure that a single unique vtable is created. Similarly, there may have to be stricter matching rules for the functions that go into the vtable, possibly requiring an exact match. Relaxed virtual could relax both restrictions, if we allow different vtables at different conversion (struct to trait reference) sites. If that is allowed, local functions being bound to a vtable could cause issues when they go out of scope; however, this should follow the lifetime rules most C programs already follow implicitly.
Most vtables should be stored statically; the only exceptions are some of the relaxed vtables that could contain local function pointers. These may be able to be stack allocated. All vtables should be immutable and require no manual cleanup.

Access has two main options:

The first is through the use of fat pointers, a tuple of pointers. When the object is converted to a trait reference, the pointers to its vtables are stored alongside it.

This allows for compatibility with existing structures (such as those imported from C) and is the default storage method unless a different one is given.

The other is inlining the vtable pointer as an "intrusive vtable". This adds a field to the structure that points to the vtable. The trait reference then has a single pointer to this field; the vtable includes an offset to find the beginning of the structure again.

This is used if you specify a vtable field in the structure. If given in the trait, the vtable pointer in the trait reference can then become a single pointer to the vtable field, which is used to recover the original object pointer as well as to retrieve all operations.

trait drawable(otype T) {
    vtable drawable;
};

struct line {
    vtable drawable;
    vec2 start;
    vec2 end;
};

This inlining allows trait references to be converted to plain pointers (although they must still be called specially). The vtable field may just be an opaque block of memory, or it may allow user access to the vtable. If so, there should be some way to retrieve the type of the vtable, which will be autogenerated and often unique.


Keyword Usage:

It may be desirable to add fewer new keywords than discussed in this proposal. It is possible that "virtual" could replace "vtable" above with unambiguous contextual meaning. However, for purposes of clarity in the design discussion it is beneficial to keep the keywords for separate concepts distinct.


Trait References and Operations:

sizeof(drawable) will return the size of the trait object itself. However:

line a_line;
drawable widget = a_line;
sizeof(widget);

will instead return the size of the underlying object, although the trait must require that its implementation is sized for there to be a meaningful value to return. You may also get the size of the trait reference with:

sizeof(&widget);

Calling free on a trait reference will free the memory for the object. It will leave the vtables alone, as those are (always?) statically allocated.