Changeset 3c6e417
- Timestamp: Jun 20, 2019, 1:45:01 PM (6 years ago)
- Branches: ADT, arm-eh, ast-experimental, enum, forall-pointer-decay, jacob/cs343-translation, jenkins-sandbox, master, new-ast, new-ast-unique-expr, pthread-emulation, qualifiedEnum
- Children: 54dd994
- Parents: 1e5dedc4 (diff), c0f9efe (diff)
Note: this is a merge changeset; the changes displayed below correspond to the merge itself. Use the (diff) links above to see all the changes relative to each parent.
- Files: 30 edited
doc/papers/concurrency/Paper.tex
r1e5dedc4 r3c6e417 311 311 Libraries like pthreads were developed for C, and the Solaris operating-system switched from user (JDK 1.1~\cite{JDK1.1}) to kernel threads. 312 312 As a result, languages like Java, Scala, Objective-C~\cite{obj-c-book}, \CCeleven~\cite{C11}, and C\#~\cite{Csharp} adopt the 1:1 kernel-threading model, with a variety of presentation mechanisms. 313 From 2000 onwards, languages like Go~\cite{Go}, Erlang~\cite{Erlang}, Haskell~\cite{Haskell}, D~\cite{D}, and \uC~\cite{uC++,uC++book} have championed the M:N user-threading model, and many user-threading libraries have appeared~\cite{Qthreads,MPC, BoostThreads}, including putting green threads back into Java~\cite{Quasar}.313 From 2000 onwards, languages like Go~\cite{Go}, Erlang~\cite{Erlang}, Haskell~\cite{Haskell}, D~\cite{D}, and \uC~\cite{uC++,uC++book} have championed the M:N user-threading model, and many user-threading libraries have appeared~\cite{Qthreads,MPC,Marcel}, including putting green threads back into Java~\cite{Quasar}. 314 314 The main argument for user-level threading is that they are lighter weight than kernel threads (locking and context switching do not cross the kernel boundary), so there is less restriction on programming styles that encourage large numbers of threads performing medium work-units to facilitate load balancing by the runtime~\cite{Verch12}. 315 315 As well, user-threading facilitates a simpler concurrency approach using thread objects that leverage sequential patterns versus events with call-backs~\cite{vonBehren03}. … … 327 327 328 328 Finally, it is important for a language to provide safety over performance \emph{as the default}, allowing careful reduction of safety for performance when necessary. 329 Two concurrency violations of this philosophy are \emph{spurious wakeup} and \emph{barging}, i.e., random wakeup~\cite[\S~8]{Buhr05a} and signals-as-hints~\cite[\S~8]{Buhr05a}, where one is a consequence of the other, i.e., once there is spurious wakeup, signals-as-hints follows.329 Two concurrency violations of this philosophy are \emph{spurious wakeup} (random wakeup~\cite[\S~8]{Buhr05a}) and \emph{barging} (signals-as-hints~\cite[\S~8]{Buhr05a}), where one is a consequence of the other, i.e., once there is spurious wakeup, signals-as-hints follows. 330 330 However, spurious wakeup is \emph{not} a foundational concurrency property~\cite[\S~8]{Buhr05a}, it is a performance design choice. 331 331 Similarly, signals-as-hints is often a performance decision. 332 332 We argue removing spurious wakeup and signals-as-hints makes concurrent programming significantly safer because it removes local non-determinism and matches with programmer expectation. 333 (Author sexperience teaching concurrency is that students are highly confused by these semantics.)333 (Author experience teaching concurrency is that students are highly confused by these semantics.) 334 334 Clawing back performance, when local non-determinism is unimportant, should be an option not the default. 335 335 … … 367 367 \section{Stateful Function} 368 368 369 The generator/coroutine provides a stateful function, which is an old idea~\cite{Conway63,Marlin80} that is new again~\cite{C++20Coroutine19}. 370 A stateful function allows execution to be temporarily suspended and later resumed, e.g., plugin, device driver, finite-state machine. 
369 The stateful function is an old idea~\cite{Conway63,Marlin80} that is new again~\cite{C++20Coroutine19}, where execution is temporarily suspended and later resumed, e.g., plugin, device driver, finite-state machine. 371 370 Hence, a stateful function may not end when it returns to its caller, allowing it to be restarted with the data and execution location present at the point of suspension. 372 371 This capability is accomplished by retaining a data/execution \emph{closure} between invocations. … … 543 542 \end{figure} 544 543 545 For generators, coroutines, and threads, many designs are based on function objects or pointers~\cite{Butenhof97, C++14, MS:VisualC++, BoostCoroutines15}.544 Stateful functions appear as generators, coroutines, and threads, where presentations are based on function objects or pointers~\cite{Butenhof97, C++14, MS:VisualC++, BoostCoroutines15}. 546 545 For example, Python presents generators as a function object: 547 546 \begin{python} … … 587 586 588 587 Figure~\ref{f:FibonacciAsymmetricGenerator} shows an unbounded asymmetric generator for an infinite sequence of Fibonacci numbers written in C and \CFA, with a simple C implementation for the \CFA version. 589 This kind ofgenerator is an \emph{output generator}, producing a new result on each resumption.588 This generator is an \emph{output generator}, producing a new result on each resumption. 590 589 To compute Fibonacci, the previous two values in the sequence are retained to generate the next value, \ie @fn1@ and @fn@, plus the execution location where control restarts when the generator is resumed, \ie top or middle. 591 An additional requirement is the ability to create an arbitrary number of generators (of any kind), \ie retaining state in global variables is insufficient;590 An additional requirement is the ability to create an arbitrary number of generators (of any kind), \ie retaining one state in global variables is insufficient; 592 591 hence, state is retained in a closure between calls. 593 592 Figure~\ref{f:CFibonacci} shows the C approach of manually creating the closure in structure @Fib@, and multiple instances of this closure provide multiple Fibonacci generators. 594 The C version only has the middle execution state because the top execution state becomes declaration initialization.593 The C version only has the middle execution state because the top execution state is declaration initialization. 595 594 Figure~\ref{f:CFAFibonacciGen} shows the \CFA approach, which also has a manual closure, but replaces the structure with a custom \CFA @generator@ type. 596 595 This generator type is then connected to a function that \emph{must be named \lstinline|main|},\footnote{ … … 668 667 As well, the data state is small, where variables @byte@ and @msg@ are communication variables for passing in message bytes and returning the message, and variables @lnth@, @crc@, and @sum@ are local variable that must be retained between calls and are manually hoisted into the generator type. 669 668 % Manually, detecting and hoisting local-state variables is easy when the number is small. 670 Finally, the execution state is large, with one @resume@ and seven @suspend@s.669 In contrast, the execution state is large, with one @resume@ and seven @suspend@s. 671 670 Hence, the key benefits of the generator are correctness, safety, and maintenance because the execution states are transcribed directly into the programming language rather than using a table-driven approach. 
672 671 Because FSMs can be complex and occur frequently in important domains, direct support of the generator is crucial in a systems programming-language. … … 780 779 Figure~\ref{f:CFAPingPongGen} shows a symmetric generator, where the generator resumes another generator, forming a resume/resume cycle. 781 780 (The trivial cycle is a generator resuming itself.) 782 This control flow is similar to recursion for functions ,but without stack growth.781 This control flow is similar to recursion for functions but without stack growth. 783 782 The steps for symmetric control-flow are creating, executing, and terminating the cycle. 784 783 Constructing the cycle must deal with definition-before-use to close the cycle, \ie, the first generator must know about the last generator, which is not within scope. … … 789 788 Terminating the cycle is accomplished by @suspend@ or @return@, both of which go back to the stack frame that started the cycle (program main in the example). 790 789 The starting stack-frame is below the last active generator because the resume/resume cycle does not grow the stack. 791 Also, since local variables are not retained in the generator function, it does not contain an objects with destructors that must be called, so the cost is the same as a function return.790 Also, since local variables are not retained in the generator function, it does not contain any objects with destructors that must be called, so the cost is the same as a function return. 792 791 Destructor cost occurs when the generator instance is deallocated, which is easily controlled by the programmer. 793 792 … … 1220 1219 Hence, the starter coroutine is remembered on the first resume and ending the coroutine resumes the starter. 1221 1220 Figure~\ref{f:ProdConsRuntimeStacks} shows this semantic by the dashed lines from the end of the coroutine mains: @prod@ starts @cons@ so @cons@ resumes @prod@ at the end, and the program main starts @prod@ so @prod@ resumes the program main at the end. 1222 For other scenarios, it is always possible to devise a solution with additional programming effort .1221 For other scenarios, it is always possible to devise a solution with additional programming effort, such as forcing the cycle forward (backward) to a safe point before starting termination. 1223 1222 1224 1223 The producer/consumer example does not illustrate the full power of the starter semantics because @cons@ always ends first. … … 1290 1289 The function definitions ensures there is a statically-typed @main@ function that is the starting point (first stack frame) of a coroutine, and a mechanism to get (read) the currently executing coroutine handle. 1291 1290 The @main@ function has no return value or additional parameters because the coroutine type allows an arbitrary number of interface functions with corresponding arbitrary typed input/output values versus fixed ones. 1292 The advantage of this approach is that users can easily create different types of coroutines, \eg changing the memory layout of a coroutine is trivial when implementing the @get_coroutine@ function, and possibly redefining @suspend@and @resume@.1291 The advantage of this approach is that users can easily create different types of coroutines, \eg changing the memory layout of a coroutine is trivial when implementing the @get_coroutine@ function, and possibly redefining \textsf{suspend} and @resume@. 1293 1292 1294 1293 The \CFA custom-type @coroutine@ implicitly implements the getter and forward declarations for the coroutine main. 
… … 1338 1337 Once allocated, a VLS is fixed sized.} 1339 1338 on the allocating stack, provided the allocating stack is large enough. 1340 For a VLS stack allocation , allocation/deallocation is an inexpensive adjustment of the stack point, modulo any stack constructor costs (\eg initial frame setup).1339 For a VLS stack allocation/deallocation is an inexpensive adjustment of the stack pointer, modulo any stack constructor costs (\eg initial frame setup). 1341 1340 For heap stack allocation, allocation/deallocation is an expensive heap allocation (where the heap can be a shared resource), modulo any stack constructor costs. 1342 1341 With heap stack allocation, it is also possible to use a split (segmented) stack calling-convention, available with gcc and clang, so the stack is variable sized. … … 1363 1362 However, coroutines are a stepping stone towards concurrency. 1364 1363 1365 The transition to concurrency, even for a single thread with multiple stacks, occurs when coroutines context switch to a \newterm{scheduling coroutine}, introducing non-determinism from the coroutine perspective~\cite[\S~3,]{Buhr05a} \cite{Adya02}.1364 The transition to concurrency, even for a single thread with multiple stacks, occurs when coroutines context switch to a \newterm{scheduling coroutine}, introducing non-determinism from the coroutine perspective~\cite[\S~3,]{Buhr05a}. 1366 1365 Therefore, a minimal concurrency system requires coroutines \emph{in conjunction with a nondeterministic scheduler}. 1367 The resulting execution system now follows a cooperative threading-model , called \newterm{non-preemptive scheduling}.1366 The resulting execution system now follows a cooperative threading-model~\cite{Adya02,libdill}, called \newterm{non-preemptive scheduling}. 1368 1367 Adding \newterm{preemption} introduces non-cooperative scheduling, where context switching occurs randomly between any two instructions often based on a timer interrupt, called \newterm{preemptive scheduling}. 1369 1368 While a scheduler introduces uncertain execution among explicit context switches, preemption introduces uncertainty by introducing implicit context switches. … … 1424 1423 This semantic ensures a thread is started and stopped exactly once, eliminating some programming error, and scales to multiple threads for basic (termination) synchronization. 1425 1424 For block allocation to arbitrary depth, including recursion, threads are created/destroyed in a lattice structure (tree with top and bottom). 1426 Arbitrary topologies are possible using dynamic allocation, allowing threads to outlive their declaration scope, identical to normal dynamic ally allocating.1425 Arbitrary topologies are possible using dynamic allocation, allowing threads to outlive their declaration scope, identical to normal dynamic allocation. 1427 1426 \begin{cfa} 1428 1427 MyTask * factory( int N ) { ... 
return `anew( N )`; } $\C{// allocate heap-based threads, implicit start after construction}$ … … 1525 1524 \subsection{Mutual Exclusion} 1526 1525 1527 A group of instructions manipulating a specific instance of shared data that must be performed atomically is called a n (individual) \newterm{critical-section}~\cite{Dijkstra65}.1528 The generalization is called a \newterm{group critical-section}~\cite{Joung00}, where multiple tasks with the same session may use the resource simultaneously, but different sessions may not use the resource simultaneously.1526 A group of instructions manipulating a specific instance of shared data that must be performed atomically is called a \newterm{critical section}~\cite{Dijkstra65}, which is enforced by \newterm{simple mutual-exclusion}. 1527 The generalization is called a \newterm{group critical-section}~\cite{Joung00}, where multiple tasks with the same session use the resource simultaneously and different sessions are segregated, which is enforced by \newterm{complex mutual-exclusion} providing the correct kind and number of threads using a group critical-section. 1529 1528 The readers/writer problem~\cite{Courtois71} is an instance of a group critical-section, where readers share a session but writers have a unique session. 1530 \newterm{Mutual exclusion} enforces the correct kind and number of threads using a critical section.1531 1529 1532 1530 However, many solutions exist for mutual exclusion, which vary in terms of performance, flexibility and ease of use. … … 1548 1546 Preventing or detecting barging is an involved challenge with low-level locks, which is made easier through higher-level constructs. 1549 1547 This challenge is often split into two different approaches: barging avoidance and prevention. 1550 Algorithms that unconditionally releasing a lock for competing threads to acquire use barging avoidance during synchronization to force a barging thread to wait .1548 Algorithms that unconditionally releasing a lock for competing threads to acquire use barging avoidance during synchronization to force a barging thread to wait; 1551 1549 algorithms that conditionally hold locks during synchronization, \eg baton-passing~\cite{Andrews89}, prevent barging completely. 1552 1550 … … 1638 1636 For this reason, \CFA requires programmers to identify the kind of parameter with the @mutex@ keyword and uses no keyword to mean \lstinline[morekeywords=nomutex]@nomutex@. 1639 1637 1640 \newpage1641 1638 The next semantic decision is establishing which parameter \emph{types} may be qualified with @mutex@. 1642 1639 The following has monitor parameter types that are composed of multiple objects. … … 1737 1734 1738 1735 Users can still force the acquiring order by using @mutex@/\lstinline[morekeywords=nomutex]@nomutex@. 1739 \newpage1740 1736 \begin{cfa} 1741 1737 void foo( M & mutex m1, M & mutex m2 ); $\C{// acquire m1 and m2}$ … … 1748 1744 \end{cfa} 1749 1745 The bulk-acquire semantics allow @bar@ or @baz@ to acquire a monitor lock and reacquire it in @foo@. 1750 In the calls to @bar@ and @baz@, the monitors are acquiredin opposite order, possibly resulting in deadlock.1746 The calls to @bar@ and @baz@ acquired the monitors in opposite order, possibly resulting in deadlock. 1751 1747 However, this case is the simplest instance of the \emph{nested-monitor problem}~\cite{Lister77}, where monitors are acquired in sequence versus bulk. 
1752 1748 Detecting the nested-monitor problem requires dynamic tracking of monitor calls, and dealing with it requires rollback semantics~\cite{Dice10}. … … 1799 1795 % It is only in this way that a waiting program has an absolute guarantee that it can acquire the resource just released by the signalling program without any danger that a third program will interpose a monitor entry and seize the resource instead.~\cite[p.~550]{Hoare74} 1800 1796 % \end{cquote} 1801 Furthermore, \CFA concurrency has no spurious wakeup~\cite[\S~9]{Buhr05a}, which eliminates an implicit form of barging.1797 Furthermore, \CFA concurrency has no spurious wakeup~\cite[\S~9]{Buhr05a}, which eliminates an implicit form of self barging. 1802 1798 Hence, a \CFA @wait@ statement is not enclosed in a @while@ loop retesting a blocking predicate, which can cause thread starvation due to barging. 1803 1799 1804 Figure~\ref{f:MonitorScheduling} shows internal/external scheduling (for the bounded-buffer example in Figure~\ref{f:InternalExternalScheduling}).1800 Figure~\ref{f:MonitorScheduling} shows general internal/external scheduling (for the bounded-buffer example in Figure~\ref{f:InternalExternalScheduling}). 1805 1801 External calling threads block on the calling queue, if the monitor is occupied, otherwise they enter in FIFO order. 1806 Internal threads block on condition queues via @wait@ and they reenter from the condition in FIFO order .1802 Internal threads block on condition queues via @wait@ and they reenter from the condition in FIFO order, or they block on urgent via @signal_block@ or @waitfor@ and reenter implicit when the monitor becomes empty, \ie, the thread in the monitor exits or waits. 1807 1803 1808 1804 There are three signalling mechanisms to unblock waiting threads to enter the monitor. 1809 Note, signalling cannot have the signaller and signalled thread in the monitor simultaneously because of the mutual exclusion so only can proceed.1805 Note, signalling cannot have the signaller and signalled thread in the monitor simultaneously because of the mutual exclusion so only one can proceed. 1810 1806 For internal scheduling, threads are unblocked from condition queues using @signal@, where the signallee is moved to urgent and the signaller continues (solid line). 1811 1807 Multiple signals move multiple signallees to urgent, until the condition is empty. … … 1820 1816 Executing multiple @waitfor@s from different signalled functions causes the calling threads to move to urgent. 1821 1817 External scheduling requires urgent to be a stack, because the signaller excepts to execute immediately after the specified monitor call has exited or waited. 1822 Internal scheduling behaves the same for an urgent stack or queue, except for signalling multiple threads, where the threads unblock from urgent in reverse order from signalling. 1823 If the restart order is important, multiple signalling by a signal thread can be transformed into shared signalling among threads, where each thread signals the next thread. 1824 Hence, \CFA uses an urgent stack. 1818 Internal scheduling behaves the same for an urgent stack or queue, except for multiple signalling, where the threads unblock from urgent in reverse order from signalling. 1819 If the restart order is important, multiple signalling by a signal thread can be transformed into daisy-chain signalling among threads, where each thread signals the next thread. 
1820 We tried both a stack for @waitfor@ and queue for signalling, but that resulted in complex semantics about which thread enters next. 1821 Hence, \CFA uses a single urgent stack to correctly handle @waitfor@ and adequately support both forms of signalling. 1825 1822 1826 1823 \begin{figure} … … 1840 1837 \end{figure} 1841 1838 1842 Figure~\ref{f:BBInt} shows a \CFA generic bounded-buffer with internal scheduling, where producers/consumers enter the monitor, seethe buffer is full/empty, and block on an appropriate condition variable, @full@/@empty@.1839 Figure~\ref{f:BBInt} shows a \CFA generic bounded-buffer with internal scheduling, where producers/consumers enter the monitor, detect the buffer is full/empty, and block on an appropriate condition variable, @full@/@empty@. 1843 1840 The @wait@ function atomically blocks the calling thread and implicitly releases the monitor lock(s) for all monitors in the function's parameter list. 1844 1841 The appropriate condition variable is signalled to unblock an opposite kind of thread after an element is inserted/removed from the buffer. … … 1964 1961 External scheduling is controlled by the @waitfor@ statement, which atomically blocks the calling thread, releases the monitor lock, and restricts the function calls that can next acquire mutual exclusion. 1965 1962 If the buffer is full, only calls to @remove@ can acquire the buffer, and if the buffer is empty, only calls to @insert@ can acquire the buffer. 1966 Threads making calls to functions that are currently excluded block outside of (external to) the monitor on the calling queue, versus blocking on condition queues inside of (internal to) the monitor.1963 Calls threads to functions that are currently excluded block outside of (external to) the monitor on the calling queue, versus blocking on condition queues inside of (internal to) the monitor. 1967 1964 Figure~\ref{f:RWExt} shows a readers/writer lock written using external scheduling, where a waiting reader detects a writer using the resource and restricts further calls until the writer exits by calling @EndWrite@. 1968 1965 The writer does a similar action for each reader or writer using the resource. 1969 1966 Note, no new calls to @StarRead@/@StartWrite@ may occur when waiting for the call to @EndRead@/@EndWrite@. 1970 External scheduling allows waiting for events from other threads while restricting unrelated events .1967 External scheduling allows waiting for events from other threads while restricting unrelated events, that would otherwise have to wait on conditions in the monitor. 1971 1968 The mechnaism can be done in terms of control flow, \eg Ada @accept@ or \uC @_Accept@, or in terms of data, \eg Go @select@ on channels. 1972 1969 While both mechanisms have strengths and weaknesses, this project uses the control-flow mechanism to be consistent with other language features. … … 1983 1980 Furthermore, barging corrupts the dating service during an exchange because a barger may also match and change the phone numbers, invalidating the previous exchange phone number. 1984 1981 Putting loops around the @wait@s does not correct the problem; 1985 the s olution must be restructured to account for barging.1982 the simple solution must be restructured to account for barging. 
1986 1983 1987 1984 \begin{figure} … … 2051 2048 the signaller enters the monitor and changes state, detects a waiting threads that can use the state, performs a non-blocking signal on the condition queue for the waiting thread, and exits the monitor to run concurrently. 2052 2049 The waiter unblocks next from the urgent queue, uses/takes the state, and exits the monitor. 2053 Blocking signal lingis the reverse, where the waiter is providing the cooperation for the signalling thread;2050 Blocking signal is the reverse, where the waiter is providing the cooperation for the signalling thread; 2054 2051 the signaller enters the monitor, detects a waiting thread providing the necessary state, performs a blocking signal to place it on the urgent queue and unblock the waiter. 2055 2052 The waiter changes state and exits the monitor, and the signaller unblocks next from the urgent queue to use/take the state. … … 2082 2079 While \CC supports bulk locking, @wait@ only accepts a single lock for a condition variable, so bulk locking with condition variables is asymmetric. 2083 2080 Finally, a signaller, 2084 \newpage2085 2081 \begin{cfa} 2086 2082 void baz( M & mutex m1, M & mutex m2 ) { … … 2088 2084 } 2089 2085 \end{cfa} 2090 must have acquired at least the same locks as the waiting thread signalled from the condition queue.2086 must have acquired at least the same locks as the waiting thread signalled from a condition queue to allow the locks to be passed, and hence, prevent barging. 2091 2087 2092 2088 Similarly, for @waitfor( rtn )@, the default semantics is to atomically block the acceptor and release all acquired mutex parameters, \ie @waitfor( rtn, m1, m2 )@. … … 2121 2117 The \emph{conditional-expression} of a @when@ may call a function, but the function must not block or context switch. 2122 2118 If there are multiple acceptable mutex calls, selection occurs top-to-bottom (prioritized) among the @waitfor@ clauses, whereas some programming languages with similar mechanisms accept nondeterministically for this case, \eg Go \lstinline[morekeywords=select]@select@. 2123 If some accept guards are true and there are no outstanding calls to these members, the acceptor is accept-blocked until a call to one of these members is made.2119 If some accept guards are true and there are no outstanding calls to these members, the acceptor is blocked until a call to one of these members is made. 2124 2120 If there is a @timeout@ clause, it provides an upper bound on waiting. 2125 2121 If all the accept guards are false, the statement does nothing, unless there is a terminating @else@ clause with a true guard, which is executed instead. … … 2164 2160 However, the basic @waitfor@ semantics do not support this functionality, since using an object after its destructor is called is undefined. 2165 2161 Therefore, to make this useful capability work, the semantics for accepting the destructor is the same as @signal@, \ie the call to the destructor is placed on the urgent queue and the acceptor continues execution, which throws an exception to the acceptor and then the caller is unblocked from the urgent queue to deallocate the object. 2166 Accepting the destructor is anidiomatic way to terminate a thread in \CFA.2162 Accepting the destructor is the idiomatic way to terminate a thread in \CFA. 
2167 2163 2168 2164 … … 2258 2254 In the object-oriented scenario, the type and all its operators are always present at compilation (even separate compilation), so it is possible to number the operations in a bit mask and use an $O(1)$ compare with a similar bit mask created for the operations specified in a @waitfor@. 2259 2255 2260 In \CFA, monitor functions can be statically added/removed in translation units, so it is impossible to apply an $O(1)$ approach.2256 However, in \CFA, monitor functions can be statically added/removed in translation units, making a fast subset check difficult. 2261 2257 \begin{cfa} 2262 2258 monitor M { ... }; // common type, included in .h file … … 2265 2261 void g( M & mutex m ) { waitfor( `f`, m ); } 2266 2262 translation unit 2 2267 void `f`( M & mutex m ); 2263 void `f`( M & mutex m ); $\C{// replacing f and g for type M in this translation unit}$ 2268 2264 void `g`( M & mutex m ); 2269 void h( M & mutex m ) { waitfor( `f`, m ) or waitfor( `g`, m ); } 2265 void h( M & mutex m ) { waitfor( `f`, m ) or waitfor( `g`, m ); } $\C{// extending type M in this translation unit}$ 2270 2266 \end{cfa} 2271 2267 The @waitfor@ statements in each translation unit cannot form a unique bit-mask because the monitor type does not carry that information. 2272 Hence, function pointers are used to identify the functions listed in the @waitfor@ statement, stored in a variable-sized array ,2268 Hence, function pointers are used to identify the functions listed in the @waitfor@ statement, stored in a variable-sized array. 2273 2269 Then, the same implementation approach used for the urgent stack is used for the calling queue. 2274 2270 Each caller has a list of monitors acquired, and the @waitfor@ statement performs a (usually short) linear search matching functions in the @waitfor@ list with called functions, and then verifying the associated mutex locks can be transfers. … … 2280 2276 2281 2277 External scheduling, like internal scheduling, becomes significantly more complex for multi-monitor semantics. 2282 Even in the simplest case, new semantics need sto be established.2278 Even in the simplest case, new semantics need to be established. 2283 2279 \begin{cfa} 2284 2280 monitor M { ... }; … … 2512 2508 2513 2509 For completeness and efficiency, \CFA provides a standard set of low-level locks: recursive mutex, condition, semaphore, barrier, \etc, and atomic instructions: @fetchAssign@, @fetchAdd@, @testSet@, @compareSet@, \etc. 2514 However, we strongly advocate using high-level concurrencymechanisms whenever possible.2510 Some of these low-level mechanism are used in the \CFA runtime, but we strongly advocate using high-level mechanisms whenever possible. 2515 2511 2516 2512 … … 2568 2564 \label{s:RuntimeStructureCluster} 2569 2565 2570 A \newterm{cluster} is a collection of threads and virtual processors (abstract kernel-thread) that execute the threads from its own ready queue (like an OS).2566 A \newterm{cluster} is a collection of threads and virtual processors (abstract kernel-thread) that execute the (user) threads from its own ready queue (like an OS executing kernel threads). 2571 2567 The purpose of a cluster is to control the amount of parallelism that is possible among threads, plus scheduling and other execution defaults. 2572 2568 The default cluster-scheduler is single-queue multi-server, which provides automatic load-balancing of threads on processors. 
2573 However, the scheduler is pluggable, supporting alternative schedulers, such as multi-queue multi-server, with work-stealing/sharing .2569 However, the scheduler is pluggable, supporting alternative schedulers, such as multi-queue multi-server, with work-stealing/sharing across the virtual processors. 2574 2570 If several clusters exist, both threads and virtual processors, can be explicitly migrated from one cluster to another. 2575 2571 No automatic load balancing among clusters is performed by \CFA. … … 2584 2580 \label{s:RuntimeStructureProcessor} 2585 2581 2586 A virtual processor is implemented by a kernel thread (\eg UNIX process), which is subsequentlyscheduled for execution on a hardware processor by the underlying operating system.2582 A virtual processor is implemented by a kernel thread (\eg UNIX process), which are scheduled for execution on a hardware processor by the underlying operating system. 2587 2583 Programs may use more virtual processors than hardware processors. 2588 2584 On a multiprocessor, kernel threads are distributed across the hardware processors resulting in virtual processors executing in parallel. 2589 2585 (It is possible to use affinity to lock a virtual processor onto a particular hardware processor~\cite{affinityLinux, affinityWindows, affinityFreebsd, affinityNetbsd, affinityMacosx}, which is used when caching issues occur or for heterogeneous hardware processors.) 2590 2586 The \CFA runtime attempts to block unused processors and unblock processors as the system load increases; 2591 balancing the workload with processors is difficult .2587 balancing the workload with processors is difficult because it requires future knowledge, \ie what will the applicaton workload do next. 2592 2588 Preemption occurs on virtual processors rather than user threads, via operating-system interrupts. 2593 2589 Thus virtual processors execute user threads, where preemption frequency applies to a virtual processor, so preemption occurs randomly across the executed user threads. … … 2622 2618 \subsection{Preemption} 2623 2619 2624 Nondeterministic preemption provides fairness from long running threads, and forces concurrent programmers to write more robust programs, rather than relying on section of code between cooperative scheduling to be atomic ,2620 Nondeterministic preemption provides fairness from long running threads, and forces concurrent programmers to write more robust programs, rather than relying on section of code between cooperative scheduling to be atomic. 2625 2621 A separate reason for not supporting preemption is that it significantly complicates the runtime system. 2626 2622 Preemption is normally handled by setting a count-down timer on each virtual processor. … … 2649 2645 There are two versions of the \CFA runtime kernel: debug and non-debug. 2650 2646 The debugging version has many runtime checks and internal assertions, \eg stack (non-writable) guard page, and checks for stack overflow whenever context switches occur among coroutines and threads, which catches most stack overflows. 2651 After a program is debugged, the non-debugging version can be used to decrease space and increase performance.2647 After a program is debugged, the non-debugging version can be used to significantly decrease space and increase performance. 2652 2648 2653 2649 … … 2708 2704 The only note here is that the call stacks of \CFA coroutines are lazily created, therefore without priming the coroutine to force stack creation, the creation cost is artificially low. 
2709 2705 2710 \newpage2711 2706 \begin{multicols}{2} 2712 2707 \lstset{language=CFA,moredelim=**[is][\color{red}]{@}{@},deletedelim=**[is][]{`}{`}} … … 2959 2954 One solution is to offer various tuning options, allowing the scheduler to be adjusted to the requirements of the workload. 2960 2955 However, to be truly flexible, a pluggable scheduler is necessary. 2961 Currently, the \CFA pluggable scheduler is too simple to handle complex scheduling, \eg quality of service and real-time, where the scheduler must interact with mutex objects to deal with issues like priority inversion .2956 Currently, the \CFA pluggable scheduler is too simple to handle complex scheduling, \eg quality of service and real-time, where the scheduler must interact with mutex objects to deal with issues like priority inversion~\cite{Buhr00b}. 2962 2957 2963 2958 \paragraph{Non-Blocking I/O} … … 2992 2987 \section{Acknowledgements} 2993 2988 2994 The authors would like to recognize the design assistance of Aaron Moss, Rob Schluntz and Andrew Beachon the features described in this paper.2989 The authors would like to recognize the design assistance of Aaron Moss, Rob Schluntz, Andrew Beach and Michael Brooks on the features described in this paper. 2995 2990 Funding for this project has been provided by Huawei Ltd.\ (\url{http://www.huawei.com}). %, and Peter Buhr is partially funded by the Natural Sciences and Engineering Research Council of Canada. 2996 2991 -
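The Paper.tex changes above centre on stateful functions: the revised text describes a hand-built C closure (the Fib structure) that retains the previous two values between calls so that any number of independent generators can exist. As an illustration only, here is a minimal C++ sketch of that idea; FibGen and resume are hypothetical names, not the paper's CFA code.

    #include <cstdio>

    // Hand-rolled closure in the spirit of the paper's C Fibonacci example:
    // the data state (fn1, fn) lives in a structure, so multiple generator
    // instances can coexist, and each call to resume() produces the next value.
    struct FibGen {
        long fn1 = 0, fn = 1;               // retained between calls
        long resume() {
            long next = fn1;                // value for this resumption
            fn1 = fn;
            fn = next + fn;                 // advance the sequence
            return next;
        }
    };

    int main() {
        FibGen f1, f2;                      // independent generators
        for ( int i = 0; i < 10; i += 1 ) printf( "%ld ", f1.resume() );
        printf( "\n%ld\n", f2.resume() );   // f2 still starts at 0
        return 0;
    }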
libcfa/src/clock.hfa
r1e5dedc4 r3c6e417 10 10 // Created On : Thu Apr 12 14:36:06 2018 11 11 // Last Modified By : Peter A. Buhr 12 // Last Modified On : Mon Jul 2 21:40:01 201813 // Update Count : 712 // Last Modified On : Thu Jun 13 21:21:13 2019 13 // Update Count : 8 14 14 // 15 15 … … 63 63 clock_gettime( CLOCK_REALTIME, &curr ); 64 64 return (Time){ curr }; 65 } // getTime 65 } // getTimeNsec 66 66 67 67 Time getTime() { // without nanoseconds -
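The clock.hfa hunk shows both time routines reading the realtime clock through clock_gettime. For reference, a self-contained POSIX sketch of the same call (illustrative only, not the library's code):

    #include <stdio.h>
    #include <time.h>                       // POSIX clock_gettime, timespec

    int main() {
        struct timespec curr;
        clock_gettime( CLOCK_REALTIME, &curr );
        printf( "with ns   : %ld s %ld ns\n", (long)curr.tv_sec, (long)curr.tv_nsec );
        printf( "without ns: %ld s\n", (long)curr.tv_sec );   // nanoseconds dropped
        return 0;
    }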
src/AST/Convert.cpp
r1e5dedc4 r3c6e417 993 993 const ast::Expr * visit( const ast::UniqueExpr * node ) override final { 994 994 auto rslt = new UniqueExpr( 995 get<Expression>().accept1(node->expr) 995 get<Expression>().accept1(node->expr), 996 node->id 996 997 ); 997 998 … … 1074 1075 } 1075 1076 1077 const ast::Type * visitType( const ast::Type * node, Type * type ) { 1078 // Some types do this in their constructor so add a check. 1079 if ( !node->attributes.empty() && type->attributes.empty() ) { 1080 type->attributes = get<Attribute>().acceptL( node->attributes ); 1081 } 1082 this->node = type; 1083 return nullptr; 1084 } 1085 1076 1086 const ast::Type * visit( const ast::VoidType * node ) override final { 1077 this->node = new VoidType{ cv( node ) }; 1078 return nullptr; 1087 return visitType( node, new VoidType{ cv( node ) } ); 1079 1088 } 1080 1089 … … 1085 1094 Validate::SizeType = type; 1086 1095 } 1087 this->node = type; 1088 return nullptr; 1096 return visitType( node, type ); 1089 1097 } 1090 1098 1091 1099 const ast::Type * visit( const ast::PointerType * node ) override final { 1092 this->node =new PointerType{1100 return visitType( node, new PointerType{ 1093 1101 cv( node ), 1094 1102 get<Type>().accept1( node->base ), … … 1096 1104 (bool)node->isVarLen, 1097 1105 (bool)node->isStatic 1098 }; 1099 return nullptr; 1106 } ); 1100 1107 } 1101 1108 1102 1109 const ast::Type * visit( const ast::ArrayType * node ) override final { 1103 this->node =new ArrayType{1110 return visitType( node, new ArrayType{ 1104 1111 cv( node ), 1105 1112 get<Type>().accept1( node->base ), … … 1107 1114 (bool)node->isVarLen, 1108 1115 (bool)node->isStatic 1109 }; 1110 return nullptr; 1116 } ); 1111 1117 } 1112 1118 1113 1119 const ast::Type * visit( const ast::ReferenceType * node ) override final { 1114 this->node =new ReferenceType{1120 return visitType( node, new ReferenceType{ 1115 1121 cv( node ), 1116 1122 get<Type>().accept1( node->base ) 1117 }; 1118 return nullptr; 1123 } ); 1119 1124 } 1120 1125 1121 1126 const ast::Type * visit( const ast::QualifiedType * node ) override final { 1122 this->node =new QualifiedType{1127 return visitType( node, new QualifiedType{ 1123 1128 cv( node ), 1124 1129 get<Type>().accept1( node->parent ), 1125 1130 get<Type>().accept1( node->child ) 1126 }; 1127 return nullptr; 1131 } ); 1128 1132 } 1129 1133 … … 1136 1140 ty->parameters = get<DeclarationWithType>().acceptL( node->params ); 1137 1141 ty->forall = get<TypeDecl>().acceptL( node->forall ); 1138 this->node = ty; 1139 return nullptr; 1140 } 1141 1142 void postvisit( const ast::ReferenceToType * old, ReferenceToType * ty ) { 1142 return visitType( node, ty ); 1143 } 1144 1145 const ast::Type * postvisit( const ast::ReferenceToType * old, ReferenceToType * ty ) { 1143 1146 ty->forall = get<TypeDecl>().acceptL( old->forall ); 1144 1147 ty->parameters = get<Expression>().acceptL( old->params ); 1145 1148 ty->hoistType = old->hoistType; 1149 return visitType( old, ty ); 1146 1150 } 1147 1151 … … 1161 1165 }; 1162 1166 } 1163 postvisit( node, ty ); 1164 this->node = ty; 1165 return nullptr; 1167 return postvisit( node, ty ); 1166 1168 } 1167 1169 … … 1181 1183 }; 1182 1184 } 1183 postvisit( node, ty ); 1184 this->node = ty; 1185 return nullptr; 1185 return postvisit( node, ty ); 1186 1186 } 1187 1187 … … 1201 1201 }; 1202 1202 } 1203 postvisit( node, ty ); 1204 this->node = ty; 1205 return nullptr; 1203 return postvisit( node, ty ); 1206 1204 } 1207 1205 … … 1221 1219 }; 1222 1220 } 1223 postvisit( node, ty ); 1224 this->node = 
ty; 1225 return nullptr; 1221 return postvisit( node, ty ); 1226 1222 } 1227 1223 … … 1243 1239 }; 1244 1240 } 1245 postvisit( node, ty ); 1246 this->node = ty; 1247 return nullptr; 1241 return postvisit( node, ty ); 1248 1242 } 1249 1243 1250 1244 const ast::Type * visit( const ast::TupleType * node ) override final { 1251 this->node =new TupleType{1245 return visitType( node, new TupleType{ 1252 1246 cv( node ), 1253 1247 get<Type>().acceptL( node->types ) 1254 1248 // members generated by TupleType c'tor 1255 }; 1256 return nullptr; 1249 } ); 1257 1250 } 1258 1251 1259 1252 const ast::Type * visit( const ast::TypeofType * node ) override final { 1260 this->node =new TypeofType{1253 return visitType( node, new TypeofType{ 1261 1254 cv( node ), 1262 1255 get<Expression>().accept1( node->expr ), 1263 1256 (bool)node->kind 1264 }; 1265 return nullptr; 1257 } ); 1266 1258 } 1267 1259 1268 1260 const ast::Type * visit( const ast::VarArgsType * node ) override final { 1269 this->node = new VarArgsType{ cv( node ) }; 1270 return nullptr; 1261 return visitType( node, new VarArgsType{ cv( node ) } ); 1271 1262 } 1272 1263 1273 1264 const ast::Type * visit( const ast::ZeroType * node ) override final { 1274 this->node = new ZeroType{ cv( node ) }; 1275 return nullptr; 1265 return visitType( node, new ZeroType{ cv( node ) } ); 1276 1266 } 1277 1267 1278 1268 const ast::Type * visit( const ast::OneType * node ) override final { 1279 this->node = new OneType{ cv( node ) }; 1280 return nullptr; 1281 } 1282 1283 const ast::Type * visit( const ast::GlobalScopeType * ) override final { 1284 this->node = new GlobalScopeType{}; 1285 return nullptr; 1269 return visitType( node, new OneType{ cv( node ) } ); 1270 } 1271 1272 const ast::Type * visit( const ast::GlobalScopeType * node ) override final { 1273 return visitType( node, new GlobalScopeType{} ); 1286 1274 } 1287 1275 … … 2367 2355 auto rslt = new ast::UniqueExpr( 2368 2356 old->location, 2369 GET_ACCEPT_1(expr, Expr) 2357 GET_ACCEPT_1(expr, Expr), 2358 old->get_id() 2370 2359 ); 2371 2360 rslt->object = GET_ACCEPT_1(object, ObjectDecl); … … 2440 2429 } 2441 2430 2431 void visitType( Type * old, ast::Type * type ) { 2432 // Some types do this in their constructor so add a check. 
2433 if ( !old->attributes.empty() && type->attributes.empty() ) { 2434 type->attributes = GET_ACCEPT_V(attributes, Attribute); 2435 } 2436 this->node = type; 2437 } 2438 2442 2439 virtual void visit( VoidType * old ) override final { 2443 this->node = new ast::VoidType{ cv( old ) };2440 visitType( old, new ast::VoidType{ cv( old ) } ); 2444 2441 } 2445 2442 … … 2450 2447 sizeType = type; 2451 2448 } 2452 this->node = type;2449 visitType( old, type ); 2453 2450 } 2454 2451 2455 2452 virtual void visit( PointerType * old ) override final { 2456 this->node =new ast::PointerType{2453 visitType( old, new ast::PointerType{ 2457 2454 GET_ACCEPT_1( base, Type ), 2458 2455 GET_ACCEPT_1( dimension, Expr ), … … 2460 2457 (ast::DimensionFlag)old->isStatic, 2461 2458 cv( old ) 2462 } ;2459 } ); 2463 2460 } 2464 2461 2465 2462 virtual void visit( ArrayType * old ) override final { 2466 this->node =new ast::ArrayType{2463 visitType( old, new ast::ArrayType{ 2467 2464 GET_ACCEPT_1( base, Type ), 2468 2465 GET_ACCEPT_1( dimension, Expr ), … … 2470 2467 (ast::DimensionFlag)old->isStatic, 2471 2468 cv( old ) 2472 } ;2469 } ); 2473 2470 } 2474 2471 2475 2472 virtual void visit( ReferenceType * old ) override final { 2476 this->node =new ast::ReferenceType{2473 visitType( old, new ast::ReferenceType{ 2477 2474 GET_ACCEPT_1( base, Type ), 2478 2475 cv( old ) 2479 } ;2476 } ); 2480 2477 } 2481 2478 2482 2479 virtual void visit( QualifiedType * old ) override final { 2483 this->node =new ast::QualifiedType{2480 visitType( old, new ast::QualifiedType{ 2484 2481 GET_ACCEPT_1( parent, Type ), 2485 2482 GET_ACCEPT_1( child, Type ), 2486 2483 cv( old ) 2487 } ;2484 } ); 2488 2485 } 2489 2486 … … 2496 2493 ty->params = GET_ACCEPT_V( parameters, DeclWithType ); 2497 2494 ty->forall = GET_ACCEPT_V( forall, TypeDecl ); 2498 this->node = ty;2495 visitType( old, ty ); 2499 2496 } 2500 2497 … … 2503 2500 ty->params = GET_ACCEPT_V( parameters, Expr ); 2504 2501 ty->hoistType = old->hoistType; 2502 visitType( old, ty ); 2505 2503 } 2506 2504 … … 2521 2519 } 2522 2520 postvisit( old, ty ); 2523 this->node = ty;2524 2521 } 2525 2522 … … 2540 2537 } 2541 2538 postvisit( old, ty ); 2542 this->node = ty;2543 2539 } 2544 2540 … … 2559 2555 } 2560 2556 postvisit( old, ty ); 2561 this->node = ty;2562 2557 } 2563 2558 … … 2578 2573 } 2579 2574 postvisit( old, ty ); 2580 this->node = ty;2581 2575 } 2582 2576 … … 2599 2593 } 2600 2594 postvisit( old, ty ); 2601 this->node = ty;2602 2595 } 2603 2596 2604 2597 virtual void visit( TupleType * old ) override final { 2605 this->node =new ast::TupleType{2598 visitType( old, new ast::TupleType{ 2606 2599 GET_ACCEPT_V( types, Type ), 2607 2600 // members generated by TupleType c'tor 2608 2601 cv( old ) 2609 } ;2602 } ); 2610 2603 } 2611 2604 2612 2605 virtual void visit( TypeofType * old ) override final { 2613 this->node =new ast::TypeofType{2606 visitType( old, new ast::TypeofType{ 2614 2607 GET_ACCEPT_1( expr, Expr ), 2615 2608 (ast::TypeofType::Kind)old->is_basetypeof, 2616 2609 cv( old ) 2617 } ;2610 } ); 2618 2611 } 2619 2612 … … 2623 2616 2624 2617 virtual void visit( VarArgsType * old ) override final { 2625 this->node = new ast::VarArgsType{ cv( old ) };2618 visitType( old, new ast::VarArgsType{ cv( old ) } ); 2626 2619 } 2627 2620 2628 2621 virtual void visit( ZeroType * old ) override final { 2629 this->node = new ast::ZeroType{ cv( old ) };2622 visitType( old, new ast::ZeroType{ cv( old ) } ); 2630 2623 } 2631 2624 2632 2625 virtual void visit( OneType * old ) override final { 2633 
this->node = new ast::OneType{ cv( old ) };2634 } 2635 2636 virtual void visit( GlobalScopeType * ) override final {2637 this->node = new ast::GlobalScopeType{};2626 visitType( old, new ast::OneType{ cv( old ) } ); 2627 } 2628 2629 virtual void visit( GlobalScopeType * old ) override final { 2630 visitType( old, new ast::GlobalScopeType{} ); 2638 2631 } 2639 2632 -
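The Convert.cpp refactor above replaces the per-visit `this->node = ...` assignments with a shared visitType helper that copies attributes onto the freshly built node and records the result. A simplified, hypothetical model of that pattern (names and types are stand-ins, not the real AST classes):

    #include <string>
    #include <vector>

    struct OldType { std::vector<std::string> attributes; };
    struct NewType { std::vector<std::string> attributes; };

    struct Converter {
        NewType * node = nullptr;

        // central helper: copy shared data, then record the result
        NewType * visitType( const OldType * old, NewType * ty ) {
            // some constructors already copy attributes, so only fill if empty
            if ( ! old->attributes.empty() && ty->attributes.empty() ) {
                ty->attributes = old->attributes;
            }
            node = ty;
            return nullptr;                 // mirrors the visitor's return style
        }

        // each visit collapses to a one-liner through the helper
        NewType * visitPointer( const OldType * old ) {
            return visitType( old, new NewType{} );
        }
    };

    int main() {
        OldType o{ { "noreturn" } };
        Converter c;
        c.visitPointer( &o );
        bool ok = c.node && c.node->attributes.size() == 1;
        delete c.node;
        return ok ? 0 : 1;
    }

Routing every construction through one helper means attribute propagation cannot be forgotten in an individual visit method.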
src/AST/Expr.hpp
r1e5dedc4 r3c6e417 47 47 48 48 ParamEntry() : decl( 0 ), declptr( nullptr ), actualType( nullptr ), formalType( nullptr ), expr( nullptr ) {} 49 ParamEntry( UniqueId id, Decl * declptr, Type* actual, Type* formal, Expr* e ) 49 ParamEntry( 50 UniqueId id, const Decl * declptr, const Type * actual, const Type * formal, 51 const Expr * e ) 50 52 : decl( id ), declptr( declptr ), actualType( actual ), formalType( formal ), expr( e ) {} 51 53 }; … … 129 131 case Params: return data.inferParams; 130 132 } 133 assert(!"unreachable"); 131 134 return *((InferredParams*)nullptr); 132 135 } … … 138 141 assert(!"Mode was not already Params"); 139 142 return *((InferredParams*)nullptr); 143 } 144 145 void set_inferParams( InferredParams && ps ) { 146 switch(mode) { 147 case Slots: 148 data.resnSlots.~ResnSlots(); 149 // fallthrough 150 case Empty: 151 new(&data.inferParams) InferredParams{ std::move( ps ) }; 152 mode = Params; 153 break; 154 case Params: 155 data.inferParams = std::move( ps ); 156 break; 157 } 140 158 } 141 159 -
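The new set_inferParams in Expr.hpp switches the active member of a hand-managed tagged union: the old member is destroyed explicitly and the new one is built with placement new. A self-contained sketch of that pattern, using simplified stand-in types:

    #include <map>
    #include <new>                          // placement new
    #include <utility>
    #include <vector>

    using ResnSlots = std::vector<int>;
    using InferredParams = std::map<int, int>;

    struct InferUnion {
        enum { Empty, Slots, Params } mode = Empty;
        union Data {
            ResnSlots resnSlots;
            InferredParams inferParams;
            Data() {}                       // members are managed by hand
            ~Data() {}
        } data;

        void set_inferParams( InferredParams && ps ) {
            switch ( mode ) {
              case Slots:
                data.resnSlots.~ResnSlots();                    // destroy old member
                // fallthrough
              case Empty:
                new ( &data.inferParams ) InferredParams{ std::move( ps ) };
                mode = Params;
                break;
              case Params:
                data.inferParams = std::move( ps );             // already active
                break;
            }
        }

        ~InferUnion() {
            if ( mode == Slots ) data.resnSlots.~ResnSlots();
            else if ( mode == Params ) data.inferParams.~InferredParams();
        }
    };

    int main() {
        InferUnion u;
        u.set_inferParams( { { 1, 2 } } );
        return u.mode == InferUnion::Params ? 0 : 1;
    }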
src/AST/Type.hpp
r1e5dedc4 r3c6e417 37 37 public: 38 38 CV::Qualifiers qualifiers; 39 40 Type( CV::Qualifiers q = {} ) : qualifiers(q) {} 39 std::vector<ptr<Attribute>> attributes; 40 41 Type( CV::Qualifiers q = {}, std::vector<ptr<Attribute>> && as = {} ) 42 : qualifiers(q), attributes(std::move(as)) {} 41 43 42 44 bool is_const() const { return qualifiers.is_const; } … … 268 270 ForallList forall; 269 271 270 ParameterizedType( ForallList&& fs = {}, CV::Qualifiers q = {} ) 271 : Type(q), forall(std::move(fs)) {} 272 ParameterizedType( CV::Qualifiers q ) : Type(q), forall() {} 272 ParameterizedType( ForallList&& fs = {}, CV::Qualifiers q = {}, 273 std::vector<ptr<Attribute>> && as = {} ) 274 : Type(q, std::move(as)), forall(std::move(fs)) {} 275 276 ParameterizedType( CV::Qualifiers q, std::vector<ptr<Attribute>> && as = {} ) 277 : Type(q, std::move(as)), forall() {} 273 278 274 279 private: … … 311 316 public: 312 317 std::vector<ptr<Expr>> params; 313 std::vector<ptr<Attribute>> attributes;314 318 std::string name; 315 319 bool hoistType = false; … … 317 321 ReferenceToType( const std::string& n, CV::Qualifiers q = {}, 318 322 std::vector<ptr<Attribute>> && as = {} ) 319 : ParameterizedType(q ), params(), attributes(std::move(as)), name(n) {}323 : ParameterizedType(q, std::move(as)), params(), name(n) {} 320 324 321 325 /// Gets aggregate declaration this type refers to -
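The Type.hpp hunk hoists the attributes vector from ReferenceToType into the Type base class, with the derived constructors forwarding the vector by rvalue reference. A minimal sketch of that member-hoisting pattern (hypothetical simplified types, not the real AST):

    #include <string>
    #include <utility>
    #include <vector>

    struct Attribute { std::string name; };

    struct Type {                           // attributes now live in the base
        std::vector<Attribute> attributes;
        explicit Type( std::vector<Attribute> && as = {} )
            : attributes( std::move( as ) ) {}
    };

    struct ReferenceToType : Type {         // derived class forwards, no copy
        std::string name;
        ReferenceToType( std::string n, std::vector<Attribute> && as = {} )
            : Type( std::move( as ) ), name( std::move( n ) ) {}
    };

    int main() {
        ReferenceToType t{ "S", { { "packed" }, { "aligned" } } };
        return t.attributes.size() == 2 ? 0 : 1;
    }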
src/AST/TypeEnvironment.hpp
r1e5dedc4 r3c6e417 215 215 std::ostream & operator<<( std::ostream & out, const TypeEnvironment & env ); 216 216 217 } 217 } // namespace ast 218 218 219 219 // Local Variables: // -
src/AST/porting.md
r1e5dedc4 r3c6e417 302 302 * `ExplodedActual.h` => `ExplodedArg.hpp` 303 303 304 `polyCost` 305 * switched order of `env`, `symtab` parameters for better consistency 306 307 `findMinCost` 308 * pulled out conversion cost promotion into separate `promoteCvtCost` function 309 310 `resolveAssertions` => `satisfyAssertions` 311 * `ResolveAssertions.h` => `SatisfyAssertions.hpp` 312 * `Resn*` => `Sat*` 313 304 314 [1] https://gcc.gnu.org/onlinedocs/gcc-9.1.0/gcc/Type-Attributes.html#Type-Attributes 305 315 -
src/ResolvExpr/AlternativeFinder.cc
r1e5dedc4 r3c6e417 227 227 } 228 228 229 const ast::Expr * referenceToRvalueConversion( const ast::Expr * expr, Cost & cost ) {230 if ( expr->result.as< ast::ReferenceType >() ) {231 // cast away reference from expr232 cost.incReference();233 return new ast::CastExpr{ expr->location, expr, expr->result->stripReferences() };234 }235 236 return expr;237 }238 239 229 template< typename InputIterator, typename OutputIterator > 240 230 void AlternativeFinder::findSubExprs( InputIterator begin, InputIterator end, OutputIterator out ) { … … 518 508 } 519 509 520 /// Unique identifier for matching expression resolutions to their requesting expression 521 UniqueId globalResnSlot = 0;510 /// Unique identifier for matching expression resolutions to their requesting expression (located in CandidateFinder.cpp) 511 extern UniqueId globalResnSlot; 522 512 523 513 template< typename OutputIterator > -
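In AlternativeFinder.cc, globalResnSlot becomes an extern declaration, with the single definition moved to CandidateFinder.cpp. A one-file sketch of the declaration/definition split (illustrative only):

    #include <stdio.h>

    using UniqueId = unsigned long long;

    extern UniqueId globalResnSlot;         // declaration: announces the name, no storage

    UniqueId nextResnSlot() { return ++globalResnSlot; }   // users need only the declaration

    UniqueId globalResnSlot = 0;            // the single definition (CandidateFinder.cpp in the real change)

    int main() {
        UniqueId a = nextResnSlot();
        UniqueId b = nextResnSlot();
        printf( "%llu %llu\n", a, b );      // 1 2
        return 0;
    }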
src/ResolvExpr/Candidate.hpp
r1e5dedc4 r3c6e417 53 53 : expr( x ), cost( Cost::zero ), cvtCost( Cost::zero ), env( e ), open(), need() {} 54 54 55 Candidate( const Candidate & o, const ast::Expr * x )56 : expr( x ), cost( o.cost ), cvtCost( Cost::zero ), env( o.env ), open( o.open ),55 Candidate( const Candidate & o, const ast::Expr * x, const Cost & addedCost = Cost::zero ) 56 : expr( x ), cost( o.cost + addedCost ), cvtCost( Cost::zero ), env( o.env ), open( o.open ), 57 57 need( o.need ) {} 58 59 Candidate( 60 const ast::Expr * x, const ast::TypeEnvironment & e, const ast::OpenVarSet & o, 61 const ast::AssertionSet & n, const Cost & c, const Cost & cvt = Cost::zero ) 62 : expr( x ), cost( c ), cvtCost( cvt ), env( e ), open( o ), need( n.begin(), n.end() ) {} 58 63 59 64 Candidate( -
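Candidate.hpp gains a copy-style constructor that swaps in a rewritten expression and folds an extra cost into the running total, defaulting to zero so existing call sites are untouched. A simplified, hypothetical sketch of that idiom:

    #include <memory>
    #include <string>

    struct Expr { std::string rep; };

    struct Candidate {
        std::shared_ptr<const Expr> expr;
        int cost;

        Candidate( const Expr * x, int c ) : expr( x ), cost( c ) {}

        // copy a candidate, replacing its expression and adding to its cost
        Candidate( const Candidate & o, const Expr * x, int addedCost = 0 )
            : expr( x ), cost( o.cost + addedCost ) {}
    };

    int main() {
        Candidate base{ new Expr{ "f(x)" }, 3 };
        Candidate cast{ base, new Expr{ "(int)f(x)" }, 1 };   // conversion adds cost
        return cast.cost == 4 ? 0 : 1;
    }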
src/ResolvExpr/CandidateFinder.cpp
r1e5dedc4 r3c6e417 27 27 #include "Cost.h" 28 28 #include "ExplodedArg.hpp" 29 #include "RenameVars.h" // for renameTyVars 29 30 #include "Resolver.h" 31 #include "ResolveTypeof.h" 30 32 #include "SatisfyAssertions.hpp" 31 #include "typeops.h" // for adjustExprType 33 #include "typeops.h" // for adjustExprType, conversionCost, polyCost, specCost 32 34 #include "Unify.h" 33 35 #include "AST/Expr.hpp" … … 38 40 #include "AST/Type.hpp" 39 41 #include "SymTab/Mangler.h" 42 #include "SymTab/Validate.h" // for validateType 40 43 #include "Tuples/Tuples.h" // for handleTupleAssignment 41 44 … … 44 47 namespace ResolvExpr { 45 48 49 using std::move; 50 51 /// partner to move that copies any copyable type 52 template<typename T> 53 T copy( const T & x ) { return x; } 54 55 const ast::Expr * referenceToRvalueConversion( const ast::Expr * expr, Cost & cost ) { 56 if ( expr->result.as< ast::ReferenceType >() ) { 57 // cast away reference from expr 58 cost.incReference(); 59 return new ast::CastExpr{ expr->location, expr, expr->result->stripReferences() }; 60 } 61 62 return expr; 63 } 64 65 /// Unique identifier for matching expression resolutions to their requesting expression 66 UniqueId globalResnSlot = 0; 67 68 Cost computeConversionCost( 69 const ast::Type * argType, const ast::Type * paramType, const ast::SymbolTable & symtab, 70 const ast::TypeEnvironment & env 71 ) { 72 PRINT( 73 std::cerr << std::endl << "converting "; 74 ast::print( std::cerr, argType, 2 ); 75 std::cerr << std::endl << " to "; 76 ast::print( std::cerr, paramType, 2 ); 77 std::cerr << std::endl << "environment is: "; 78 ast::print( std::cerr, env, 2 ); 79 std::cerr << std::endl; 80 ) 81 Cost convCost = conversionCost( argType, paramType, symtab, env ); 82 PRINT( 83 std::cerr << std::endl << "cost is " << convCost << std::endl; 84 ) 85 if ( convCost == Cost::infinity ) return convCost; 86 convCost.incPoly( polyCost( paramType, symtab, env ) + polyCost( argType, symtab, env ) ); 87 PRINT( 88 std::cerr << "cost with polycost is " << convCost << std::endl; 89 ) 90 return convCost; 91 } 92 46 93 namespace { 47 48 94 /// First index is which argument, second is which alternative, third is which exploded element 49 95 using ExplodedArgs_new = std::deque< std::vector< ExplodedArg > >; … … 65 111 } 66 112 113 /// Computes conversion cost for a given expression to a given type 114 const ast::Expr * computeExpressionConversionCost( 115 const ast::Expr * arg, const ast::Type * paramType, const ast::SymbolTable & symtab, const ast::TypeEnvironment & env, Cost & outCost 116 ) { 117 Cost convCost = computeConversionCost( arg->result, paramType, symtab, env ); 118 outCost += convCost; 119 120 // If there is a non-zero conversion cost, ignoring poly cost, then the expression requires 121 // conversion. Ignore poly cost for now, since this requires resolution of the cast to 122 // infer parameters and this does not currently work for the reason stated below 123 Cost tmpCost = convCost; 124 tmpCost.incPoly( -tmpCost.get_polyCost() ); 125 if ( tmpCost != Cost::zero ) { 126 ast::ptr< ast::Type > newType = paramType; 127 env.apply( newType ); 128 return new ast::CastExpr{ arg->location, arg, newType }; 129 130 // xxx - *should* be able to resolve this cast, but at the moment pointers are not 131 // castable to zero_t, but are implicitly convertible. This is clearly inconsistent, 132 // once this is fixed it should be possible to resolve the cast. 
133 // xxx - this isn't working, it appears because type1 (parameter) is seen as widenable, 134 // but it shouldn't be because this makes the conversion from DT* to DT* since 135 // commontype(zero_t, DT*) is DT*, rather than nothing 136 137 // CandidateFinder finder{ symtab, env }; 138 // finder.find( arg, ResolvMode::withAdjustment() ); 139 // assertf( finder.candidates.size() > 0, 140 // "Somehow castable expression failed to find alternatives." ); 141 // assertf( finder.candidates.size() == 1, 142 // "Somehow got multiple alternatives for known cast expression." ); 143 // return finder.candidates.front()->expr; 144 } 145 146 return arg; 147 } 148 67 149 /// Computes conversion cost for a given candidate 68 150 Cost computeApplicationConversionCost( 69 const CandidateRef &cand, const ast::SymbolTable & symtab151 CandidateRef cand, const ast::SymbolTable & symtab 70 152 ) { 71 #warning unimplemented 72 (void)cand; (void)symtab; 73 assert(false); 74 return Cost::infinity; 153 auto appExpr = cand->expr.strict_as< ast::ApplicationExpr >(); 154 auto pointer = appExpr->func->result.strict_as< ast::PointerType >(); 155 auto function = pointer->base.strict_as< ast::FunctionType >(); 156 157 Cost convCost = Cost::zero; 158 const auto & params = function->params; 159 auto param = params.begin(); 160 auto & args = appExpr->args; 161 162 for ( unsigned i = 0; i < args.size(); ++i ) { 163 const ast::Type * argType = args[i]->result; 164 PRINT( 165 std::cerr << "arg expression:" << std::endl; 166 ast::print( std::cerr, args[i], 2 ); 167 std::cerr << "--- results are" << std::endl; 168 ast::print( std::cerr, argType, 2 ); 169 ) 170 171 if ( param == params.end() ) { 172 if ( function->isVarArgs ) { 173 convCost.incUnsafe(); 174 PRINT( std::cerr << "end of params with varargs function: inc unsafe: " 175 << convCost << std::endl; ; ) 176 // convert reference-typed expressions into value-typed expressions 177 cand->expr = ast::mutate_field_index( 178 appExpr, &ast::ApplicationExpr::args, i, 179 referenceToRvalueConversion( args[i], convCost ) ); 180 continue; 181 } else return Cost::infinity; 182 } 183 184 if ( auto def = args[i].as< ast::DefaultArgExpr >() ) { 185 // Default arguments should be free - don't include conversion cost. 
186 // Unwrap them here because they are not relevant to the rest of the system 187 cand->expr = ast::mutate_field_index( 188 appExpr, &ast::ApplicationExpr::args, i, def->expr ); 189 ++param; 190 continue; 191 } 192 193 // mark conversion cost and also specialization cost of param type 194 const ast::Type * paramType = (*param)->get_type(); 195 cand->expr = ast::mutate_field_index( 196 appExpr, &ast::ApplicationExpr::args, i, 197 computeExpressionConversionCost( 198 args[i], paramType, symtab, cand->env, convCost ) ); 199 convCost.decSpec( specCost( paramType ) ); 200 ++param; // can't be in for-loop update because of the continue 201 } 202 203 if ( param != params.end() ) return Cost::infinity; 204 205 // specialization cost of return types can't be accounted for directly, it disables 206 // otherwise-identical calls, like this example based on auto-newline in the I/O lib: 207 // 208 // forall(otype OS) { 209 // void ?|?(OS&, int); // with newline 210 // OS& ?|?(OS&, int); // no newline, always chosen due to more specialization 211 // } 212 213 // mark type variable and specialization cost of forall clause 214 convCost.incVar( function->forall.size() ); 215 for ( const ast::TypeDecl * td : function->forall ) { 216 convCost.decSpec( td->assertions.size() ); 217 } 218 219 return convCost; 220 } 221 222 void makeUnifiableVars( 223 const ast::ParameterizedType * type, ast::OpenVarSet & unifiableVars, 224 ast::AssertionSet & need 225 ) { 226 for ( const ast::TypeDecl * tyvar : type->forall ) { 227 unifiableVars[ tyvar->name ] = ast::TypeDecl::Data{ tyvar }; 228 for ( const ast::DeclWithType * assn : tyvar->assertions ) { 229 need[ assn ].isUsed = true; 230 } 231 } 232 } 233 234 /// Gets a default value from an initializer, nullptr if not present 235 const ast::ConstantExpr * getDefaultValue( const ast::Init * init ) { 236 if ( auto si = dynamic_cast< const ast::SingleInit * >( init ) ) { 237 if ( auto ce = si->value.as< ast::CastExpr >() ) { 238 return ce->arg.as< ast::ConstantExpr >(); 239 } else { 240 return si->value.as< ast::ConstantExpr >(); 241 } 242 } 243 return nullptr; 244 } 245 246 /// State to iteratively build a match of parameter expressions to arguments 247 struct ArgPack { 248 std::size_t parent; ///< Index of parent pack 249 ast::ptr< ast::Expr > expr; ///< The argument stored here 250 Cost cost; ///< The cost of this argument 251 ast::TypeEnvironment env; ///< Environment for this pack 252 ast::AssertionSet need; ///< Assertions outstanding for this pack 253 ast::AssertionSet have; ///< Assertions found for this pack 254 ast::OpenVarSet open; ///< Open variables for this pack 255 unsigned nextArg; ///< Index of next argument in arguments list 256 unsigned tupleStart; ///< Number of tuples that start at this index 257 unsigned nextExpl; ///< Index of next exploded element 258 unsigned explAlt; ///< Index of alternative for nextExpl > 0 259 260 ArgPack() 261 : parent( 0 ), expr(), cost( Cost::zero ), env(), need(), have(), open(), nextArg( 0 ), 262 tupleStart( 0 ), nextExpl( 0 ), explAlt( 0 ) {} 263 264 ArgPack( 265 const ast::TypeEnvironment & env, const ast::AssertionSet & need, 266 const ast::AssertionSet & have, const ast::OpenVarSet & open ) 267 : parent( 0 ), expr(), cost( Cost::zero ), env( env ), need( need ), have( have ), 268 open( open ), nextArg( 0 ), tupleStart( 0 ), nextExpl( 0 ), explAlt( 0 ) {} 269 270 ArgPack( 271 std::size_t parent, const ast::Expr * expr, ast::TypeEnvironment && env, 272 ast::AssertionSet && need, ast::AssertionSet && have, ast::OpenVarSet && 
open, 273 unsigned nextArg, unsigned tupleStart = 0, Cost cost = Cost::zero, 274 unsigned nextExpl = 0, unsigned explAlt = 0 ) 275 : parent(parent), expr( expr ), cost( cost ), env( move( env ) ), need( move( need ) ), 276 have( move( have ) ), open( move( open ) ), nextArg( nextArg ), tupleStart( tupleStart ), 277 nextExpl( nextExpl ), explAlt( explAlt ) {} 278 279 ArgPack( 280 const ArgPack & o, ast::TypeEnvironment && env, ast::AssertionSet && need, 281 ast::AssertionSet && have, ast::OpenVarSet && open, unsigned nextArg, Cost added ) 282 : parent( o.parent ), expr( o.expr ), cost( o.cost + added ), env( move( env ) ), 283 need( move( need ) ), have( move( have ) ), open( move( open ) ), nextArg( nextArg ), 284 tupleStart( o.tupleStart ), nextExpl( 0 ), explAlt( 0 ) {} 285 286 /// true if this pack is in the middle of an exploded argument 287 bool hasExpl() const { return nextExpl > 0; } 288 289 /// Gets the list of exploded candidates for this pack 290 const ExplodedArg & getExpl( const ExplodedArgs_new & args ) const { 291 return args[ nextArg-1 ][ explAlt ]; 292 } 293 294 /// Ends a tuple expression, consolidating the appropriate args 295 void endTuple( const std::vector< ArgPack > & packs ) { 296 // add all expressions in tuple to list, summing cost 297 std::deque< const ast::Expr * > exprs; 298 const ArgPack * pack = this; 299 if ( expr ) { exprs.emplace_front( expr ); } 300 while ( pack->tupleStart == 0 ) { 301 pack = &packs[pack->parent]; 302 exprs.emplace_front( pack->expr ); 303 cost += pack->cost; 304 } 305 // reset pack to appropriate tuple 306 std::vector< ast::ptr< ast::Expr > > exprv( exprs.begin(), exprs.end() ); 307 expr = new ast::TupleExpr{ expr->location, move( exprv ) }; 308 tupleStart = pack->tupleStart - 1; 309 parent = pack->parent; 310 } 311 }; 312 313 /// Instantiates an argument to match a parameter, returns false if no matching results left 314 bool instantiateArgument( 315 const ast::Type * paramType, const ast::Init * init, const ExplodedArgs_new & args, 316 std::vector< ArgPack > & results, std::size_t & genStart, const ast::SymbolTable & symtab, 317 unsigned nTuples = 0 318 ) { 319 if ( auto tupleType = dynamic_cast< const ast::TupleType * >( paramType ) ) { 320 // paramType is a TupleType -- group args into a TupleExpr 321 ++nTuples; 322 for ( const ast::Type * type : *tupleType ) { 323 // xxx - dropping initializer changes behaviour from previous, but seems correct 324 // ^^^ need to handle the case where a tuple has a default argument 325 if ( ! 
instantiateArgument( 326 type, nullptr, args, results, genStart, symtab, nTuples ) ) return false; 327 nTuples = 0; 328 } 329 // re-constitute tuples for final generation 330 for ( auto i = genStart; i < results.size(); ++i ) { 331 results[i].endTuple( results ); 332 } 333 return true; 334 } else if ( const ast::TypeInstType * ttype = Tuples::isTtype( paramType ) ) { 335 // paramType is a ttype, consumes all remaining arguments 336 337 // completed tuples; will be spliced to end of results to finish 338 std::vector< ArgPack > finalResults{}; 339 340 // iterate until all results completed 341 std::size_t genEnd; 342 ++nTuples; 343 do { 344 genEnd = results.size(); 345 346 // add another argument to results 347 for ( std::size_t i = genStart; i < genEnd; ++i ) { 348 unsigned nextArg = results[i].nextArg; 349 350 // use next element of exploded tuple if present 351 if ( results[i].hasExpl() ) { 352 const ExplodedArg & expl = results[i].getExpl( args ); 353 354 unsigned nextExpl = results[i].nextExpl + 1; 355 if ( nextExpl == expl.exprs.size() ) { nextExpl = 0; } 356 357 results.emplace_back( 358 i, expl.exprs[ results[i].nextExpl ], copy( results[i].env ), 359 copy( results[i].need ), copy( results[i].have ), 360 copy( results[i].open ), nextArg, nTuples, Cost::zero, nextExpl, 361 results[i].explAlt ); 362 363 continue; 364 } 365 366 // finish result when out of arguments 367 if ( nextArg >= args.size() ) { 368 ArgPack newResult{ 369 results[i].env, results[i].need, results[i].have, results[i].open }; 370 newResult.nextArg = nextArg; 371 const ast::Type * argType = nullptr; 372 373 if ( nTuples > 0 || ! results[i].expr ) { 374 // first iteration or no expression to clone, 375 // push empty tuple expression 376 newResult.parent = i; 377 std::vector< ast::ptr< ast::Expr > > emptyList; 378 newResult.expr = 379 new ast::TupleExpr{ CodeLocation{}, move( emptyList ) }; 380 argType = newResult.expr->result; 381 } else { 382 // clone result to collect tuple 383 newResult.parent = results[i].parent; 384 newResult.cost = results[i].cost; 385 newResult.tupleStart = results[i].tupleStart; 386 newResult.expr = results[i].expr; 387 argType = newResult.expr->result; 388 389 if ( results[i].tupleStart > 0 && Tuples::isTtype( argType ) ) { 390 // the case where a ttype value is passed directly is special, 391 // e.g. for argument forwarding purposes 392 // xxx - what if passing multiple arguments, last of which is 393 // ttype? 394 // xxx - what would happen if unify was changed so that unifying 395 // tuple 396 // types flattened both before unifying lists? then pass in 397 // TupleType (ttype) below. 
398 --newResult.tupleStart; 399 } else { 400 // collapse leftover arguments into tuple 401 newResult.endTuple( results ); 402 argType = newResult.expr->result; 403 } 404 } 405 406 // check unification for ttype before adding to final 407 if ( 408 unify( 409 ttype, argType, newResult.env, newResult.need, newResult.have, 410 newResult.open, symtab ) 411 ) { 412 finalResults.emplace_back( move( newResult ) ); 413 } 414 415 continue; 416 } 417 418 // add each possible next argument 419 for ( std::size_t j = 0; j < args[nextArg].size(); ++j ) { 420 const ExplodedArg & expl = args[nextArg][j]; 421 422 // fresh copies of parent parameters for this iteration 423 ast::TypeEnvironment env = results[i].env; 424 ast::OpenVarSet open = results[i].open; 425 426 env.addActual( expl.env, open ); 427 428 // skip empty tuple arguments by (nearly) cloning parent into next gen 429 if ( expl.exprs.empty() ) { 430 results.emplace_back( 431 results[i], move( env ), copy( results[i].need ), 432 copy( results[i].have ), move( open ), nextArg + 1, expl.cost ); 433 434 continue; 435 } 436 437 // add new result 438 results.emplace_back( 439 i, expl.exprs.front(), move( env ), copy( results[i].need ), 440 copy( results[i].have ), move( open ), nextArg + 1, nTuples, 441 expl.cost, expl.exprs.size() == 1 ? 0 : 1, j ); 442 } 443 } 444 445 // reset for next round 446 genStart = genEnd; 447 nTuples = 0; 448 } while ( genEnd != results.size() ); 449 450 // splice final results onto results 451 for ( std::size_t i = 0; i < finalResults.size(); ++i ) { 452 results.emplace_back( move( finalResults[i] ) ); 453 } 454 return ! finalResults.empty(); 455 } 456 457 // iterate each current subresult 458 std::size_t genEnd = results.size(); 459 for ( std::size_t i = genStart; i < genEnd; ++i ) { 460 unsigned nextArg = results[i].nextArg; 461 462 // use remainder of exploded tuple if present 463 if ( results[i].hasExpl() ) { 464 const ExplodedArg & expl = results[i].getExpl( args ); 465 const ast::Expr * expr = expl.exprs[ results[i].nextExpl ]; 466 467 ast::TypeEnvironment env = results[i].env; 468 ast::AssertionSet need = results[i].need, have = results[i].have; 469 ast::OpenVarSet open = results[i].open; 470 471 const ast::Type * argType = expr->result; 472 473 PRINT( 474 std::cerr << "param type is "; 475 ast::print( std::cerr, paramType ); 476 std::cerr << std::endl << "arg type is "; 477 ast::print( std::cerr, argType ); 478 std::cerr << std::endl; 479 ) 480 481 if ( unify( paramType, argType, env, need, have, open, symtab ) ) { 482 unsigned nextExpl = results[i].nextExpl + 1; 483 if ( nextExpl == expl.exprs.size() ) { nextExpl = 0; } 484 485 results.emplace_back( 486 i, expr, move( env ), move( need ), move( have ), move( open ), nextArg, 487 nTuples, Cost::zero, nextExpl, results[i].explAlt ); 488 } 489 490 continue; 491 } 492 493 // use default initializers if out of arguments 494 if ( nextArg >= args.size() ) { 495 if ( const ast::ConstantExpr * cnst = getDefaultValue( init ) ) { 496 ast::TypeEnvironment env = results[i].env; 497 ast::AssertionSet need = results[i].need, have = results[i].have; 498 ast::OpenVarSet open = results[i].open; 499 500 if ( unify( paramType, cnst->result, env, need, have, open, symtab ) ) { 501 results.emplace_back( 502 i, new ast::DefaultArgExpr{ cnst->location, cnst }, move( env ), 503 move( need ), move( have ), move( open ), nextArg, nTuples ); 504 } 505 } 506 507 continue; 508 } 509 510 // Check each possible next argument 511 for ( std::size_t j = 0; j < args[nextArg].size(); ++j ) { 512 const 
ExplodedArg & expl = args[nextArg][j]; 513 514 // fresh copies of parent parameters for this iteration 515 ast::TypeEnvironment env = results[i].env; 516 ast::AssertionSet need = results[i].need, have = results[i].have; 517 ast::OpenVarSet open = results[i].open; 518 519 env.addActual( expl.env, open ); 520 521 // skip empty tuple arguments by (nearly) cloning parent into next gen 522 if ( expl.exprs.empty() ) { 523 results.emplace_back( 524 results[i], move( env ), move( need ), move( have ), move( open ), 525 nextArg + 1, expl.cost ); 526 527 continue; 528 } 529 530 // consider only first exploded arg 531 const ast::Expr * expr = expl.exprs.front(); 532 const ast::Type * argType = expr->result; 533 534 PRINT( 535 std::cerr << "param type is "; 536 ast::print( std::cerr, paramType ); 537 std::cerr << std::endl << "arg type is "; 538 ast::print( std::cerr, argType ); 539 std::cerr << std::endl; 540 ) 541 542 // attempt to unify types 543 if ( unify( paramType, argType, env, need, have, open, symtab ) ) { 544 // add new result 545 results.emplace_back( 546 i, expr, move( env ), move( need ), move( have ), move( open ), 547 nextArg + 1, nTuples, expl.cost, expl.exprs.size() == 1 ? 0 : 1, j ); 548 } 549 } 550 } 551 552 // reset for next parameter 553 genStart = genEnd; 554 555 return genEnd != results.size(); 556 } 557 558 /// Generate a cast expression from `arg` to `toType` 559 const ast::Expr * restructureCast( 560 ast::ptr< ast::Expr > & arg, const ast::Type * toType, ast::GeneratedFlag isGenerated = ast::GeneratedCast 561 ) { 562 if ( 563 arg->result->size() > 1 564 && ! toType->isVoid() 565 && ! dynamic_cast< const ast::ReferenceType * >( toType ) 566 ) { 567 // Argument is a tuple and the target type is neither void nor a reference. Cast each 568 // member of the tuple to its corresponding target type, producing the tuple of those 569 // cast expressions. 
If there are more components of the tuple than components in the 570 // target type, then excess components do not come out in the result expression (but 571 // UniqueExpr ensures that the side effects will still be produced) 572 if ( Tuples::maybeImpureIgnoreUnique( arg ) ) { 573 // expressions which may contain side effects require a single unique instance of 574 // the expression 575 arg = new ast::UniqueExpr{ arg->location, arg }; 576 } 577 std::vector< ast::ptr< ast::Expr > > components; 578 for ( unsigned i = 0; i < toType->size(); ++i ) { 579 // cast each component 580 ast::ptr< ast::Expr > idx = new ast::TupleIndexExpr{ arg->location, arg, i }; 581 components.emplace_back( 582 restructureCast( idx, toType->getComponent( i ), isGenerated ) ); 583 } 584 return new ast::TupleExpr{ arg->location, move( components ) }; 585 } else { 586 // handle normally 587 return new ast::CastExpr{ arg->location, arg, toType, isGenerated }; 588 } 589 } 590 591 /// Gets the name from an untyped member expression (must be NameExpr) 592 const std::string & getMemberName( const ast::UntypedMemberExpr * memberExpr ) { 593 if ( memberExpr->member.as< ast::ConstantExpr >() ) { 594 SemanticError( memberExpr, "Indexed access to struct fields unsupported: " ); 595 } 596 597 return memberExpr->member.strict_as< ast::NameExpr >()->name; 75 598 } 76 599 … … 99 622 } 100 623 624 /// Set up candidate assertions for inference 625 void inferParameters( CandidateRef & newCand, CandidateList & out ) { 626 // Set need bindings for any unbound assertions 627 UniqueId crntResnSlot = 0; // matching ID for this expression's assertions 628 for ( auto & assn : newCand->need ) { 629 // skip already-matched assertions 630 if ( assn.second.resnSlot != 0 ) continue; 631 // assign slot for expression if needed 632 if ( crntResnSlot == 0 ) { crntResnSlot = ++globalResnSlot; } 633 // fix slot to assertion 634 assn.second.resnSlot = crntResnSlot; 635 } 636 // pair slot to expression 637 if ( crntResnSlot != 0 ) { 638 newCand->expr.get_and_mutate()->inferred.resnSlots().emplace_back( crntResnSlot ); 639 } 640 641 // add to output list; assertion satisfaction will occur later 642 out.emplace_back( newCand ); 643 } 644 645 /// Completes a function candidate with arguments located 646 void validateFunctionCandidate( 647 const CandidateRef & func, ArgPack & result, const std::vector< ArgPack > & results, 648 CandidateList & out 649 ) { 650 ast::ApplicationExpr * appExpr = 651 new ast::ApplicationExpr{ func->expr->location, func->expr }; 652 // sum cost and accumulate arguments 653 std::deque< const ast::Expr * > args; 654 Cost cost = func->cost; 655 const ArgPack * pack = &result; 656 while ( pack->expr ) { 657 args.emplace_front( pack->expr ); 658 cost += pack->cost; 659 pack = &results[pack->parent]; 660 } 661 std::vector< ast::ptr< ast::Expr > > vargs( args.begin(), args.end() ); 662 appExpr->args = move( vargs ); 663 // build and validate new candidate 664 auto newCand = 665 std::make_shared<Candidate>( appExpr, result.env, result.open, result.need, cost ); 666 PRINT( 667 std::cerr << "instantiate function success: " << appExpr << std::endl; 668 std::cerr << "need assertions:" << std::endl; 669 ast::print( std::cerr, result.need, 2 ); 670 ) 671 inferParameters( newCand, out ); 672 } 673 101 674 /// Builds a list of candidates for a function, storing them in out 102 675 void makeFunctionCandidates( … … 104 677 const ExplodedArgs_new & args, CandidateList & out 105 678 ) { 106 #warning unimplemented 107 (void)func; (void)funcType; 
(void)args; (void)out; 108 assert(false); 679 ast::OpenVarSet funcOpen; 680 ast::AssertionSet funcNeed, funcHave; 681 ast::TypeEnvironment funcEnv{ func->env }; 682 makeUnifiableVars( funcType, funcOpen, funcNeed ); 683 // add all type variables as open variables now so that those not used in the parameter 684 // list are still considered open 685 funcEnv.add( funcType->forall ); 686 687 if ( targetType && ! targetType->isVoid() && ! funcType->returns.empty() ) { 688 // attempt to narrow based on expected target type 689 const ast::Type * returnType = funcType->returns.front()->get_type(); 690 if ( ! unify( 691 returnType, targetType, funcEnv, funcNeed, funcHave, funcOpen, symtab ) 692 ) { 693 // unification failed, do not pursue this candidate 694 return; 695 } 696 } 697 698 // iteratively build matches, one parameter at a time 699 std::vector< ArgPack > results; 700 results.emplace_back( funcEnv, funcNeed, funcHave, funcOpen ); 701 std::size_t genStart = 0; 702 703 for ( const ast::DeclWithType * param : funcType->params ) { 704 auto obj = strict_dynamic_cast< const ast::ObjectDecl * >( param ); 705 // Try adding the arguments corresponding to the current parameter to the existing 706 // matches 707 if ( ! instantiateArgument( 708 obj->type, obj->init, args, results, genStart, symtab ) ) return; 709 } 710 711 if ( funcType->isVarArgs ) { 712 // append any unused arguments to vararg pack 713 std::size_t genEnd; 714 do { 715 genEnd = results.size(); 716 717 // iterate results 718 for ( std::size_t i = genStart; i < genEnd; ++i ) { 719 unsigned nextArg = results[i].nextArg; 720 721 // use remainder of exploded tuple if present 722 if ( results[i].hasExpl() ) { 723 const ExplodedArg & expl = results[i].getExpl( args ); 724 725 unsigned nextExpl = results[i].nextExpl + 1; 726 if ( nextExpl == expl.exprs.size() ) { nextExpl = 0; } 727 728 results.emplace_back( 729 i, expl.exprs[ results[i].nextExpl ], copy( results[i].env ), 730 copy( results[i].need ), copy( results[i].have ), 731 copy( results[i].open ), nextArg, 0, Cost::zero, nextExpl, 732 results[i].explAlt ); 733 734 continue; 735 } 736 737 // finish result when out of arguments 738 if ( nextArg >= args.size() ) { 739 validateFunctionCandidate( func, results[i], results, out ); 740 741 continue; 742 } 743 744 // add each possible next argument 745 for ( std::size_t j = 0; j < args[nextArg].size(); ++j ) { 746 const ExplodedArg & expl = args[nextArg][j]; 747 748 // fresh copies of parent parameters for this iteration 749 ast::TypeEnvironment env = results[i].env; 750 ast::OpenVarSet open = results[i].open; 751 752 env.addActual( expl.env, open ); 753 754 // skip empty tuple arguments by (nearly) cloning parent into next gen 755 if ( expl.exprs.empty() ) { 756 results.emplace_back( 757 results[i], move( env ), copy( results[i].need ), 758 copy( results[i].have ), move( open ), nextArg + 1, 759 expl.cost ); 760 761 continue; 762 } 763 764 // add new result 765 results.emplace_back( 766 i, expl.exprs.front(), move( env ), copy( results[i].need ), 767 copy( results[i].have ), move( open ), nextArg + 1, 0, expl.cost, 768 expl.exprs.size() == 1 ? 0 : 1, j ); 769 } 770 } 771 772 genStart = genEnd; 773 } while( genEnd != results.size() ); 774 } else { 775 // filter out the results that don't use all the arguments 776 for ( std::size_t i = genStart; i < results.size(); ++i ) { 777 ArgPack & result = results[i]; 778 if ( ! 
result.hasExpl() && result.nextArg >= args.size() ) { 779 validateFunctionCandidate( func, result, results, out ); 780 } 781 } 782 } 109 783 } 110 784 111 785 /// Adds implicit struct-conversions to the alternative list 112 786 void addAnonConversions( const CandidateRef & cand ) { 113 #warning unimplemented 114 (void)cand; 115 assert(false); 787 // adds anonymous member interpretations whenever an aggregate value type is seen. 788 // it's okay for the aggregate expression to have reference type -- cast it to the 789 // base type to treat the aggregate as the referenced value 790 ast::ptr< ast::Expr > aggrExpr( cand->expr ); 791 ast::ptr< ast::Type > & aggrType = aggrExpr.get_and_mutate()->result; 792 cand->env.apply( aggrType ); 793 794 if ( aggrType.as< ast::ReferenceType >() ) { 795 aggrExpr = 796 new ast::CastExpr{ aggrExpr->location, aggrExpr, aggrType->stripReferences() }; 797 } 798 799 if ( auto structInst = aggrExpr->result.as< ast::StructInstType >() ) { 800 addAggMembers( structInst, aggrExpr, *cand, Cost::safe, "" ); 801 } else if ( auto unionInst = aggrExpr->result.as< ast::UnionInstType >() ) { 802 addAggMembers( unionInst, aggrExpr, *cand, Cost::safe, "" ); 803 } 804 } 805 806 /// Adds aggregate member interpretations 807 void addAggMembers( 808 const ast::ReferenceToType * aggrInst, const ast::Expr * expr, 809 const Candidate & cand, const Cost & addedCost, const std::string & name 810 ) { 811 for ( const ast::Decl * decl : aggrInst->lookup( name ) ) { 812 auto dwt = strict_dynamic_cast< const ast::DeclWithType * >( decl ); 813 CandidateRef newCand = std::make_shared<Candidate>( 814 cand, new ast::MemberExpr{ expr->location, dwt, expr }, addedCost ); 815 // add anonymous member interpretations whenever an aggregate value type is seen 816 // as a member expression 817 addAnonConversions( newCand ); 818 candidates.emplace_back( move( newCand ) ); 819 } 820 } 821 822 /// Adds tuple member interpretations 823 void addTupleMembers( 824 const ast::TupleType * tupleType, const ast::Expr * expr, const Candidate & cand, 825 const Cost & addedCost, const ast::Expr * member 826 ) { 827 if ( auto constantExpr = dynamic_cast< const ast::ConstantExpr * >( member ) ) { 828 // get the value of the constant expression as an int, must be between 0 and the 829 // length of the tuple to have meaning 830 long long val = constantExpr->intValue(); 831 if ( val >= 0 && (unsigned long long)val < tupleType->size() ) { 832 addCandidate( 833 cand, new ast::TupleIndexExpr{ expr->location, expr, (unsigned)val }, 834 addedCost ); 835 } 836 } 116 837 } 117 838 … … 189 910 funcE.emplace_back( *func, symtab ); 190 911 } 191 argExpansions.emplace_front( std::move( funcE ) );912 argExpansions.emplace_front( move( funcE ) ); 192 913 193 914 for ( const CandidateRef & op : opFinder ) { … … 233 954 if ( cvtCost != Cost::infinity ) { 234 955 withFunc->cvtCost = cvtCost; 235 candidates.emplace_back( std::move( withFunc ) );236 } 237 } 238 found = std::move( candidates );956 candidates.emplace_back( move( withFunc ) ); 957 } 958 } 959 found = move( candidates ); 239 960 240 961 // use a new list so that candidates are not examined by addAnonConversions twice … … 285 1006 286 1007 void postvisit( const ast::CastExpr * castExpr ) { 287 #warning unimplemented 288 (void)castExpr; 289 assert(false); 1008 ast::ptr< ast::Type > toType = castExpr->result; 1009 assert( toType ); 1010 toType = resolveTypeof( toType, symtab ); 1011 toType = SymTab::validateType( toType, symtab ); 1012 toType = adjustExprType( toType, tenv, 
symtab ); 1013 1014 CandidateFinder finder{ symtab, tenv, toType }; 1015 finder.find( castExpr->arg, ResolvMode::withAdjustment() ); 1016 1017 CandidateList matches; 1018 for ( CandidateRef & cand : finder.candidates ) { 1019 ast::AssertionSet need( cand->need.begin(), cand->need.end() ), have; 1020 ast::OpenVarSet open( cand->open ); 1021 1022 cand->env.extractOpenVars( open ); 1023 1024 // It is possible that a cast can throw away some values in a multiply-valued 1025 // expression, e.g. cast-to-void, one value to zero. Figure out the prefix of the 1026 // subexpression results that are cast directly. The candidate is invalid if it 1027 // has fewer results than there are types to cast to. 1028 int discardedValues = cand->expr->result->size() - toType->size(); 1029 if ( discardedValues < 0 ) continue; 1030 1031 // unification run for side-effects 1032 unify( toType, cand->expr->result, cand->env, need, have, open, symtab ); 1033 Cost thisCost = castCost( cand->expr->result, toType, symtab, cand->env ); 1034 PRINT( 1035 std::cerr << "working on cast with result: " << toType << std::endl; 1036 std::cerr << "and expr type: " << cand->expr->result << std::endl; 1037 std::cerr << "env: " << cand->env << std::endl; 1038 ) 1039 if ( thisCost != Cost::infinity ) { 1040 PRINT( 1041 std::cerr << "has finite cost." << std::endl; 1042 ) 1043 // count one safe conversion for each value that is thrown away 1044 thisCost.incSafe( discardedValues ); 1045 CandidateRef newCand = std::make_shared<Candidate>( 1046 restructureCast( cand->expr, toType, castExpr->isGenerated ), 1047 copy( cand->env ), move( open ), move( need ), cand->cost, 1048 cand->cost + thisCost ); 1049 inferParameters( newCand, matches ); 1050 } 1051 } 1052 1053 // select first on argument cost, then conversion cost 1054 CandidateList minArgCost = findMinCost( matches ); 1055 promoteCvtCost( minArgCost ); 1056 candidates = findMinCost( minArgCost ); 290 1057 } 291 1058 … … 297 1064 for ( CandidateRef & r : finder.candidates ) { 298 1065 addCandidate( 299 *r, new ast::VirtualCastExpr{ castExpr->location, r->expr, castExpr->result } ); 1066 *r, 1067 new ast::VirtualCastExpr{ castExpr->location, r->expr, castExpr->result } ); 300 1068 } 301 1069 } 302 1070 303 1071 void postvisit( const ast::UntypedMemberExpr * memberExpr ) { 304 #warning unimplemented 305 (void)memberExpr; 306 assert(false); 1072 CandidateFinder aggFinder{ symtab, tenv }; 1073 aggFinder.find( memberExpr->aggregate, ResolvMode::withAdjustment() ); 1074 for ( CandidateRef & agg : aggFinder.candidates ) { 1075 // it's okay for the aggregate expression to have reference type -- cast it to the 1076 // base type to treat the aggregate as the referenced value 1077 Cost addedCost = Cost::zero; 1078 agg->expr = referenceToRvalueConversion( agg->expr, addedCost ); 1079 1080 // find member of the given type 1081 if ( auto structInst = agg->expr->result.as< ast::StructInstType >() ) { 1082 addAggMembers( 1083 structInst, agg->expr, *agg, addedCost, getMemberName( memberExpr ) ); 1084 } else if ( auto unionInst = agg->expr->result.as< ast::UnionInstType >() ) { 1085 addAggMembers( 1086 unionInst, agg->expr, *agg, addedCost, getMemberName( memberExpr ) ); 1087 } else if ( auto tupleType = agg->expr->result.as< ast::TupleType >() ) { 1088 addTupleMembers( tupleType, agg->expr, *agg, addedCost, memberExpr->member ); 1089 } 1090 } 307 1091 } 308 1092 … … 311 1095 } 312 1096 313 void postvisit( const ast::NameExpr * variableExpr ) { 314 #warning unimplemented 315 (void)variableExpr; 316 
assert(false); 1097 void postvisit( const ast::NameExpr * nameExpr ) { 1098 std::vector< ast::SymbolTable::IdData > declList = symtab.lookupId( nameExpr->name ); 1099 PRINT( std::cerr << "nameExpr is " << nameExpr->name << std::endl; ) 1100 for ( auto & data : declList ) { 1101 Cost cost = Cost::zero; 1102 ast::Expr * newExpr = data.combine( nameExpr->location, cost ); 1103 1104 CandidateRef newCand = std::make_shared<Candidate>( 1105 newExpr, copy( tenv ), ast::OpenVarSet{}, ast::AssertionSet{}, Cost::zero, 1106 cost ); 1107 PRINT( 1108 std::cerr << "decl is "; 1109 ast::print( std::cerr, data.id ); 1110 std::cerr << std::endl; 1111 std::cerr << "newExpr is "; 1112 ast::print( std::cerr, newExpr ); 1113 std::cerr << std::endl; 1114 ) 1115 newCand->expr = ast::mutate_field( 1116 newCand->expr.get(), &ast::Expr::result, 1117 renameTyVars( newCand->expr->result ) ); 1118 // add anonymous member interpretations whenever an aggregate value type is seen 1119 // as a name expression 1120 addAnonConversions( newCand ); 1121 candidates.emplace_back( move( newCand ) ); 1122 } 317 1123 } 318 1124 … … 329 1135 330 1136 void postvisit( const ast::SizeofExpr * sizeofExpr ) { 331 #warning unimplemented 332 (void)sizeofExpr; 333 assert(false); 1137 if ( sizeofExpr->type ) { 1138 addCandidate( 1139 new ast::SizeofExpr{ 1140 sizeofExpr->location, resolveTypeof( sizeofExpr->type, symtab ) }, 1141 tenv ); 1142 } else { 1143 // find all candidates for the argument to sizeof 1144 CandidateFinder finder{ symtab, tenv }; 1145 finder.find( sizeofExpr->expr ); 1146 // find the lowest-cost candidate, otherwise ambiguous 1147 CandidateList winners = findMinCost( finder.candidates ); 1148 if ( winners.size() != 1 ) { 1149 SemanticError( 1150 sizeofExpr->expr.get(), "Ambiguous expression in sizeof operand: " ); 1151 } 1152 // return the lowest-cost candidate 1153 CandidateRef & choice = winners.front(); 1154 choice->expr = referenceToRvalueConversion( choice->expr, choice->cost ); 1155 choice->cost = Cost::zero; 1156 addCandidate( *choice, new ast::SizeofExpr{ sizeofExpr->location, choice->expr } ); 1157 } 334 1158 } 335 1159 336 1160 void postvisit( const ast::AlignofExpr * alignofExpr ) { 337 #warning unimplemented 338 (void)alignofExpr; 339 assert(false); 1161 if ( alignofExpr->type ) { 1162 addCandidate( 1163 new ast::AlignofExpr{ 1164 alignofExpr->location, resolveTypeof( alignofExpr->type, symtab ) }, 1165 tenv ); 1166 } else { 1167 // find all candidates for the argument to alignof 1168 CandidateFinder finder{ symtab, tenv }; 1169 finder.find( alignofExpr->expr ); 1170 // find the lowest-cost candidate, otherwise ambiguous 1171 CandidateList winners = findMinCost( finder.candidates ); 1172 if ( winners.size() != 1 ) { 1173 SemanticError( 1174 alignofExpr->expr.get(), "Ambiguous expression in alignof operand: " ); 1175 } 1176 // return the lowest-cost candidate 1177 CandidateRef & choice = winners.front(); 1178 choice->expr = referenceToRvalueConversion( choice->expr, choice->cost ); 1179 choice->cost = Cost::zero; 1180 addCandidate( 1181 *choice, new ast::AlignofExpr{ alignofExpr->location, choice->expr } ); 1182 } 340 1183 } 341 1184 342 1185 void postvisit( const ast::UntypedOffsetofExpr * offsetofExpr ) { 343 #warning unimplemented 344 (void)offsetofExpr; 345 assert(false); 1186 const ast::ReferenceToType * aggInst; 1187 if (( aggInst = offsetofExpr->type.as< ast::StructInstType >() )) ; 1188 else if (( aggInst = offsetofExpr->type.as< ast::UnionInstType >() )) ; 1189 else return; 1190 1191 for ( const 
ast::Decl * member : aggInst->lookup( offsetofExpr->member ) ) { 1192 auto dwt = strict_dynamic_cast< const ast::DeclWithType * >( member ); 1193 addCandidate( 1194 new ast::OffsetofExpr{ offsetofExpr->location, aggInst, dwt }, tenv ); 1195 } 346 1196 } 347 1197 … … 376 1226 new ast::LogicalExpr{ 377 1227 logicalExpr->location, r1->expr, r2->expr, logicalExpr->isAnd }, 378 std::move( env ), std::move( open ), std::move( need ), 379 r1->cost + r2->cost ); 1228 move( env ), move( open ), move( need ), r1->cost + r2->cost ); 380 1229 } 381 1230 } … … 421 1270 common ) 422 1271 ) { 423 #warning unimplemented 424 assert(false); 1272 // generate typed expression 1273 ast::ConditionalExpr * newExpr = new ast::ConditionalExpr{ 1274 conditionalExpr->location, r1->expr, r2->expr, r3->expr }; 1275 newExpr->result = common ? common : r2->expr->result; 1276 // convert both options to result type 1277 Cost cost = r1->cost + r2->cost + r3->cost; 1278 newExpr->arg2 = computeExpressionConversionCost( 1279 newExpr->arg2, newExpr->result, symtab, env, cost ); 1280 newExpr->arg3 = computeExpressionConversionCost( 1281 newExpr->arg3, newExpr->result, symtab, env, cost ); 1282 // output candidate 1283 CandidateRef newCand = std::make_shared<Candidate>( 1284 newExpr, move( env ), move( open ), move( need ), cost ); 1285 inferParameters( newCand, candidates ); 425 1286 } 426 1287 } … … 480 1341 common ) 481 1342 ) { 1343 // generate new expression 482 1344 ast::RangeExpr * newExpr = 483 1345 new ast::RangeExpr{ rangeExpr->location, r1->expr, r2->expr }; 484 1346 newExpr->result = common ? common : r1->expr->result; 485 486 #warning unimplemented 487 assert(false); 1347 // add candidate 1348 CandidateRef newCand = std::make_shared<Candidate>( 1349 newExpr, move( env ), move( open ), move( need ), 1350 r1->cost + r2->cost ); 1351 inferParameters( newCand, candidates ); 488 1352 } 489 1353 } … … 512 1376 513 1377 addCandidate( 514 new ast::TupleExpr{ tupleExpr->location, std::move( exprs ) },515 std::move( env ), std::move( open ), std::move( need ), sumCost( subs ) );1378 new ast::TupleExpr{ tupleExpr->location, move( exprs ) }, 1379 move( env ), move( open ), move( need ), sumCost( subs ) ); 516 1380 } 517 1381 } … … 539 1403 540 1404 void postvisit( const ast::StmtExpr * stmtExpr ) { 541 #warning unimplemented 542 (void)stmtExpr; 543 assert(false); 1405 addCandidate( resolveStmtExpr( stmtExpr, symtab ), tenv ); 544 1406 } 545 1407 546 1408 void postvisit( const ast::UntypedInitExpr * initExpr ) { 547 #warning unimplemented 548 (void)initExpr; 549 assert(false); 1409 // handle each option like a cast 1410 CandidateList matches; 1411 PRINT( 1412 std::cerr << "untyped init expr: " << initExpr << std::endl; 1413 ) 1414 // O(n^2) checks of d-types with e-types 1415 for ( const ast::InitAlternative & initAlt : initExpr->initAlts ) { 1416 // calculate target type 1417 const ast::Type * toType = resolveTypeof( initAlt.type, symtab ); 1418 toType = SymTab::validateType( toType, symtab ); 1419 toType = adjustExprType( toType, tenv, symtab ); 1420 // The call to find must occur inside this loop, otherwise polymorphic return 1421 // types are not bound to the initialization type, since return type variables are 1422 // only open for the duration of resolving the UntypedExpr. 
1423 CandidateFinder finder{ symtab, tenv, toType }; 1424 finder.find( initExpr->expr, ResolvMode::withAdjustment() ); 1425 for ( CandidateRef & cand : finder.candidates ) { 1426 ast::TypeEnvironment env{ cand->env }; 1427 ast::AssertionSet need( cand->need.begin(), cand->need.end() ), have; 1428 ast::OpenVarSet open{ cand->open }; 1429 1430 PRINT( 1431 std::cerr << " @ " << toType << " " << initAlt.designation << std::endl; 1432 ) 1433 1434 // It is possible that a cast can throw away some values in a multiply-valued 1435 // expression, e.g. cast-to-void, one value to zero. Figure out the prefix of 1436 // the subexpression results that are cast directly. The candidate is invalid 1437 // if it has fewer results than there are types to cast to. 1438 int discardedValues = cand->expr->result->size() - toType->size(); 1439 if ( discardedValues < 0 ) continue; 1440 1441 // unification run for side-effects 1442 unify( toType, cand->expr->result, env, need, have, open, symtab ); 1443 Cost thisCost = castCost( cand->expr->result, toType, symtab, env ); 1444 1445 if ( thisCost != Cost::infinity ) { 1446 // count one safe conversion for each value that is thrown away 1447 thisCost.incSafe( discardedValues ); 1448 CandidateRef newCand = std::make_shared<Candidate>( 1449 new ast::InitExpr{ 1450 initExpr->location, restructureCast( cand->expr, toType ), 1451 initAlt.designation }, 1452 copy( cand->env ), move( open ), move( need ), cand->cost, thisCost ); 1453 inferParameters( newCand, matches ); 1454 } 1455 } 1456 } 1457 1458 // select first on argument cost, then conversion cost 1459 CandidateList minArgCost = findMinCost( matches ); 1460 promoteCvtCost( minArgCost ); 1461 candidates = findMinCost( minArgCost ); 550 1462 } 551 1463 … … 628 1540 cand->env.applyFree( newResult ); 629 1541 cand->expr = ast::mutate_field( 630 cand->expr.get(), &ast::Expr::result, std::move( newResult ) );1542 cand->expr.get(), &ast::Expr::result, move( newResult ) ); 631 1543 632 1544 out.emplace_back( cand ); … … 651 1563 CandidateList satisfied; 652 1564 std::vector< std::string > errors; 653 for ( auto& candidate : candidates ) {654 satisfyAssertions( *candidate, symtab, satisfied, errors );1565 for ( CandidateRef & candidate : candidates ) { 1566 satisfyAssertions( candidate, symtab, satisfied, errors ); 655 1567 } 656 1568 … … 666 1578 667 1579 // reset candidates 668 candidates = std::move( satisfied );1580 candidates = move( satisfied ); 669 1581 } 670 1582 … … 690 1602 691 1603 auto oldsize = candidates.size(); 692 candidates = std::move( pruned );1604 candidates = move( pruned ); 693 1605 694 1606 PRINT( -
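Note on the argument accumulation above: in validateFunctionCandidate, the arguments of a completed match are never stored as a flat list. Each ArgPack records only the expression it added and the index of the pack it extends, and the final argument list is recovered by walking the parent chain from the last pack back to the root, prepending each expression. Below is a minimal standalone sketch of that walk; MiniPack and collectArgs are illustrative stand-ins, not resolver types.

    #include <cstddef>
    #include <deque>
    #include <iostream>
    #include <string>
    #include <vector>

    // Simplified stand-in for ArgPack: each pack stores one argument and the
    // index of the pack it extends in the flat results vector.
    struct MiniPack {
        std::size_t parent;   // index of parent pack
        std::string expr;     // argument added by this pack ("" for the root)
    };

    // Rebuild the argument list for the match ending at `leaf`, the same way
    // validateFunctionCandidate walks pack->parent and emplace_front's each expr.
    std::deque<std::string> collectArgs( const std::vector<MiniPack> & packs, std::size_t leaf ) {
        std::deque<std::string> args;
        const MiniPack * pack = &packs[leaf];
        while ( ! pack->expr.empty() ) {
            args.emplace_front( pack->expr );
            pack = &packs[ pack->parent ];
        }
        return args;
    }

    int main() {
        // root pack (no expression) plus one pack per matched argument
        std::vector<MiniPack> packs = {
            { 0, "" }, { 0, "a" }, { 1, "b" }, { 2, "c" }
        };
        for ( const std::string & s : collectArgs( packs, 3 ) ) std::cout << s << ' ';
        std::cout << '\n';   // prints: a b c
    }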
src/ResolvExpr/CandidateFinder.hpp
r1e5dedc4 r3c6e417 27 27 /// Data to perform expression resolution 28 28 struct CandidateFinder { 29 CandidateList candidates; 30 const ast::SymbolTable & symtab; 31 const ast::TypeEnvironment & env; 32 ast::ptr< ast::Type > targetType = nullptr; ///< Target type for resolution29 CandidateList candidates; ///< List of candidate resolutions 30 const ast::SymbolTable & symtab; ///< Symbol table to lookup candidates 31 const ast::TypeEnvironment & env; ///< Substitutions performed in this resolution 32 ast::ptr< ast::Type > targetType; ///< Target type for resolution 33 33 34 CandidateFinder( const ast::SymbolTable & symtab, const ast::TypeEnvironment & env ) 35 : candidates(), symtab( symtab ), env( env ) {} 34 CandidateFinder( 35 const ast::SymbolTable & symtab, const ast::TypeEnvironment & env, 36 const ast::Type * tt = nullptr ) 37 : candidates(), symtab( symtab ), env( env ), targetType( tt ) {} 36 38 37 39 /// Fill candidates with feasible resolutions for `expr` … … 52 54 }; 53 55 56 /// Computes conversion cost between two types 57 Cost computeConversionCost( 58 const ast::Type * argType, const ast::Type * paramType, const ast::SymbolTable & symtab, 59 const ast::TypeEnvironment & env ); 60 54 61 } // namespace ResolvExpr 55 62 -
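Note on the new targetType constructor argument: it threads an expected result type through resolution, and makeFunctionCandidates in CandidateFinder.cpp consults it to drop call candidates whose return type cannot unify with the target. A sketch of the calling pattern, mirroring the CastExpr handling above (it assumes the surrounding resolver headers and is not standalone code):

    // resolve a subexpression with the cast's target type as a hint
    CandidateFinder finder{ symtab, tenv, toType };
    finder.find( castExpr->arg, ResolvMode::withAdjustment() );
    // nested calls whose return type could not unify with toType
    // have already been pruned inside makeFunctionCandidates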
src/ResolvExpr/CastCost.cc
r1e5dedc4 r3c6e417 78 78 }); 79 79 } else { 80 PassVisitor<CastCost> converter( dest, indexer, env, castCost ); 80 #warning cast on castCost artifact of having two functions, remove when port done 81 PassVisitor<CastCost> converter( 82 dest, indexer, env, 83 (Cost (*)( Type *, Type *, const SymTab::Indexer &, const TypeEnvironment & )) 84 castCost ); 81 85 src->accept( converter ); 82 86 if ( converter.pass.get_cost() == Cost::infinity ) { … … 125 129 } 126 130 } 131 132 Cost castCost( 133 const ast::Type * src, const ast::Type * dst, const ast::SymbolTable & symtab, 134 const ast::TypeEnvironment & env 135 ) { 136 #warning unimplmented 137 (void)src; (void)dst; (void)symtab; (void)env; 138 assert(false); 139 return Cost::zero; 140 } 127 141 } // namespace ResolvExpr 128 142 -
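Note on the cast added around castCost: the translation unit now contains both the old SynTree-based overload and the new ast::-based overload, so the bare name no longer denotes a unique function when passed to PassVisitor; the #warning records that the cast can go once the port is done. The same workaround appears in ConversionCost.cc below. A standalone illustration of the trick, using a hypothetical overloaded function f rather than the resolver's types:

    #include <iostream>

    int    f( int x )    { return x + 1; }   // overload 1
    double f( double x ) { return x * 2; }   // overload 2

    // a template parameter cannot pick between overloads of `f` on its own,
    // so the caller must cast the name to the exact pointer type it wants
    template< typename Fn >
    void call( Fn fn ) { std::cout << fn( 3 ) << '\n'; }

    int main() {
        // call( f );                                   // error: ambiguous overload set
        call( (int (*)(int))f );                        // selects overload 1, prints 4
        call( static_cast<double (*)(double)>( f ) );   // selects overload 2, prints 6
    }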
src/ResolvExpr/ConversionCost.cc
r1e5dedc4 r3c6e417 85 85 }); 86 86 } else { 87 PassVisitor<ConversionCost> converter( dest, indexer, env, conversionCost ); 87 PassVisitor<ConversionCost> converter( 88 dest, indexer, env, 89 (Cost (*)(Type*, Type*, const SymTab::Indexer&, const TypeEnvironment&)) 90 conversionCost ); 88 91 src->accept( converter ); 89 92 if ( converter.pass.get_cost() == Cost::infinity ) { … … 134 137 } else { 135 138 PRINT( std::cerr << "reference to rvalue conversion" << std::endl; ) 136 PassVisitor<ConversionCost> converter( dest, indexer, env, conversionCost ); 139 PassVisitor<ConversionCost> converter( 140 dest, indexer, env, 141 (Cost (*)(Type*, Type*, const SymTab::Indexer&, const TypeEnvironment&)) 142 conversionCost ); 137 143 src->accept( converter ); 138 144 return converter.pass.get_cost(); … … 482 488 } // if 483 489 } 490 491 Cost conversionCost( 492 const ast::Type * src, const ast::Type * dst, const ast::SymbolTable & symtab, 493 const ast::TypeEnvironment & env 494 ) { 495 #warning unimplemented 496 (void)src; (void)dst; (void)symtab; (void)env; 497 assert(false); 498 return Cost::zero; 499 } 484 500 } // namespace ResolvExpr 485 501 -
src/ResolvExpr/PolyCost.cc
r1e5dedc4 r3c6e417 9 9 // Author : Richard C. Bilson 10 10 // Created On : Sun May 17 09:50:12 2015 11 // Last Modified By : Peter A. Buhr12 // Last Modified On : Sun May 17 09:52:02 201513 // Update Count : 311 // Last Modified By : Andrew Beach 12 // Last Modified On : Wed Jun 19 10:45:00 2019 13 // Update Count : 4 14 14 // 15 15 16 #include "AST/SymbolTable.hpp" 17 #include "AST/Type.hpp" 18 #include "AST/TypeEnvironment.hpp" 16 19 #include "Common/PassVisitor.h" 17 20 #include "SymTab/Indexer.h" // for Indexer … … 54 57 } 55 58 59 // TODO: When the old PolyCost is torn out get rid of the _new suffix. 60 struct PolyCost_new { 61 int result; 62 const ast::SymbolTable &symtab; 63 const ast::TypeEnvironment &env_; 64 65 PolyCost_new( const ast::SymbolTable & symtab, const ast::TypeEnvironment & env ) : 66 result( 0 ), symtab( symtab ), env_( env ) {} 67 68 void previsit( const ast::TypeInstType * type ) { 69 if ( const ast::EqvClass * eqv = env_.lookup( type->name ) ) /* && */ if ( eqv->bound ) { 70 if ( const ast::TypeInstType * otherType = eqv->bound.as< ast::TypeInstType >() ) { 71 if ( symtab.lookupType( otherType->name ) ) { 72 // Bound to opaque type. 73 result += 1; 74 } 75 } else { 76 // Bound to concrete type. 77 result += 1; 78 } 79 } 80 } 81 }; 82 83 int polyCost( 84 const ast::Type * type, const ast::SymbolTable & symtab, const ast::TypeEnvironment & env 85 ) { 86 ast::Pass<PolyCost_new> costing( symtab, env ); 87 type->accept( costing ); 88 return costing.pass.result; 89 } 90 56 91 } // namespace ResolvExpr 57 92 -
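Note on the `if ( ... ) /* && */ if ( eqv->bound )` construct in PolyCost_new::previsit: it looks odd but is deliberate. A variable declared inside an if-condition cannot be combined with a further test using &&, so the extra test becomes a nested if. A standalone sketch of the idiom, using a hypothetical lookup helper (since C++17 an init-statement in the if expresses the same thing more directly):

    #include <iostream>
    #include <map>
    #include <string>

    const std::string * lookup( const std::map<std::string, std::string> & m,
                                const std::string & key ) {
        auto it = m.find( key );
        return it == m.end() ? nullptr : &it->second;
    }

    int main() {
        std::map<std::string, std::string> bindings{ { "T", "int" }, { "U", "" } };

        // if ( const std::string * b = lookup( bindings, "T" ) && !b->empty() )  // does not compile
        if ( const std::string * b = lookup( bindings, "T" ) ) /* && */ if ( ! b->empty() ) {
            std::cout << "T bound to " << *b << '\n';   // prints: T bound to int
        }
        // since C++17 the same test can be written as:
        // if ( const std::string * b = lookup( bindings, "T" ); b && !b->empty() ) { ... }
    }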
src/ResolvExpr/RenameVars.cc
r1e5dedc4 r3c6e417 96 96 } 97 97 } // namespace 98 99 const ast::Type * renameTyVars( const ast::Type * t ) { 100 #warning unimplemented; make sure resetTyVarRenaming() updated when implemented 101 (void)t; 102 assert(false); 103 return t; 104 } 98 105 } // namespace ResolvExpr 99 106 -
src/ResolvExpr/RenameVars.h
r1e5dedc4 r3c6e417 23 23 #include "SynTree/Visitor.h" // for Visitor 24 24 25 namespace ast { 26 class Type; 27 } 28 25 29 namespace ResolvExpr { 26 30 /// Provides a consistent renaming of forall type names in a hierarchy by prefixing them with a unique "level" ID 27 31 void renameTyVars( Type * ); 32 const ast::Type * renameTyVars( const ast::Type * ); 28 33 29 34 /// resets internal state of renamer to avoid overflow -
src/ResolvExpr/ResolveAssertions.cc
r1e5dedc4 r3c6e417 186 186 using InferCache = std::unordered_map< UniqueId, InferredParams >; 187 187 188 /// Lexicographically-ordered vector of costs 189 using CostVec = std::vector< Cost >; 190 191 int compare( const CostVec & a, const CostVec & b ) { 192 unsigned i = 0; 193 do { 194 // lex-compare where shorter one is less 195 if ( i == a.size() ) { 196 return i == b.size() ? 0 : -1; 197 } 198 if ( i == b.size() /* && i < a.size() */ ) return 1; 199 200 int c = a[i].compare( b[i] ); 201 if ( c != 0 ) return c; 202 } while ( ++i ); 203 assert(!"unreachable"); 204 } 205 206 bool operator< ( const CostVec & a, const CostVec & b ) { return compare( a, b ) < 0; } 207 188 208 /// Flag for state iteration 189 209 enum IterateFlag { IterateState }; … … 196 216 DeferList deferred; ///< Deferred matches 197 217 InferCache inferred; ///< Cache of already-inferred parameters 218 CostVec costs; ///< Costs of recursive assertion satisfaction for disambiguation 198 219 SymTab::Indexer& indexer; ///< Name lookup (depends on previous assertions) 199 220 200 221 /// Initial resolution state for an alternative 201 222 ResnState( Alternative& a, SymTab::Indexer& indexer ) 202 : alt(a), need(), newNeed(), deferred(), inferred(), indexer(indexer) {223 : alt(a), need(), newNeed(), deferred(), inferred(), costs{ Cost::zero }, indexer(indexer) { 203 224 need.swap( a.need ); 204 225 } … … 207 228 ResnState( ResnState&& o, IterateFlag ) 208 229 : alt(std::move(o.alt)), need(o.newNeed.begin(), o.newNeed.end()), newNeed(), deferred(), 209 inferred(std::move(o.inferred)), indexer(o.indexer) {} 230 inferred(std::move(o.inferred)), costs(o.costs), indexer(o.indexer) { 231 costs.emplace_back( Cost::zero ); 232 } 210 233 }; 211 234 … … 336 359 }; 337 360 338 void finalizeAssertions( Alternative& alt, InferCache& inferred, AltList& out ) { 339 PassVisitor<InferMatcher> matcher{ inferred }; 340 alt.expr = alt.expr->acceptMutator( matcher ); 341 out.emplace_back( alt ); 361 /// Map of alternative return types to recursive assertion satisfaction costs 362 using PruneMap = std::unordered_map<std::string, CostVec>; 363 364 /// Gets the pruning key for an alternative 365 std::string pruneKey( const Alternative & alt ) { 366 Type* resType = alt.expr->result->clone(); 367 alt.env.apply( resType ); 368 std::string resKey = SymTab::Mangler::mangleType( resType ); 369 delete resType; 370 return std::move( resKey ); 371 } 372 373 /// Replace resnSlots with inferParams and add alternative to output list, if meets pruning 374 /// threshold. 375 void finalizeAssertions( ResnState& resn, PruneMap & pruneThresholds, AltList& out ) { 376 // prune if cheaper alternative for same key has already been generated 377 std::string resKey = pruneKey( resn.alt ); 378 auto it = pruneThresholds.find( resKey ); 379 if ( it != pruneThresholds.end() ) { 380 if ( it->second < resn.costs ) return; 381 } else { 382 pruneThresholds.emplace_hint( it, resKey, resn.costs ); 383 } 384 385 // replace resolution slots with inferred params, add to output 386 PassVisitor<InferMatcher> matcher{ resn.inferred }; 387 resn.alt.expr = resn.alt.expr->acceptMutator( matcher ); 388 out.emplace_back( resn.alt ); 342 389 } 343 390 … … 359 406 ResnList resns{ ResnState{ alt, root_indexer } }; 360 407 ResnList new_resns{}; 408 409 // Pruning thresholds by result type of the output alternatives. 
410 // Alternatives *should* be generated in sorted order, so no need to retroactively prune 411 PruneMap thresholds; 361 412 362 413 // resolve assertions in breadth-first-order up to a limited number of levels deep … … 364 415 // scan over all mutually-compatible resolutions 365 416 for ( auto& resn : resns ) { 417 // stop this branch if already found a better option 418 auto it = thresholds.find( pruneKey( resn.alt ) ); 419 if ( it != thresholds.end() && it->second < resn.costs ) goto nextResn; 420 366 421 // make initial pass at matching assertions 367 422 for ( auto& assn : resn.need ) { … … 383 438 // either add successful match or push back next state 384 439 if ( resn.newNeed.empty() ) { 385 finalizeAssertions( resn .alt, resn.inferred, out );440 finalizeAssertions( resn, thresholds, out ); 386 441 } else { 387 442 new_resns.emplace_back( std::move(resn), IterateState ); … … 420 475 goto nextResn; 421 476 } 422 // sort by cost 477 // sort by cost for overall pruning 423 478 CandidateCost coster{ resn.indexer }; 424 479 std::sort( compatible.begin(), compatible.end(), coster ); 425 480 426 // keep map of detected options427 std::unordered_map<std::string, Cost> found;428 481 for ( auto& compat : compatible ) { 429 // filter by environment-adjusted result type, keep only cheapest option430 Type* resType = alt.expr->result->clone();431 compat.env.apply( resType );432 // skip if cheaper alternative already processed with same result type433 Cost resCost = coster.get( compat );434 auto it = found.emplace( SymTab::Mangler::mangleType( resType ), resCost );435 delete resType;436 if ( it.second == false && it.first->second < resCost ) continue;437 438 // proceed with resolution step439 482 ResnState new_resn = resn; 440 483 … … 452 495 new_resn.alt.openVars = std::move(compat.openVars); 453 496 497 // mark cost of this path 498 new_resn.costs.back() += compat.cost; 499 454 500 // either add sucessful match or push back next state 455 501 if ( new_resn.newNeed.empty() ) { 456 finalizeAssertions( new_resn .alt, new_resn.inferred, out );502 finalizeAssertions( new_resn, thresholds, out ); 457 503 } else { 458 504 new_resns.emplace_back( std::move(new_resn), IterateState ); -
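Note on the CostVec ordering above: the hand-written compare orders satisfaction paths level by level, with a vector that is a strict prefix of another comparing as less. As the comment in SatisfyAssertions.cpp notes, this is exactly the ordering std::vector's default operator< already provides, which is why the new-AST code drops the helper. A small standalone check of that equivalence, using int in place of Cost:

    #include <cassert>
    #include <vector>

    // same shape as the hand-written compare over CostVec:
    // lexicographic, shorter-prefix-is-less
    int compareVec( const std::vector<int> & a, const std::vector<int> & b ) {
        unsigned i = 0;
        while ( true ) {
            if ( i == a.size() ) return i == b.size() ? 0 : -1;
            if ( i == b.size() ) return 1;
            if ( a[i] != b[i] ) return a[i] < b[i] ? -1 : 1;
            ++i;
        }
    }

    int main() {
        std::vector<int> shallow{ 1 };          // cheap at the first level
        std::vector<int> deep   { 1, 0 };       // same first level, recursed one level further
        std::vector<int> costly { 2 };          // more expensive at the first level

        assert( compareVec( shallow, deep )   < 0 );   // prefix is less
        assert( compareVec( shallow, costly ) < 0 );
        // std::vector's built-in operator< agrees with the hand-written ordering
        assert( (shallow < deep) && (shallow < costly) );
        assert( ! (deep < shallow) );
    }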
src/ResolvExpr/ResolveTypeof.cc
r1e5dedc4 r3c6e417 107 107 return newType; 108 108 } 109 110 const ast::Type * resolveTypeof( const ast::Type * type , const ast::SymbolTable & symtab ) { 111 #warning unimplemented 112 (void)type; (void)symtab; 113 assert(false); 114 return nullptr; 115 } 109 116 } // namespace ResolvExpr 110 117 -
src/ResolvExpr/ResolveTypeof.h
r1e5dedc4 r3c6e417 20 20 class Indexer; 21 21 } // namespace SymTab 22 namespace ast { 23 class Type; 24 class SymbolTable; 25 } 22 26 23 27 namespace ResolvExpr { 24 28 Type *resolveTypeof( Type*, const SymTab::Indexer &indexer ); 29 const ast::Type * resolveTypeof( const ast::Type *, const ast::SymbolTable & ); 25 30 } // namespace ResolvExpr 26 31 -
src/ResolvExpr/Resolver.cc
r1e5dedc4 r3c6e417 1249 1249 1250 1250 void resolve( std::list< ast::ptr<ast::Decl> >& translationUnit ) { 1251 ast::Pass< Resolver_new> resolver;1251 ast::Pass< Resolver_new > resolver; 1252 1252 accept_all( translationUnit, resolver ); 1253 } 1254 1255 ast::ptr< ast::Expr > resolveStmtExpr( 1256 const ast::StmtExpr * stmtExpr, const ast::SymbolTable & symtab 1257 ) { 1258 assert( stmtExpr ); 1259 ast::Pass< Resolver_new > resolver{ symtab }; 1260 ast::ptr< ast::Expr > ret = stmtExpr; 1261 ret = ret->accept( resolver ); 1262 strict_dynamic_cast< ast::StmtExpr * >( ret.get_and_mutate() )->computeResult(); 1263 return ret; 1253 1264 } 1254 1265 -
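Note on the ret.get_and_mutate() call in resolveStmtExpr: this pattern appears throughout the changeset to edit a node held through a shared smart pointer without disturbing other owners. The sketch below is only an analogy for that clone-on-shared-write discipline, built on std::shared_ptr; it is not the project's ast::ptr.

    #include <iostream>
    #include <memory>
    #include <string>

    // Toy node with one field, shared through std::shared_ptr.
    struct Node { std::string result; };

    // Analogy for get_and_mutate: hand back a uniquely-owned node, cloning
    // first if other owners exist, so the caller can edit it in place.
    Node * get_and_mutate( std::shared_ptr<Node> & p ) {
        if ( p.use_count() > 1 ) p = std::make_shared<Node>( *p );   // clone on shared write
        return p.get();
    }

    int main() {
        auto original = std::make_shared<Node>( Node{ "untyped" } );
        auto working  = original;                        // second owner

        get_and_mutate( working )->result = "resolved";  // edits a private copy

        std::cout << original->result << '\n';           // untyped
        std::cout << working->result  << '\n';           // resolved
    }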
src/ResolvExpr/Resolver.h
r1e5dedc4 r3c6e417 31 31 class Decl; 32 32 class DeletedExpr; 33 class StmtExpr; 33 34 class SymbolTable; 34 35 class TypeEnvironment; … … 58 59 ast::ptr< ast::Expr > resolveInVoidContext( 59 60 const ast::Expr * expr, const ast::SymbolTable & symtab, ast::TypeEnvironment & env ); 61 /// Resolves a statement expression 62 ast::ptr< ast::Expr > resolveStmtExpr( 63 const ast::StmtExpr * stmtExpr, const ast::SymbolTable & symtab ); 60 64 } // namespace ResolvExpr 61 65 -
src/ResolvExpr/SatisfyAssertions.cpp
r1e5dedc4 r3c6e417 16 16 #include "SatisfyAssertions.hpp" 17 17 18 #include <algorithm> 18 19 #include <cassert> 20 #include <sstream> 21 #include <string> 22 #include <unordered_map> 23 #include <vector> 24 25 #include "Candidate.hpp" 26 #include "CandidateFinder.hpp" 27 #include "Cost.h" 28 #include "RenameVars.h" 29 #include "typeops.h" 30 #include "Unify.h" 31 #include "AST/Decl.hpp" 32 #include "AST/Expr.hpp" 33 #include "AST/Node.hpp" 34 #include "AST/Pass.hpp" 35 #include "AST/Print.hpp" 36 #include "AST/SymbolTable.hpp" 37 #include "AST/TypeEnvironment.hpp" 38 #include "Common/FilterCombos.h" 39 #include "Common/Indenter.h" 40 #include "GenPoly/GenPoly.h" 41 #include "SymTab/Mangler.h" 19 42 20 43 namespace ResolvExpr { 21 44 45 // in CandidateFinder.cpp; unique ID for assertion satisfaction 46 extern UniqueId globalResnSlot; 47 48 namespace { 49 /// Post-unification assertion satisfaction candidate 50 struct AssnCandidate { 51 ast::SymbolTable::IdData cdata; ///< Satisfying declaration 52 ast::ptr< ast::Type > adjType; ///< Satisfying type 53 ast::TypeEnvironment env; ///< Post-unification environment 54 ast::AssertionSet have; ///< Post-unification have-set 55 ast::AssertionSet need; ///< Post-unification need-set 56 ast::OpenVarSet open; ///< Post-unification open-var-set 57 ast::UniqueId resnSlot; ///< Slot for any recursive assertion IDs 58 59 AssnCandidate( 60 const ast::SymbolTable::IdData c, const ast::Type * at, ast::TypeEnvironment && e, 61 ast::AssertionSet && h, ast::AssertionSet && n, ast::OpenVarSet && o, ast::UniqueId rs ) 62 : cdata( c ), adjType( at ), env( std::move( e ) ), have( std::move( h ) ), 63 need( std::move( n ) ), open( std::move( o ) ), resnSlot( rs ) {} 64 }; 65 66 /// List of assertion satisfaction candidates 67 using AssnCandidateList = std::vector< AssnCandidate >; 68 69 /// Reference to a single deferred item 70 struct DeferRef { 71 const ast::DeclWithType * decl; 72 const ast::AssertionSetValue & info; 73 const AssnCandidate & match; 74 }; 75 76 /// Wrapper for the deferred items from a single assertion satisfaction. 77 /// Acts like an indexed list of DeferRef 78 struct DeferItem { 79 const ast::DeclWithType * decl; 80 const ast::AssertionSetValue & info; 81 AssnCandidateList matches; 82 83 DeferItem( 84 const ast::DeclWithType * d, const ast::AssertionSetValue & i, AssnCandidateList && ms ) 85 : decl( d ), info( i ), matches( std::move( ms ) ) {} 86 87 bool empty() const { return matches.empty(); } 88 89 AssnCandidateList::size_type size() const { return matches.size(); } 90 91 DeferRef operator[] ( unsigned i ) const { return { decl, info, matches[i] }; } 92 }; 93 94 /// List of deferred satisfaction items 95 using DeferList = std::vector< DeferItem >; 96 97 /// Set of assertion satisfactions, grouped by resolution ID 98 using InferCache = std::unordered_map< ast::UniqueId, ast::InferredParams >; 99 100 /// Lexicographically-ordered vector of costs. 101 /// Lexicographic order comes from default operator< on std::vector. 
102 using CostVec = std::vector< Cost >; 103 104 /// Flag for state iteration 105 enum IterateFlag { IterateState }; 106 107 /// Intermediate state for satisfying a set of assertions 108 struct SatState { 109 CandidateRef cand; ///< Candidate assertion is rooted on 110 ast::AssertionList need; ///< Assertions to find 111 ast::AssertionSet newNeed; ///< Recursive assertions from current satisfied assertions 112 DeferList deferred; ///< Deferred matches 113 InferCache inferred; ///< Cache of already-inferred assertions 114 CostVec costs; ///< Disambiguating costs of recursive assertion satisfaction 115 ast::SymbolTable symtab; ///< Name lookup (depends on previous assertions) 116 117 /// Initial satisfaction state for a candidate 118 SatState( CandidateRef & c, const ast::SymbolTable & syms ) 119 : cand( c ), need(), newNeed(), deferred(), inferred(), costs{ Cost::zero }, 120 symtab( syms ) { need.swap( c->need ); } 121 122 /// Update satisfaction state for next step after previous state 123 SatState( SatState && o, IterateFlag ) 124 : cand( std::move( o.cand ) ), need( o.newNeed.begin(), o.newNeed.end() ), newNeed(), 125 deferred(), inferred( std::move( o.inferred ) ), costs( std::move( o.costs ) ), 126 symtab( o.symtab ) { costs.emplace_back( Cost::zero ); } 127 128 /// Field-wise next step constructor 129 SatState( 130 CandidateRef && c, ast::AssertionSet && nn, InferCache && i, CostVec && cs, 131 ast::SymbolTable && syms ) 132 : cand( std::move( c ) ), need( nn.begin(), nn.end() ), newNeed(), deferred(), 133 inferred( std::move( i ) ), costs( std::move( cs ) ), symtab( std::move( syms ) ) 134 { costs.emplace_back( Cost::zero ); } 135 }; 136 137 /// Adds a captured assertion to the symbol table 138 void addToSymbolTable( const ast::AssertionSet & have, ast::SymbolTable & symtab ) { 139 for ( auto & i : have ) { 140 if ( i.second.isUsed ) { symtab.addId( i.first ); } 141 } 142 } 143 144 /// Binds a single assertion, updating satisfaction state 145 void bindAssertion( 146 const ast::DeclWithType * decl, const ast::AssertionSetValue & info, CandidateRef & cand, 147 AssnCandidate & match, InferCache & inferred 148 ) { 149 const ast::DeclWithType * candidate = match.cdata.id; 150 assertf( candidate->uniqueId, 151 "Assertion candidate does not have a unique ID: %s", toString( candidate ).c_str() ); 152 153 ast::Expr * varExpr = match.cdata.combine( cand->expr->location, cand->cvtCost ); 154 varExpr->result = match.adjType; 155 if ( match.resnSlot ) { varExpr->inferred.resnSlots().emplace_back( match.resnSlot ); } 156 157 // place newly-inferred assertion in proper location in cache 158 inferred[ info.resnSlot ][ decl->uniqueId ] = ast::ParamEntry{ 159 candidate->uniqueId, candidate, match.adjType, decl->get_type(), varExpr }; 160 } 161 162 /// Satisfy a single assertion 163 bool satisfyAssertion( ast::AssertionList::value_type & assn, SatState & sat ) { 164 // skip unused assertions 165 if ( ! 
assn.second.isUsed ) return true; 166 167 // find candidates that unify with the desired type 168 AssnCandidateList matches; 169 for ( const ast::SymbolTable::IdData & cdata : sat.symtab.lookupId( assn.first->name ) ) { 170 const ast::DeclWithType * candidate = cdata.id; 171 172 // build independent unification context for candidate 173 ast::AssertionSet have, newNeed; 174 ast::TypeEnvironment newEnv{ sat.cand->env }; 175 ast::OpenVarSet newOpen{ sat.cand->open }; 176 ast::ptr< ast::Type > toType = assn.first->get_type(); 177 ast::ptr< ast::Type > adjType = 178 renameTyVars( adjustExprType( candidate->get_type(), newEnv, sat.symtab ) ); 179 180 // only keep candidates which unify 181 if ( unify( toType, adjType, newEnv, newNeed, have, newOpen, sat.symtab ) ) { 182 // set up binding slot for recursive assertions 183 ast::UniqueId crntResnSlot = 0; 184 if ( ! newNeed.empty() ) { 185 crntResnSlot = ++globalResnSlot; 186 for ( auto & a : newNeed ) { a.second.resnSlot = crntResnSlot; } 187 } 188 189 matches.emplace_back( 190 cdata, adjType, std::move( newEnv ), std::move( newNeed ), std::move( have ), 191 std::move( newOpen ), crntResnSlot ); 192 } 193 } 194 195 // break if no satisfying match 196 if ( matches.empty() ) return false; 197 198 // defer if too many satisfying matches 199 if ( matches.size() > 1 ) { 200 sat.deferred.emplace_back( assn.first, assn.second, std::move( matches ) ); 201 return true; 202 } 203 204 // otherwise bind unique match in ongoing scope 205 AssnCandidate & match = matches.front(); 206 addToSymbolTable( match.have, sat.symtab ); 207 sat.newNeed.insert( match.need.begin(), match.need.end() ); 208 sat.cand->env = std::move( match.env ); 209 sat.cand->open = std::move( match.open ); 210 211 bindAssertion( assn.first, assn.second, sat.cand, match, sat.inferred ); 212 return true; 213 } 214 215 /// Map of candidate return types to recursive assertion satisfaction costs 216 using PruneMap = std::unordered_map< std::string, CostVec >; 217 218 /// Gets the pruning key for a candidate (derived from environment-adjusted return type) 219 std::string pruneKey( const Candidate & cand ) { 220 ast::ptr< ast::Type > resType = cand.expr->result; 221 cand.env.apply( resType ); 222 return Mangle::mangle( resType, Mangle::typeMode() ); 223 } 224 225 /// Associates inferred parameters with an expression 226 struct InferMatcher final { 227 InferCache & inferred; 228 229 InferMatcher( InferCache & inferred ) : inferred( inferred ) {} 230 231 const ast::Expr * postmutate( const ast::Expr * expr ) { 232 // Skip if no slots to find 233 if ( expr->inferred.mode != ast::Expr::InferUnion::Slots ) return expr; 234 235 // find inferred parameters for resolution slots 236 ast::InferredParams newInferred; 237 for ( UniqueId slot : expr->inferred.resnSlots() ) { 238 // fail if no matching assertions found 239 auto it = inferred.find( slot ); 240 if ( it == inferred.end() ) { 241 assert(!"missing assertion"); 242 } 243 244 // place inferred parameters into new map 245 for ( auto & entry : it->second ) { 246 // recurse on inferParams of resolved expressions 247 entry.second.expr = postmutate( entry.second.expr ); 248 auto res = newInferred.emplace( entry ); 249 assert( res.second && "all assertions newly placed" ); 250 } 251 } 252 253 ast::Expr * ret = mutate( expr ); 254 ret->inferred.set_inferParams( std::move( newInferred ) ); 255 return ret; 256 } 257 }; 258 259 /// Replace ResnSlots with InferParams and add alternative to output list, if it meets pruning 260 /// threshold. 
261 void finalizeAssertions( 262 CandidateRef & cand, InferCache & inferred, PruneMap & thresholds, CostVec && costs, 263 CandidateList & out 264 ) { 265 // prune if cheaper alternative for same key has already been generated 266 std::string key = pruneKey( *cand ); 267 auto it = thresholds.find( key ); 268 if ( it != thresholds.end() ) { 269 if ( it->second < costs ) return; 270 } else { 271 thresholds.emplace_hint( it, key, std::move( costs ) ); 272 } 273 274 // replace resolution slots with inferred parameters, add to output 275 ast::Pass< InferMatcher > matcher{ inferred }; 276 cand->expr = cand->expr->accept( matcher ); 277 out.emplace_back( cand ); 278 } 279 280 /// Combo iterator that combines candidates into an output list, merging their environments. 281 /// Rejects an appended candidate if environments cannot be merged. See `Common/FilterCombos.h` 282 /// for description of "combo iterator". 283 class CandidateEnvMerger { 284 /// Current list of merged candidates 285 std::vector< DeferRef > crnt; 286 /// Stack of environments to support backtracking 287 std::vector< ast::TypeEnvironment > envs; 288 /// Stack of open variables to support backtracking 289 std::vector< ast::OpenVarSet > opens; 290 /// Symbol table to use for merges 291 const ast::SymbolTable & symtab; 292 293 public: 294 /// The merged environment/open variables and the list of candidates 295 struct OutType { 296 ast::TypeEnvironment env; 297 ast::OpenVarSet open; 298 std::vector< DeferRef > assns; 299 Cost cost; 300 301 OutType( 302 const ast::TypeEnvironment & e, const ast::OpenVarSet & o, 303 const std::vector< DeferRef > & as, const ast::SymbolTable & symtab ) 304 : env( e ), open( o ), assns( as ), cost( Cost::zero ) { 305 // compute combined conversion cost 306 for ( const DeferRef & assn : assns ) { 307 // compute conversion cost from satisfying decl to assertion 308 cost += computeConversionCost( 309 assn.match.adjType, assn.decl->get_type(), symtab, env ); 310 311 // mark vars+specialization on function-type assertions 312 const ast::FunctionType * func = 313 GenPoly::getFunctionType( assn.match.cdata.id->get_type() ); 314 if ( ! func ) continue; 315 316 for ( const ast::DeclWithType * param : func->params ) { 317 cost.decSpec( specCost( param->get_type() ) ); 318 } 319 320 cost.incVar( func->forall.size() ); 321 322 for ( const ast::TypeDecl * td : func->forall ) { 323 cost.decSpec( td->assertions.size() ); 324 } 325 } 326 } 327 328 bool operator< ( const OutType & o ) const { return cost < o.cost; } 329 }; 330 331 CandidateEnvMerger( 332 const ast::TypeEnvironment & env, const ast::OpenVarSet & open, 333 const ast::SymbolTable & syms ) 334 : crnt(), envs{ env }, opens{ open }, symtab( syms ) {} 335 336 bool append( DeferRef i ) { 337 ast::TypeEnvironment env = envs.back(); 338 ast::OpenVarSet open = opens.back(); 339 mergeOpenVars( open, i.match.open ); 340 341 if ( ! 
env.combine( i.match.env, open, symtab ) ) return false; 342 343 crnt.emplace_back( i ); 344 envs.emplace_back( std::move( env ) ); 345 opens.emplace_back( std::move( open ) ); 346 return true; 347 } 348 349 void backtrack() { 350 crnt.pop_back(); 351 envs.pop_back(); 352 opens.pop_back(); 353 } 354 355 OutType finalize() { return { envs.back(), opens.back(), crnt, symtab }; } 356 }; 357 358 /// Limit to depth of recursion of assertion satisfaction 359 static const int recursionLimit = 4; 360 /// Maximum number of simultaneously-deferred assertions to attempt concurrent satisfaction of 361 static const int deferLimit = 10; 362 } // anonymous namespace 363 22 364 void satisfyAssertions( 23 Candidate & alt, const ast::SymbolTable & symtab, CandidateList & out,365 CandidateRef & cand, const ast::SymbolTable & symtab, CandidateList & out, 24 366 std::vector<std::string> & errors 25 367 ) { 26 #warning unimplemented 27 (void)alt; (void)symtab; (void)out; (void)errors; 28 assert(false); 368 // finish early if no assertions to satisfy 369 if ( cand->need.empty() ) { 370 out.emplace_back( cand ); 371 return; 372 } 373 374 // build list of possible combinations of satisfying declarations 375 std::vector< SatState > sats{ SatState{ cand, symtab } }; 376 std::vector< SatState > nextSats{}; 377 378 // pruning thresholds by result type of output candidates. 379 // Candidates *should* be generated in sorted order, so no need to retroactively prune 380 PruneMap thresholds; 381 382 // satisfy assertions in breadth-first order over the recursion tree of assertion satisfaction. 383 // Stop recursion at a limited number of levels deep to avoid infinite loops. 384 for ( unsigned level = 0; level < recursionLimit; ++level ) { 385 // for each current mutually-compatible set of assertions 386 for ( SatState & sat : sats ) { 387 // stop this branch if a better option is already found 388 auto it = thresholds.find( pruneKey( *sat.cand ) ); 389 if ( it != thresholds.end() && it->second < sat.costs ) goto nextSat; 390 391 // make initial pass at matching assertions 392 for ( auto & assn : sat.need ) { 393 // fail early if any assertion is not satisfiable 394 if ( ! 
satisfyAssertion( assn, sat ) ) { 395 Indenter tabs{ 3 }; 396 std::ostringstream ss; 397 ss << tabs << "Unsatisfiable alternative:\n"; 398 print( ss, *sat.cand, ++tabs ); 399 ss << (tabs-1) << "Could not satisfy assertion:\n"; 400 ast::print( ss, assn.first, tabs ); 401 402 errors.emplace_back( ss.str() ); 403 goto nextSat; 404 } 405 } 406 407 if ( sat.deferred.empty() ) { 408 // either add successful match or push back next state 409 if ( sat.newNeed.empty() ) { 410 finalizeAssertions( 411 sat.cand, sat.inferred, thresholds, std::move( sat.costs ), out ); 412 } else { 413 nextSats.emplace_back( std::move( sat ), IterateState ); 414 } 415 } else if ( sat.deferred.size() > deferLimit ) { 416 // too many deferred assertions to attempt mutual compatibility 417 Indenter tabs{ 3 }; 418 std::ostringstream ss; 419 ss << tabs << "Unsatisfiable alternative:\n"; 420 print( ss, *sat.cand, ++tabs ); 421 ss << (tabs-1) << "Too many non-unique satisfying assignments for assertions:\n"; 422 for ( const auto & d : sat.deferred ) { 423 ast::print( ss, d.decl, tabs ); 424 } 425 426 errors.emplace_back( ss.str() ); 427 goto nextSat; 428 } else { 429 // combine deferred assertions by mutual compatibility 430 std::vector< CandidateEnvMerger::OutType > compatible = filterCombos( 431 sat.deferred, CandidateEnvMerger{ sat.cand->env, sat.cand->open, sat.symtab } ); 432 433 // fail early if no mutually-compatible assertion satisfaction 434 if ( compatible.empty() ) { 435 Indenter tabs{ 3 }; 436 std::ostringstream ss; 437 ss << tabs << "Unsatisfiable alternative:\n"; 438 print( ss, *sat.cand, ++tabs ); 439 ss << (tabs-1) << "No mutually-compatible satisfaction for assertions:\n"; 440 for ( const auto& d : sat.deferred ) { 441 ast::print( ss, d.decl, tabs ); 442 } 443 444 errors.emplace_back( ss.str() ); 445 goto nextSat; 446 } 447 448 // sort by cost (for overall pruning order) 449 std::sort( compatible.begin(), compatible.end() ); 450 451 // process mutually-compatible combinations 452 for ( auto & compat : compatible ) { 453 // set up next satisfaction state 454 CandidateRef nextCand = std::make_shared<Candidate>( 455 sat.cand->expr, std::move( compat.env ), std::move( compat.open ), 456 ast::AssertionSet{} /* need moved into satisfaction state */, 457 sat.cand->cost, sat.cand->cvtCost ); 458 459 ast::AssertionSet nextNewNeed{ sat.newNeed }; 460 InferCache nextInferred{ sat.inferred }; 461 462 CostVec nextCosts{ sat.costs }; 463 nextCosts.back() += compat.cost; 464 465 ast::SymbolTable nextSymtab{ sat.symtab }; 466 467 // add compatible assertions to new satisfaction state 468 for ( DeferRef r : compat.assns ) { 469 AssnCandidate match = r.match; 470 addToSymbolTable( match.have, nextSymtab ); 471 nextNewNeed.insert( match.need.begin(), match.need.end() ); 472 473 bindAssertion( r.decl, r.info, nextCand, match, nextInferred ); 474 } 475 476 // either add successful match or push back next state 477 if ( nextNewNeed.empty() ) { 478 finalizeAssertions( 479 nextCand, nextInferred, thresholds, std::move( nextCosts ), out ); 480 } else { 481 nextSats.emplace_back( 482 std::move( nextCand ), std::move( nextNewNeed ), 483 std::move( nextInferred ), std::move( nextCosts ), 484 std::move( nextSymtab ) ); 485 } 486 } 487 } 488 nextSat:; } 489 490 // finish or reset for next round 491 if ( nextSats.empty() ) return; 492 sats.swap( nextSats ); 493 nextSats.clear(); 494 } 495 496 // exceeded recursion limit if reaches here 497 if ( out.empty() ) { 498 SemanticError( cand->expr->location, "Too many recursive assertions" ); 499 
} 29 500 } 30 501 -
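The new satisfyAssertions above proceeds breadth-first over the recursion tree of assertion satisfaction: it stops after recursionLimit levels, prunes any branch whose mangled result type already has a cheaper satisfied candidate, defers any assertion with more than one satisfying declaration, and abandons a branch once more than deferLimit assertions are deferred. Deferred assertions are then recombined through filterCombos, with CandidateEnvMerger acting as the "combo iterator" that accepts only mutually-compatible environment merges. The sketch below is a rough, self-contained illustration of that combo-iterator protocol (append / backtrack / finalize); filterCombosSketch and ToyMerger are illustrative stand-ins that work over plain vectors, not the real interface in Common/FilterCombos.h or the real merger.

// A simplified stand-in for the combo-iterator protocol used above: one option is chosen from
// each group, and a combination survives only if the appender accepts every prefix.
#include <map>
#include <string>
#include <utility>
#include <vector>

template< typename Option, typename Filter, typename Out >
void comboRec( const std::vector< std::vector< Option > > & groups, unsigned i, Filter & filter,
		std::vector< Out > & out ) {
	if ( i == groups.size() ) { out.emplace_back( filter.finalize() ); return; }
	for ( const Option & opt : groups[ i ] ) {
		if ( ! filter.append( opt ) ) continue;   // incompatible with current prefix: prune branch
		comboRec( groups, i + 1, filter, out );
		filter.backtrack();                       // undo the append before trying the next option
	}
}

template< typename Option, typename Filter >
auto filterCombosSketch( const std::vector< std::vector< Option > > & groups, Filter filter )
		-> std::vector< decltype( filter.finalize() ) > {
	std::vector< decltype( filter.finalize() ) > out;
	comboRec( groups, 0u, filter, out );
	return out;
}

// Toy appender with the same stack-based shape as CandidateEnvMerger: each option binds a name to
// a value, and a combination is rejected as soon as two options disagree on a binding.
struct ToyMerger {
	using Option = std::pair< std::string, int >;
	std::vector< std::map< std::string, int > > envs{ {} };   // environment stack for backtracking
	std::vector< Option > crnt;                               // options accepted so far

	bool append( const Option & o ) {
		std::map< std::string, int > env = envs.back();
		auto it = env.find( o.first );
		if ( it != env.end() && it->second != o.second ) return false;   // conflicting binding
		env[ o.first ] = o.second;
		envs.emplace_back( std::move( env ) );
		crnt.emplace_back( o );
		return true;
	}
	void backtrack() { envs.pop_back(); crnt.pop_back(); }
	std::vector< Option > finalize() const { return crnt; }
};

With these stand-ins, filtering the groups { ("T",1), ("T",2) } and { ("T",2), ("U",3) } keeps three of the four pairings; the one binding "T" to both 1 and 2 is rejected at the append step, which is the analogue of the real merger rejecting a deferred assertion whose environment cannot be combined with the current candidate's.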
src/ResolvExpr/SatisfyAssertions.hpp
r1e5dedc4 r3c6e417 29 29 /// Recursively satisfies all assertions provided in a candidate; returns true if succeeds 30 30 void satisfyAssertions( 31 Candidate & alt, const ast::SymbolTable & symtab, CandidateList & out, 31 CandidateRef & cand, const ast::SymbolTable & symtab, CandidateList & out, 32 32 std::vector<std::string> & errors ); 33 33 -
src/ResolvExpr/SpecCost.cc
r1e5dedc4 r3c6e417 9 9 // Author : Aaron B. Moss 10 10 // Created On : Tue Oct 02 15:50:00 2018 11 // Last Modified By : A aron B. Moss12 // Last Modified On : Tue Oct 02 15:50:00 201813 // Update Count : 111 // Last Modified By : Andrew Beach 12 // Last Modified On : Wed Jun 19 10:43:00 2019 13 // Update Count : 2 14 14 // 15 15 16 16 #include <limits> 17 17 #include <list> 18 18 #include <type_traits> 19 20 #include "AST/Pass.hpp" 21 #include "AST/Type.hpp" 19 22 #include "Common/PassVisitor.h" 20 23 #include "SynTree/Declaration.h" … … 61 64 visit_children = false; 62 65 } 63 66 64 67 private: 65 68 // returns minimum non-negative count + 1 over type parameters (-1 if none such) … … 80 83 visit_children = false; 81 84 } 82 85 83 86 // look for polymorphic parameters 84 87 void previsit(UnionInstType* uty) { … … 111 114 return counter.pass.get_count(); 112 115 } 116 117 namespace { 118 /// The specialization counter inner class. 119 class SpecCounter : public ast::WithShortCircuiting, public ast::WithVisitorRef<SpecCounter> { 120 int count = -1; ///< specialization count (-1 for none) 121 122 // Converts the max value to -1 (none), otherwise increments the value. 123 static int toNoneOrInc( int value ) { 124 assert( 0 <= value ); 125 return value < std::numeric_limits<int>::max() ? value + 1 : -1; 126 } 127 128 template<typename T> using MapperT = 129 typename std::add_pointer<ast::Type const *(typename T::value_type const &)>::type; 130 131 // Update the minimum to the new lowest non-none value. 132 template<typename T> 133 void updateMinimumPresent( int & minimum, const T & list, MapperT<T> mapper ) { 134 for ( const auto & node : list ) { 135 count = -1; 136 mapper( node )->accept( *visitor ); 137 if ( count != -1 && count < minimum ) minimum = count; 138 } 139 } 140 141 // Returns minimum non-negative count + 1 over type parameters (-1 if none such). 142 template<typename T> 143 int minimumPresent( const T & list, MapperT<T> mapper ) { 144 int minCount = std::numeric_limits<int>::max(); 145 updateMinimumPresent( minCount, list, mapper ); 146 return toNoneOrInc( minCount ); 147 } 148 149 // The three mappers: 150 static const ast::Type * decl_type( const ast::ptr< ast::DeclWithType > & decl ) { 151 return decl->get_type(); 152 } 153 static const ast::Type * expr_result( const ast::ptr< ast::Expr > & expr ) { 154 return expr->result; 155 } 156 static const ast::Type * type_deref( const ast::ptr< ast::Type > & type ) { 157 return type.get(); 158 } 159 160 public: 161 int get_count() const { return 0 <= count ? count : 0; } 162 163 // Mark specialization of base type. 164 void postvisit( const ast::PointerType * ) { if ( count >= 0 ) ++count; } 165 void postvisit( const ast::ArrayType * ) { if ( count >= 0 ) ++count; } 166 void postvisit( const ast::ReferenceType * ) { if ( count >= 0 ) ++count; } 167 168 // Use the minimal specialization value over returns and params. 169 void previsit( const ast::FunctionType * fty ) { 170 int minCount = std::numeric_limits<int>::max(); 171 updateMinimumPresent( minCount, fty->params, decl_type ); 172 updateMinimumPresent( minCount, fty->returns, decl_type ); 173 // Add another level to minCount if set. 174 count = toNoneOrInc( minCount ); 175 // We have already visited children. 176 visit_children = false; 177 } 178 179 // Look for polymorphic parameters. 180 void previsit( const ast::StructInstType * sty ) { 181 count = minimumPresent( sty->params, expr_result ); 182 visit_children = false; 183 } 184 185 // Look for polymorphic parameters. 
186 void previsit( const ast::UnionInstType * uty ) { 187 count = minimumPresent( uty->params, expr_result ); 188 visit_children = false; 189 } 190 191 // Note polymorphic type (which may be specialized). 192 // xxx - maybe account for open/closed type variables 193 void postvisit( const ast::TypeInstType * ) { count = 0; } 194 195 // Use the minimal specialization over elements. 196 // xxx - maybe don't increment, tuple flattening doesn't necessarily specialize 197 void previsit( const ast::TupleType * tty ) { 198 count = minimumPresent( tty->types, type_deref ); 199 visit_children = false; 200 } 201 }; 202 203 } // namespace 204 205 int specCost( const ast::Type * type ) { 206 if ( nullptr == type ) { 207 return 0; 208 } 209 ast::Pass<SpecCounter> counter; 210 type->accept( *counter.pass.visitor ); 211 return counter.pass.get_count(); 212 } 213 113 214 } // namespace ResolvExpr 114 215 -
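The rewritten pass preserves the old counting scheme: a type's specialization count is the number of type constructors wrapped around its shallowest polymorphic variable, "no polymorphic variable" is tracked as -1 and collapses to 0 in get_count, and a function type takes the minimum over its parameters and returns plus one. The following is a rough standalone rendering of that recursion over a toy type representation, not the compiler's ast:: types, and it folds parameters and returns into a single list for brevity.

// Illustrative toy types only; the real pass walks the ast:: type hierarchy with ast::Pass.
#include <vector>

struct Ty {
	enum Kind { Poly, Concrete, Pointer, Function } kind;
	std::vector< Ty > sub;   // pointee for Pointer, parameter/return types for Function (C++17 for vector of incomplete element type)
};

// Depth of the shallowest polymorphic variable, or -1 if the type mentions none.
int specDepth( const Ty & t ) {
	switch ( t.kind ) {
	  case Ty::Poly: return 0;
	  case Ty::Concrete: return -1;
	  case Ty::Pointer: {
		int inner = specDepth( t.sub.front() );
		return inner < 0 ? -1 : inner + 1;          // one more constructor around the variable
	  }
	  case Ty::Function: {
		int best = -1;
		for ( const Ty & p : t.sub ) {
			int d = specDepth( p );
			if ( d >= 0 && ( best < 0 || d < best ) ) best = d;   // minimum over parameters that mention a variable
		}
		return best < 0 ? -1 : best + 1;
	  }
	}
	return -1;
}

// "None" collapses to zero, as SpecCounter::get_count does above.
int specCostSketch( const Ty & t ) {
	int d = specDepth( t );
	return d < 0 ? 0 : d;
}

Under these stand-ins a bare polymorphic variable costs 0, a pointer to one costs 1, a function taking (T*, int) costs 2, and a fully concrete type costs 0, matching the way CandidateEnvMerger above credits more-specialized assertion parameters via cost.decSpec( specCost( param->get_type() ) ).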
src/ResolvExpr/typeops.h
r1e5dedc4 r3c6e417 78 78 // in CastCost.cc 79 79 Cost castCost( Type *src, Type *dest, const SymTab::Indexer &indexer, const TypeEnvironment &env ); 80 Cost castCost( 81 const ast::Type * src, const ast::Type * dst, const ast::SymbolTable & symtab, 82 const ast::TypeEnvironment & env ); 80 83 81 84 // in ConversionCost.cc 82 85 Cost conversionCost( Type *src, Type *dest, const SymTab::Indexer &indexer, const TypeEnvironment &env ); 86 Cost conversionCost( 87 const ast::Type * src, const ast::Type * dst, const ast::SymbolTable & symtab, 88 const ast::TypeEnvironment & env ); 83 89 84 90 // in AlternativeFinder.cc … … 127 133 // in PolyCost.cc 128 134 int polyCost( Type *type, const TypeEnvironment &env, const SymTab::Indexer &indexer ); 135 int polyCost( 136 const ast::Type * type, const ast::SymbolTable & symtab, const ast::TypeEnvironment & env ); 129 137 130 138 // in SpecCost.cc 131 139 int specCost( Type *type ); 140 int specCost( const ast::Type * type ); 132 141 133 142 // in Occurs.cc … … 146 155 // in AlternativeFinder.cc 147 156 void referenceToRvalueConversion( Expression *& expr, Cost & cost ); 157 // in CandidateFinder.cpp 148 158 const ast::Expr * referenceToRvalueConversion( const ast::Expr * expr, Cost & cost ); 149 159 -
src/SymTab/Mangler.h
r1e5dedc4 r3c6e417 101 101 using Mode = bitfield<mangle_flags>; 102 102 103 static inline Mode typeMode() { return NoOverrideable | Type; } 104 103 105 /// Mangle declaration name 104 106 std::string mangle( const ast::Node * decl, Mode mode = {} ); -
src/SymTab/Validate.cc
r1e5dedc4 r3c6e417 1367 1367 return addrExpr; 1368 1368 } 1369 1370 const ast::Type * validateType( const ast::Type * type, const ast::SymbolTable & symtab ) { 1371 #warning unimplemented 1372 (void)type; (void)symtab; 1373 assert(false); 1374 return nullptr; 1375 } 1369 1376 } // namespace SymTab 1370 1377 -
src/SymTab/Validate.h
r1e5dedc4 r3c6e417 22 22 class Type; 23 23 24 namespace ast { 25 class Type; 26 class SymbolTable; 27 } 28 24 29 namespace SymTab { 25 30 class Indexer; … … 28 33 void validate( std::list< Declaration * > &translationUnit, bool doDebug = false ); 29 34 void validateType( Type *type, const Indexer *indexer ); 35 36 const ast::Type * validateType( const ast::Type * type, const ast::SymbolTable & symtab ); 30 37 } // namespace SymTab 31 38 -
src/Tuples/Tuples.cc
r1e5dedc4 r3c6e417 10 10 // Created On : Mon Jun 17 14:41:00 2019 11 11 // Last Modified By : Andrew Beach 12 // Last Modified On : Wed Jun 12 15:43:00 201913 // Update Count : 012 // Last Modified On : Tue Jun 18 9:31:00 2019 13 // Update Count : 1 14 14 // 15 15 … … 27 27 /// impure. 28 28 struct ImpurityDetector : public ast::WithShortCircuiting { 29 ImpurityDetector( bool ignoreUnique ) : ignoreUnique( ignoreUnique ) {}30 29 bool maybeImpure = false; 31 bool ignoreUnique;32 30 33 31 void previsit( ast::ApplicationExpr const * appExpr ) { 34 visit_children = false;35 32 if ( ast::DeclWithType const * function = InitTweak::getFunction( appExpr ) ) { 36 33 if ( function->linkage == ast::Linkage::Intrinsic 37 34 && ( function->name == "*?" || function->name == "?[?]" ) ) { 38 visit_children = true;39 35 return; 40 36 } 41 37 } 42 maybeImpure = true; 38 maybeImpure = true; visit_children = false; 43 39 } 44 40 void previsit( ast::UntypedExpr const * ) { 45 41 maybeImpure = true; visit_children = false; 46 42 } 43 }; 44 struct ImpurityDetectorIgnoreUnique : public ImpurityDetector { 47 45 void previsit( ast::UniqueExpr const * ) { 48 if ( ignoreUnique ) { 49 visit_children = false; 50 } 46 visit_children = false; 51 47 } 52 48 }; 53 49 54 bool detectImpurity( const ast::Expr * expr, bool ignoreUnique ) { 55 ast::Pass<ImpurityDetector> detector( ignoreUnique ); 50 template<typename Detector> 51 bool detectImpurity( const ast::Expr * expr ) { 52 ast::Pass<Detector> detector; 56 53 expr->accept( detector ); 57 54 return detector.pass.maybeImpure; … … 59 56 } // namespace 60 57 58 bool maybeImpure( const ast::Expr * expr ) { 59 return detectImpurity<ImpurityDetector>( expr ); 60 } 61 61 62 bool maybeImpureIgnoreUnique( const ast::Expr * expr ) { 62 return detectImpurity ( expr, true);63 return detectImpurity<ImpurityDetectorIgnoreUnique>( expr ); 63 64 } 64 65 -
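The change above replaces ImpurityDetector's ignoreUnique constructor flag with a derived visitor that adds one previsit overload for unique expressions, and turns detectImpurity into a template over the detector type so maybeImpure and maybeImpureIgnoreUnique share the traversal. A rough illustration of the same shape over a toy expression tree follows; the node types and the hand-written walk are stand-ins for the compiler's ast::Pass and WithShortCircuiting machinery.

#include <memory>
#include <vector>

// Toy expression nodes standing in for the compiler AST.
struct Expr { std::vector< std::unique_ptr< Expr > > kids; virtual ~Expr() = default; };
struct Call : Expr { bool intrinsicDeref = false; };   // stands in for intrinsic *? / ?[?] calls
struct Unique : Expr {};

// Base detector: any call other than an intrinsic dereference/index may be impure.
struct Detector {
	bool maybeImpure = false;
	virtual ~Detector() = default;
	virtual bool previsit( const Expr & ) { return true; }   // keep descending
	virtual bool previsit( const Call & c ) {
		if ( c.intrinsicDeref ) return true;
		maybeImpure = true;
		return false;                                        // short-circuit this subtree
	}
	virtual bool previsit( const Unique & ) { return true; }
};

// Variant that skips anything under a unique expression, mirroring ImpurityDetectorIgnoreUnique:
// one extra overload instead of a runtime mode flag.
struct DetectorIgnoreUnique : Detector {
	bool previsit( const Unique & ) override { return false; }
};

// Minimal traversal; the real code gets this from ast::Pass with WithShortCircuiting.
void walk( const Expr & e, Detector & d ) {
	bool descend;
	if ( auto call = dynamic_cast< const Call * >( &e ) ) descend = d.previsit( *call );
	else if ( auto uniq = dynamic_cast< const Unique * >( &e ) ) descend = d.previsit( *uniq );
	else descend = d.previsit( e );
	if ( ! descend ) return;
	for ( const auto & kid : e.kids ) walk( *kid, d );
}

template< typename D >
bool detectImpuritySketch( const Expr & e ) {
	D detector;
	walk( e, detector );
	return detector.maybeImpure;
}

detectImpuritySketch<Detector> then plays the role of maybeImpure and detectImpuritySketch<DetectorIgnoreUnique> the role of maybeImpureIgnoreUnique, with the ignore-unique behaviour expressed as an override rather than a runtime flag.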
src/Tuples/Tuples.h
r1e5dedc4 r3c6e417 10 10 // Created On : Mon May 18 07:44:20 2015 11 11 // Last Modified By : Andrew Beach 12 // Last Modified On : Wed Jun 12 10:39:00 2017 13 // Update Count : 17 12 // Last Modified On : Tue Jun 18 09:36:00 2019 13 // Update Count : 18 14 14 // 15 15 … … 56 56 /// returns true if the expression may contain side-effects. 57 57 bool maybeImpure( Expression * expr ); 58 bool maybeImpure( const ast::Expr * expr ); 58 59 59 60 /// Returns true if the expression may contain side-effect,