Changeset 8cb149f
- Timestamp:
- Feb 2, 2022, 2:50:46 PM
- Branches:
- ADT, ast-experimental, enum, forall-pointer-decay, master, pthread-emulation, qualifiedEnum
- Children:
- 21a99cc
- Parents:
- 4de48c5 (diff), 4e7171f (diff)
Note: this is a merge changeset; the changes displayed below correspond to the merge itself.
Use the (diff) links above to see all the changes relative to each parent.
- Files:
- 36 edited
-
doc/theses/thierry_delisle_PhD/thesis/fig/base.fig
r4de48c5 r8cb149f 12 12 1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 6900 4200 20 20 6900 4200 6920 4200 13 13 1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 6975 4200 20 20 6975 4200 6995 4200 14 -6 15 6 6375 5100 6675 5250 16 1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 6450 5175 20 20 6450 5175 6470 5175 17 1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 6525 5175 20 20 6525 5175 6545 5175 18 1 3 0 1 0 0 50 -1 20 0.000 1 0.0000 6600 5175 20 20 6600 5175 6620 5175 14 19 -6 15 20 1 3 0 1 0 7 50 -1 -1 0.000 1 0.0000 3900 2400 300 300 3900 2400 4200 2400 … … 75 80 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2 76 81 2400 2475 3000 2475 82 2 3 0 1 0 7 50 -1 -1 0.000 0 0 0 0 0 7 83 3300 5210 3150 4950 2850 4950 2700 5210 2850 5470 3150 5470 84 3300 5210 85 2 3 0 1 0 7 50 -1 -1 0.000 0 0 0 0 0 7 86 4500 5210 4350 4950 4050 4950 3900 5210 4050 5470 4350 5470 87 4500 5210 88 2 3 0 1 0 7 50 -1 -1 0.000 0 0 0 0 0 7 89 5700 5210 5550 4950 5250 4950 5100 5210 5250 5470 5550 5470 90 5700 5210 77 91 4 2 -1 50 -1 0 12 0.0000 2 135 630 2100 3075 Threads\001 78 92 4 2 -1 50 -1 0 12 0.0000 2 165 450 2100 2850 Ready\001 … … 82 96 4 1 -1 50 -1 0 11 0.0000 2 135 180 2700 3550 TS\001 83 97 4 1 -1 50 -1 0 11 0.0000 2 135 180 2700 2650 TS\001 98 4 2 -1 50 -1 0 12 0.0000 2 135 900 2100 5175 Processors\001 -
doc/theses/thierry_delisle_PhD/thesis/glossary.tex
r4de48c5 r8cb149f 40 40 41 41 \textit{Synonyms : User threads, Lightweight threads, Green threads, Virtual threads, Tasks.} 42 } 43 44 \longnewglossaryentry{rmr} 45 {name={remote memory reference}} 46 { 47 42 48 } 43 49 -
doc/theses/thierry_delisle_PhD/thesis/text/core.tex
r4de48c5 r8cb149f 49 49 50 50 \section{Design} 51 In general, a na\"{i}ve \glsxtrshort{fifo} ready-queue does not scale with increased parallelism from \glspl{hthrd}, resulting in decreased performance. The problem is adding/removing \glspl{thrd} is a single point of contention. As shown in the evaluation sections, most production schedulers do scale when adding \glspl{hthrd}. The common solution to the single point of contention is to shard the ready-queue so each \gls{hthrd} can access the ready-queue without contention, increasing performance.51 In general, a na\"{i}ve \glsxtrshort{fifo} ready-queue does not scale with increased parallelism from \glspl{hthrd}, resulting in decreased performance. The problem is adding/removing \glspl{thrd} is a single point of contention. As shown in the evaluation sections, most production schedulers do scale when adding \glspl{hthrd}. The solution to this problem is to shard the ready-queue: create multiple sub-ready-queues that multiple \glspl{hthrd} can access and modify without interfering. 52 52 53 \subsection{Sharding} \label{sec:sharding} 54 An interesting approach to sharding a queue is presented in \cit{Trevors paper}. This algorithm presents a queue with a relaxed \glsxtrshort{fifo} guarantee using an array of strictly \glsxtrshort{fifo} sublists as shown in Figure~\ref{fig:base}. Each \emph{cell} of the array has a timestamp for the last operation and a pointer to a linked-list with a lock. Each node in the list is marked with a timestamp indicating when it is added to the list. A push operation is done by picking a random cell, acquiring the list lock, and pushing to the list. If the cell is locked, the operation is simply retried on another random cell until a lock is acquired. A pop operation is done in a similar fashion except two random cells are picked. If both cells are unlocked with non-empty lists, the operation pops the node with the oldest timestamp. If one of the cells is unlocked and non-empty, the operation pops from that cell. If both cells are either locked or empty, the operation picks two new random cells and tries again. 53 Before going into the design of \CFA's scheduler proper, I want to discuss two sharding solutions which served as the inspiration for the scheduler in this thesis. 55 54 55 \subsection{Work-Stealing} 56 57 As I mentioned in \ref{existing:workstealing}, a popular pattern to shard the ready-queue is work-stealing. As mentioned, in this pattern each \gls{proc} has its own ready-queue and \glspl{proc} only access each other's ready-queue if they run out of work. 58 The interesting aspect of work-stealing happens in the easier scheduling cases, \ie enough work for everyone but no more, and no load balancing needed. In these cases, work-stealing is close to optimal scheduling: it can achieve perfect locality and have no contention. 59 On the other hand, work-stealing schedulers only attempt to do load-balancing when a \gls{proc} runs out of work. 60 This means that the scheduler may never balance unfairness that does not result in a \gls{proc} running out of work. 61 Chapter~\ref{microbench} shows that in pathological cases this problem can lead to indefinite starvation. 62 63 64 Based on these observations, I conclude that a \emph{perfect} scheduler should behave very similarly to work-stealing in the easy cases, but should have more proactive load-balancing if the need arises. 65 66 \subsection{Relaxed-Fifo} 67 An entirely different scheme is to create a ``relaxed-FIFO'' queue as in \todo{cite Trevor's paper}. 
This approach forgoes any ownership between \gls{proc} and ready-queue, and simply creates a pool of ready-queues from which the \glspl{proc} can pick. 68 \Glspl{proc} choose ready-queues at random, but timestamps are added to all elements of the queue and dequeues are done by picking two queues and dequeuing the oldest element. 69 The result is a queue that has both decent scalability and sufficient fairness. 70 The lack of ownership means that as long as one \gls{proc} is still able to repeatedly dequeue elements, it is unlikely that any element will stay on the queue for much longer than any other element. 71 This contrasts with work-stealing, where \emph{any} \gls{proc} busy for an extended period of time results in all the elements on its local queue having to wait, unless another \gls{proc} runs out of work. 72 73 An important aspect of this scheme's fairness approach is that the timestamps make it possible to evaluate how long elements have been on the queue. 74 However, another major aspect is that \glspl{proc} will eagerly search for these older elements instead of focusing on specific queues. 75 76 While the fairness of this scheme is good, it does suffer in terms of performance. 77 It requires very wide sharding, \eg at least 4 queues per \gls{hthrd}, and the randomness means locality can suffer significantly and finding non-empty queues can be difficult. 78 79 \section{\CFA} 80 The \CFA scheduler is effectively attempting to merge these two approaches, keeping the best of both. 81 It is based on the 56 82 \begin{figure} 57 83 \centering 58 84 \input{base.pstex_t} 59 \caption[ Relaxed FIFO list]{Relaxed FIFO list \smallskip\newline List at the base of the scheduler: an array of strictly FIFO lists. The timestamp is in all nodes and cell arrays.}85 \caption[Base \CFA design]{Base \CFA design \smallskip\newline A list of sub-ready queues offers the sharding, two per \glspl{proc}. However, \glspl{proc} can access any of the sub-queues.} 60 86 \label{fig:base} 61 87 \end{figure} 62 88 63 \subsection{Finding threads}64 Once threads have been distributed onto multiple queues, identifying empty queues becomes a problem. Indeed, if the number of \glspl{thrd} does not far exceed the number of queues, it is probable that several of the cell queues are empty. Figure~\ref{fig:empty} shows an example with 2 \glspl{thrd} running on 8 queues, where the chances of getting an empty queue is 75\% per pick, meaning two random picks yield a \gls{thrd} only half the time. This scenario leads to performance problems since picks that do not yield a \gls{thrd} are not useful and do not necessarily help make more informed guesses. 65 89 66 \begin{figure}67 \centering68 \input{empty.pstex_t}69 \caption[``More empty'' Relaxed FIFO list]{``More empty'' Relaxed FIFO list \smallskip\newline Emptier state of the queue: the array contains many empty cells, that is strictly FIFO lists containing no elements.}70 \label{fig:empty}71 \end{figure}72 90 73 There are several solutions to this problem, but they ultimately all have to encode if a cell has an empty list. My results show the density and locality of this encoding is generally the dominating factor in these scheme. Classic solutions to this problem use one of three techniques to encode the information: 91 % The common solution to the single point of contention is to shard the ready-queue so each \gls{hthrd} can access the ready-queue without contention, increasing performance. 
74 92 75 \paragraph{Dense Information} Figure~\ref{fig:emptybit} shows a dense bitmask to identify the cell queues currently in use. This approach means processors can often find \glspl{thrd} in constant time, regardless of how many underlying queues are empty. Furthermore, modern x86 CPUs have extended bit manipulation instructions (BMI2) that allow searching the bitmask with very little overhead compared to the randomized selection approach for a filled ready queue, offering good performance even in cases with many empty inner queues. However, this technique has its limits: with a single word\footnote{Word refers here to however many bits can be written atomically.} bitmask, the total amount of ready-queue sharding is limited to the number of bits in the word. With a multi-word bitmask, this maximum limit can be increased arbitrarily, but the look-up time increases. Finally, a dense bitmap, either single or multi-word, causes additional contention problems that reduces performance because of cache misses after updates. This central update bottleneck also means the information in the bitmask is more often stale before a processor can use it to find an item, \ie mask read says there are available \glspl{thrd} but none on queue when the subsequent atomic check is done. 93 % \subsection{Sharding} \label{sec:sharding} 94 % An interesting approach to sharding a queue is presented in \cit{Trevors paper}. This algorithm presents a queue with a relaxed \glsxtrshort{fifo} guarantee using an array of strictly \glsxtrshort{fifo} sublists as shown in Figure~\ref{fig:base}. Each \emph{cell} of the array has a timestamp for the last operation and a pointer to a linked-list with a lock. Each node in the list is marked with a timestamp indicating when it is added to the list. A push operation is done by picking a random cell, acquiring the list lock, and pushing to the list. If the cell is locked, the operation is simply retried on another random cell until a lock is acquired. A pop operation is done in a similar fashion except two random cells are picked. If both cells are unlocked with non-empty lists, the operation pops the node with the oldest timestamp. If one of the cells is unlocked and non-empty, the operation pops from that cell. If both cells are either locked or empty, the operation picks two new random cells and tries again. 76 95 77 \begin{figure} 78 \centering 79 \vspace*{-5pt} 80 {\resizebox{0.75\textwidth}{!}{\input{emptybit.pstex_t}}} 81 \vspace*{-5pt} 82 \caption[Underloaded queue with bitmask]{Underloaded queue with bitmask indicating array cells with items.} 83 \label{fig:emptybit} 96 % \begin{figure} 97 % \centering 98 % \input{base.pstex_t} 99 % \caption[Relaxed FIFO list]{Relaxed FIFO list \smallskip\newline List at the base of the scheduler: an array of strictly FIFO lists. The timestamp is in all nodes and cell arrays.} 100 % \label{fig:base} 101 % \end{figure} 84 102 85 \vspace*{10pt} 86 {\resizebox{0.75\textwidth}{!}{\input{emptytree.pstex_t}}} 87 \vspace*{-5pt} 88 \caption[Underloaded queue with binary search-tree]{Underloaded queue with binary search-tree indicating array cells with items.} 89 \label{fig:emptytree} 103 % \subsection{Finding threads} 104 % Once threads have been distributed onto multiple queues, identifying empty queues becomes a problem. Indeed, if the number of \glspl{thrd} does not far exceed the number of queues, it is probable that several of the cell queues are empty. 
Figure~\ref{fig:empty} shows an example with 2 \glspl{thrd} running on 8 queues, where the chances of getting an empty queue is 75\% per pick, meaning two random picks yield a \gls{thrd} only half the time. This scenario leads to performance problems since picks that do not yield a \gls{thrd} are not useful and do not necessarily help make more informed guesses. 90 105 91 \vspace*{10pt}92 {\resizebox{0.95\textwidth}{!}{\input{emptytls.pstex_t}}} 93 \vspace*{-5pt}94 \caption[Underloaded queue with per processor bitmask]{Underloaded queue with per processor bitmask indicating array cells with items.}95 \label{fig:emptytls}96 \end{figure}106 % \begin{figure} 107 % \centering 108 % \input{empty.pstex_t} 109 % \caption[``More empty'' Relaxed FIFO list]{``More empty'' Relaxed FIFO list \smallskip\newline Emptier state of the queue: the array contains many empty cells, that is strictly FIFO lists containing no elements.} 110 % \label{fig:empty} 111 % \end{figure} 97 112 98 \paragraph{Sparse Information} Figure~\ref{fig:emptytree} shows an approach using a hierarchical tree data-structure to reduce contention and has been shown to work in similar cases~\cite{ellen2007snzi}. However, this approach may lead to poorer performance due to the inherent pointer chasing cost while still allowing significant contention on the nodes of the tree if the tree is shallow. 113 % There are several solutions to this problem, but they ultimately all have to encode if a cell has an empty list. My results show the density and locality of this encoding is generally the dominating factor in these scheme. Classic solutions to this problem use one of three techniques to encode the information: 99 114 100 \paragraph{Local Information} Figure~\ref{fig:emptytls} shows an approach using dense information, similar to the bitmap, but each \gls{hthrd} keeps its own independent copy. While this approach can offer good scalability \emph{and} low latency, the liveliness and discovery of the information can become a problem. This case is made worst in systems with few processors where even blind random picks can find \glspl{thrd} in a few tries.115 % \paragraph{Dense Information} Figure~\ref{fig:emptybit} shows a dense bitmask to identify the cell queues currently in use. This approach means processors can often find \glspl{thrd} in constant time, regardless of how many underlying queues are empty. Furthermore, modern x86 CPUs have extended bit manipulation instructions (BMI2) that allow searching the bitmask with very little overhead compared to the randomized selection approach for a filled ready queue, offering good performance even in cases with many empty inner queues. However, this technique has its limits: with a single word\footnote{Word refers here to however many bits can be written atomically.} bitmask, the total amount of ready-queue sharding is limited to the number of bits in the word. With a multi-word bitmask, this maximum limit can be increased arbitrarily, but the look-up time increases. Finally, a dense bitmap, either single or multi-word, causes additional contention problems that reduces performance because of cache misses after updates. This central update bottleneck also means the information in the bitmask is more often stale before a processor can use it to find an item, \ie mask read says there are available \glspl{thrd} but none on queue when the subsequent atomic check is done. 
101 116 102 I built a prototype of these approaches and none of these techniques offer satisfying performance when few threads are present. All of these approach hit the same 2 problems. First, randomly picking sub-queues is very fast. That speed means any improvement to the hit rate can easily be countered by a slow-down in look-up speed, whether or not there are empty lists. Second, the array is already sharded to avoid contention bottlenecks, so any denser data structure tends to become a bottleneck. In all cases, these factors meant the best cases scenario, \ie many threads, would get worst throughput, and the worst-case scenario, few threads, would get a better hit rate, but an equivalent poor throughput. As a result I tried an entirely different approach. 117 % \begin{figure} 118 % \centering 119 % \vspace*{-5pt} 120 % {\resizebox{0.75\textwidth}{!}{\input{emptybit.pstex_t}}} 121 % \vspace*{-5pt} 122 % \caption[Underloaded queue with bitmask]{Underloaded queue with bitmask indicating array cells with items.} 123 % \label{fig:emptybit} 103 124 104 \subsection{Dynamic Entropy}\cit{https://xkcd.com/2318/} 105 In the worst-case scenario there are only few \glspl{thrd} ready to run, or more precisely given $P$ \glspl{proc}\footnote{For simplicity, this assumes there is a one-to-one match between \glspl{proc} and \glspl{hthrd}.}, $T$ \glspl{thrd} and $\epsilon$ a very small number, than the worst case scenario can be represented by $T = P + \epsilon$, with $\epsilon \ll P$. It is important to note in this case that fairness is effectively irrelevant. Indeed, this case is close to \emph{actually matching} the model of the ``Ideal multi-tasking CPU'' on page \pageref{q:LinuxCFS}. In this context, it is possible to use a purely internal-locality based approach and still meet the fairness requirements. This approach simply has each \gls{proc} running a single \gls{thrd} repeatedly. Or from the shared ready-queue viewpoint, each \gls{proc} pushes to a given sub-queue and then pops from the \emph{same} subqueue. The challenge is for the the scheduler to achieve good performance in both the $T = P + \epsilon$ case and the $T \gg P$ case, without affecting the fairness guarantees in the later. 125 % \vspace*{10pt} 126 % {\resizebox{0.75\textwidth}{!}{\input{emptytree.pstex_t}}} 127 % \vspace*{-5pt} 128 % \caption[Underloaded queue with binary search-tree]{Underloaded queue with binary search-tree indicating array cells with items.} 129 % \label{fig:emptytree} 106 130 107 To handle this case, I use a \glsxtrshort{prng}\todo{Fix missing long form} in a novel way. There exist \glsxtrshort{prng}s that are fast, compact and can be run forward \emph{and} backwards. Linear congruential generators~\cite{wiki:lcg} are an example of \glsxtrshort{prng}s of such \glsxtrshort{prng}s. The novel approach is to use the ability to run backwards to ``replay'' the \glsxtrshort{prng}. The scheduler uses an exclusive \glsxtrshort{prng} instance per \gls{proc}, the random-number seed effectively starts an encoding that produces a list of all accessed subqueues, from latest to oldest. Replaying the \glsxtrshort{prng} to identify cells accessed recently and which probably have data still cached. 
131 % \vspace*{10pt} 132 % {\resizebox{0.95\textwidth}{!}{\input{emptytls.pstex_t}}} 133 % \vspace*{-5pt} 134 % \caption[Underloaded queue with per processor bitmask]{Underloaded queue with per processor bitmask indicating array cells with items.} 135 % \label{fig:emptytls} 136 % \end{figure} 108 137 109 The algorithm works as follows: 110 \begin{itemize} 111 \item Each \gls{proc} has two \glsxtrshort{prng} instances, $F$ and $B$. 112 \item Push and Pop operations occur as discussed in Section~\ref{sec:sharding} with the following exceptions: 113 \begin{itemize} 114 \item Push operations use $F$ going forward on each try and on success $F$ is copied into $B$. 115 \item Pop operations use $B$ going backwards on each try. 116 \end{itemize} 117 \end{itemize} 138 % \paragraph{Sparse Information} Figure~\ref{fig:emptytree} shows an approach using a hierarchical tree data-structure to reduce contention and has been shown to work in similar cases~\cite{ellen2007snzi}. However, this approach may lead to poorer performance due to the inherent pointer chasing cost while still allowing significant contention on the nodes of the tree if the tree is shallow. 118 139 119 The main benefit of this technique is that it basically respects the desired properties of Figure~\ref{fig:fair}. When looking for work, a \gls{proc} first looks at the last cell they pushed to, if any, and then move backwards through its accessed cells. As the \gls{proc} continues looking for work, $F$ moves backwards and $B$ stays in place. As a result, the relation between the two becomes weaker, which means that the probablisitic fairness of the algorithm reverts to normal. Chapter~\ref{proofs} discusses more formally the fairness guarantees of this algorithm.140 % \paragraph{Local Information} Figure~\ref{fig:emptytls} shows an approach using dense information, similar to the bitmap, but each \gls{hthrd} keeps its own independent copy. While this approach can offer good scalability \emph{and} low latency, the liveliness and discovery of the information can become a problem. This case is made worst in systems with few processors where even blind random picks can find \glspl{thrd} in a few tries. 120 141 121 \section{Details} 142 % I built a prototype of these approaches and none of these techniques offer satisfying performance when few threads are present. All of these approach hit the same 2 problems. First, randomly picking sub-queues is very fast. That speed means any improvement to the hit rate can easily be countered by a slow-down in look-up speed, whether or not there are empty lists. Second, the array is already sharded to avoid contention bottlenecks, so any denser data structure tends to become a bottleneck. In all cases, these factors meant the best cases scenario, \ie many threads, would get worst throughput, and the worst-case scenario, few threads, would get a better hit rate, but an equivalent poor throughput. As a result I tried an entirely different approach. 143 144 % \subsection{Dynamic Entropy}\cit{https://xkcd.com/2318/} 145 % In the worst-case scenario there are only few \glspl{thrd} ready to run, or more precisely given $P$ \glspl{proc}\footnote{For simplicity, this assumes there is a one-to-one match between \glspl{proc} and \glspl{hthrd}.}, $T$ \glspl{thrd} and $\epsilon$ a very small number, than the worst case scenario can be represented by $T = P + \epsilon$, with $\epsilon \ll P$. It is important to note in this case that fairness is effectively irrelevant. 
Indeed, this case is close to \emph{actually matching} the model of the ``Ideal multi-tasking CPU'' on page \pageref{q:LinuxCFS}. In this context, it is possible to use a purely internal-locality based approach and still meet the fairness requirements. This approach simply has each \gls{proc} running a single \gls{thrd} repeatedly. Or from the shared ready-queue viewpoint, each \gls{proc} pushes to a given sub-queue and then pops from the \emph{same} subqueue. The challenge is for the the scheduler to achieve good performance in both the $T = P + \epsilon$ case and the $T \gg P$ case, without affecting the fairness guarantees in the later. 146 147 % To handle this case, I use a \glsxtrshort{prng}\todo{Fix missing long form} in a novel way. There exist \glsxtrshort{prng}s that are fast, compact and can be run forward \emph{and} backwards. Linear congruential generators~\cite{wiki:lcg} are an example of \glsxtrshort{prng}s of such \glsxtrshort{prng}s. The novel approach is to use the ability to run backwards to ``replay'' the \glsxtrshort{prng}. The scheduler uses an exclusive \glsxtrshort{prng} instance per \gls{proc}, the random-number seed effectively starts an encoding that produces a list of all accessed subqueues, from latest to oldest. Replaying the \glsxtrshort{prng} to identify cells accessed recently and which probably have data still cached. 148 149 % The algorithm works as follows: 150 % \begin{itemize} 151 % \item Each \gls{proc} has two \glsxtrshort{prng} instances, $F$ and $B$. 152 % \item Push and Pop operations occur as discussed in Section~\ref{sec:sharding} with the following exceptions: 153 % \begin{itemize} 154 % \item Push operations use $F$ going forward on each try and on success $F$ is copied into $B$. 155 % \item Pop operations use $B$ going backwards on each try. 156 % \end{itemize} 157 % \end{itemize} 158 159 % The main benefit of this technique is that it basically respects the desired properties of Figure~\ref{fig:fair}. When looking for work, a \gls{proc} first looks at the last cell they pushed to, if any, and then move backwards through its accessed cells. As the \gls{proc} continues looking for work, $F$ moves backwards and $B$ stays in place. As a result, the relation between the two becomes weaker, which means that the probablisitic fairness of the algorithm reverts to normal. Chapter~\ref{proofs} discusses more formally the fairness guarantees of this algorithm. 160 161 % \section{Details} -
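A minimal sketch of the relaxed-FIFO discipline described in the core.tex changes above (push to one random cell; pop the older head of two random cells). This is illustrative C++ only, not the \CFA runtime: the per-cell std::mutex locking, the global atomic counter standing in for the per-node timestamps, and all names are assumptions.

// Illustrative sketch of a relaxed-FIFO queue: an array of locked,
// strictly-FIFO sublists with timestamped nodes. NOT the CFA code.
#include <atomic>
#include <cstdint>
#include <deque>
#include <mutex>
#include <optional>
#include <random>
#include <utility>
#include <vector>

struct Cell {
    std::mutex lock;
    std::deque<std::pair<uint64_t, int>> list;   // (timestamp, element)
};

class RelaxedFifo {
    std::vector<Cell> cells;
    std::atomic<uint64_t> clock{0};              // stand-in for timestamps
    std::mt19937 rng{std::random_device{}()};
    size_t pick() { return rng() % cells.size(); }
public:
    explicit RelaxedFifo(size_t n) : cells(n) {}

    void push(int value) {
        for (;;) {                               // retry until some cell's lock is acquired
            Cell & c = cells[pick()];
            if (!c.lock.try_lock()) continue;    // cell busy: try another random cell
            c.list.emplace_back(clock++, value);
            c.lock.unlock();
            return;
        }
    }

    // Pop the element with the older timestamp among two random cells.
    std::optional<int> pop() {
        for (;;) {
            size_t i = pick(), j = pick();
            if (i == j) continue;                // need two distinct cells
            Cell & a = cells[i];
            Cell & b = cells[j];
            bool la = a.lock.try_lock();
            bool lb = b.lock.try_lock();
            Cell * victim = nullptr;
            if (la && !a.list.empty()) victim = &a;
            if (lb && !b.list.empty() &&
                (!victim || b.list.front().first < victim->list.front().first))
                victim = &b;
            std::optional<int> out;
            if (victim) {
                out = victim->list.front().second;
                victim->list.pop_front();
            }
            if (la) a.lock.unlock();
            if (lb) b.lock.unlock();
            if (out) return out;
            // both cells locked or empty: pick two new cells and retry
            // (a real scheduler would eventually idle instead of spinning)
        }
    }
};

Note that pop spins when all cells are empty; identifying non-empty sub-queues cheaply is exactly the "finding threads" problem discussed in the commented-out text above.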
doc/theses/thierry_delisle_PhD/thesis/text/eval_macro.tex
r4de48c5 r8cb149f 4 4 5 5 In Memory Plain Text 6 7 Networked Plain Text8 6 9 7 Networked ZIPF -
doc/theses/thierry_delisle_PhD/thesis/text/eval_micro.tex
r4de48c5 r8cb149f 2 2 3 3 The first step of evaluation is always to test out small controlled cases, to ensure that the basics are working properly. 4 This section presents four different experimental setups, evaluating some of the basic features of \CFA's scheduler.4 This section presents five different experimental setups, evaluating some of the basic features of \CFA's scheduler. 5 5 6 6 \section{Cycling latency} 7 7 The most basic evaluation of any ready queue is to evaluate the latency needed to push and pop one element from the ready-queue. 8 While these two operations also describe a \texttt{yield} operation, many systems use this as the most basic benchmark.9 However, yielding can be treated as a special case, since it also carries the information that the length of the ready queue will not change.8 Since these two operations also describe a \texttt{yield} operation, many systems use this as the most basic benchmark. 9 However, yielding can be treated as a special case, since it also carries the information that the number of ready \glspl{at} will not change. 10 10 Not all systems use this information, but those which do may appear to have better performance than they would for disconnected push/pop pairs. 11 11 For this reason, I chose a different first benchmark, which I call the Cycle Benchmark. 12 This benchmark arranges many threads into multiple rings of threads.12 This benchmark arranges many \glspl{at} into multiple rings of \glspl{at}. 13 13 Each ring is effectively a circular singly-linked list. 14 At runtime, each thread unparks the next thread before parking itself.14 At runtime, each \gls{at} unparks the next \gls{at} before parking itself. 15 15 This corresponds to the desired pair of ready queue operations. 16 Unparking the next thread requires pushing that thread onto the ready queue and the ensuing park will cause the runtime to pop a thread from the ready-queue.16 Unparking the next \gls{at} requires pushing that \gls{at} onto the ready queue and the ensuing park will cause the runtime to pop a \gls{at} from the ready-queue. 17 17 Figure~\ref{fig:cycle} shows a visual representation of this arrangement. 18 18 19 The goal of this ring is that the underlying runtime cannot rely on the guarantee that the number of ready threads will stay constant over the duration of the experiment. 20 In fact, the total number of threads waiting on the ready queue is expected to vary a little because of the race between the next thread unparking and the current thread parking. 21 The size of the cycle is also decided based on this race: cycles that are too small may see the 22 chain of unparks go full circle before the first thread can park. 19 The goal of this ring is that the underlying runtime cannot rely on the guarantee that the number of ready \glspl{at} will stay constant over the duration of the experiment. 20 In fact, the total number of \glspl{at} waiting on the ready queue is expected to vary because of the race between the next \gls{at} unparking and the current \gls{at} parking. 21 The size of the cycle is also decided based on this race: cycles that are too small may see the chain of unparks go full circle before the first \gls{at} can park. 23 22 While this would not be a correctness problem (every runtime system must handle that race), it could lead to pushes and pops being optimized away. 24 Since silently omitting ready-queue operations would throw off the measuring of these operations. 
25 Therefore the ring of threads must be big enough so the threads have the time to fully park before they are unparked. 23 Since silently omitting ready-queue operations would throw off the measuring of these operations, the ring of \glspl{at} must be big enough so the \glspl{at} have the time to fully park before they are unparked. 26 24 Note that this problem is only present on SMP machines and is significantly mitigated by the fact that there are multiple rings in the system. … … 29 27 \centering 30 28 \input{cycle.pstex_t} 31 \caption[Cycle benchmark]{Cycle benchmark\smallskip\newline Each thread unparks the next thread in the cycle before parking itself.}29 \caption[Cycle benchmark]{Cycle benchmark\smallskip\newline Each \gls{at} unparks the next \gls{at} in the cycle before parking itself.} 32 30 \label{fig:cycle} 33 31 \end{figure} 34 32 35 33 \todo{check term ``idle sleep handling''} 36 To avoid this benchmark from being dominated by the idle sleep handling, the number of rings is kept at least as high as the number of processors available.34 To avoid this benchmark being dominated by the idle sleep handling, the number of rings is kept at least as high as the number of \glspl{proc} available. 37 35 Beyond this point, adding more rings serves to mitigate the idle sleep handling even more. 38 This is to avoid the case where one of the worker threads runs out of work because of the variation on the number of ready threads mentioned above.36 This is to avoid the case where one of the worker \glspl{at} runs out of work because of the variation in the number of ready \glspl{at} mentioned above. 39 37 40 38 The actual benchmark is more complicated to handle termination, but that simply requires using a binary semaphore or a channel instead of raw \texttt{park}/\texttt{unpark} and carefully picking the order of the \texttt{P} and \texttt{V} with respect to the loop condition. 41 39 42 \todo{mention where to get the code.} 40 \todo{code, setup, results} 41 \begin{lstlisting} 42 Thread.main() { 43 count := 0 44 for { 45 wait() 46 this.next.wake() 47 count ++ 48 if must_stop() { break } 49 } 50 global.count += count 51 } 52 \end{lstlisting} 53 43 54 44 \section{Yield} 45 56 For completeness, I also include the yield benchmark. 46 This benchmark is much simpler than the cycle tests, it simply creates many threads that call \texttt{yield}. 57 This benchmark is much simpler than the cycle tests: it simply creates many \glspl{at} that call \texttt{yield}. 58 As mentioned in the previous section, this benchmark may be less representative of usages that only make limited use of \texttt{yield}, due to potential shortcuts in the routine. 59 Its only interesting variable is the number of \glspl{at} per \glspl{proc}, where ratios close to 1 mean the ready queue(s) could be empty. 60 This sometimes puts more strain on the idle sleep handling, compared to scenarios where there is clearly plenty of work to be done. 61 62 \todo{code, setup, results} 63 64 \begin{lstlisting} 65 Thread.main() { 66 count := 0 67 while !stop { 68 yield() 69 count ++ 70 } 71 global.count += count 72 } 73 \end{lstlisting} 74 75 76 \section{Churn} 77 The Cycle and Yield benchmarks represent an ``easy'' scenario for a scheduler, \eg, an embarrassingly parallel application. 78 In these benchmarks, \glspl{at} can be easily partitioned over the different \glspl{proc} up-front and none of the \glspl{at} communicate with each other. 
79 80 The Churn benchmark represents more chaotic usages, where there is no relation between the last \gls{proc} on which a \gls{at} ran and the \gls{proc} that unblocked it. 81 When a \gls{at} is unblocked from a different \gls{proc} than the one on which it last ran, the unblocking \gls{proc} must either ``steal'' the \gls{at} or place it on a remote queue. 82 This can result in either contention on the remote queue or \glspl{rmr} on the \gls{at} data structure. 83 In either case, this benchmark aims to highlight how each scheduler handles these cases, since both cases can lead to performance degradation if they are not handled correctly. 84 85 To achieve this, the benchmark uses a fixed-size array of \newterm{chair}s, where a chair is a data structure that holds a single blocked \gls{at}. 86 When a \gls{at} attempts to block on the chair, it must first unblock the \gls{at} currently blocked on said chair, if any. 87 This creates a flow where \glspl{at} push each other out of the chairs before being pushed out themselves. 88 For this benchmark to work however, the number of \glspl{at} must be equal to or greater than the number of chairs plus the number of \glspl{proc}. 89 90 \todo{code, setup, results} 91 \begin{lstlisting} 92 Thread.main() { 93 count := 0 94 for { 95 r := random() % len(spots) 96 next := xchg(spots[r], this) 97 if next { next.wake() } 98 wait() 99 count ++ 100 if must_stop() { break } 101 } 102 global.count += count 103 } 104 \end{lstlisting} 47 105 48 106 \section{Locality} 49 107 108 \todo{code, setup, results} 109 50 110 \section{Transfer} 111 The last benchmark is more exactly characterized as an experiment than a benchmark. 112 It tests the behavior of the schedulers for a particularly misbehaved workload. 113 In this workload, one of the \glspl{at} is selected at random to be the leader. 114 The leader then spins in a tight loop until it has observed that all other \glspl{at} have acknowledged its leadership. 115 The leader \gls{at} then picks a new \gls{at} to be the ``spinner'' and the cycle repeats. 116 117 The benchmark comes in two flavours for the behavior of the non-leader \glspl{at}: 118 once they acknowledged the leader, they either block on a semaphore or yield repeatedly. 119 120 This experiment is designed to evaluate the short-term load balancing of the scheduler. 121 Indeed, schedulers where the runnable \glspl{at} are partitioned on the \glspl{proc} may need to balance the \glspl{at} for this experiment to terminate. 122 This is because the spinning \gls{at} is effectively preventing the \gls{proc} from running any other \glspl{thrd}. 123 In the semaphore flavour, the number of runnable \glspl{at} will eventually dwindle down to only the leader. 124 This is a simpler case to handle for schedulers since \glspl{proc} eventually run out of work. 125 In the yielding flavour, the number of runnable \glspl{at} stays constant. 126 This is a harder case to handle because corrective measures must be taken even if work is still available. 127 Note that languages that have mandatory preemption do circumvent this problem by forcing the spinner to yield. 
128 129 \todo{code, setup, results} 130 \begin{lstlisting} 131 Thread.lead() { 132 this.idx_seen = ++lead_idx 133 if lead_idx > stop_idx { 134 done := true 135 return 136 } 137 138 // Wait for everyone to acknowledge my leadership 139 start := timeNow() 140 for t in threads { 141 while t.idx_seen != lead_idx { 142 asm pause 143 if (timeNow() - start) > 5 seconds { error() } 144 } 145 } 146 147 // pick next leader 148 leader := threads[ prng() % len(threads) ] 149 150 // wake everyone 151 if !exhaust { 152 for t in threads { 153 if t != me { t.wake() } 154 } 155 } 156 } 157 158 Thread.wait() { 159 this.idx_seen := lead_idx 160 if exhaust { wait() } 161 else { yield() } 162 } 163 164 Thread.main() { 165 while !done { 166 if leader == me { this.lead() } 167 else { this.wait() } 168 } 169 } 170 \end{lstlisting} -
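The Churn benchmark's chair exchange above is pseudocode; the following is a compilable C++20 rendering of the same inner step. The std::binary_semaphore park/unpark and all names are assumptions standing in for the runtime primitives used by the actual benchmark.

// Illustrative C++20 rendering of the Churn benchmark's chair exchange;
// std::binary_semaphore stands in for the runtime's park/unpark.
#include <atomic>
#include <random>
#include <semaphore>
#include <vector>

struct Thread {
    std::binary_semaphore sem{0};
    void wake() { sem.release(); }   // unpark
    void wait() { sem.acquire(); }   // park
};

// One blocked thread (or nullptr) per chair; value-initialized to nullptr.
std::vector<std::atomic<Thread *>> spots(32);

void churn_iteration(Thread & self, std::mt19937 & rng) {
    size_t r = rng() % spots.size();
    // Atomically take the chair, pushing out whoever was blocked on it.
    Thread * next = spots[r].exchange(&self);
    if (next) next->wake();          // unblock the previous occupant
    self.wait();                     // block until pushed out in turn
}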
doc/theses/thierry_delisle_PhD/thesis/text/existing.tex
r4de48c5 r8cb149f 33 33 34 34 35 \section{Work Stealing} 35 \section{Work Stealing}\label{existing:workstealing} 36 36 One of the most popular scheduling algorithms in practice (see~\ref{existing:prod}) is work-stealing. This idea, introduced by \cite{DBLP:conf/fpca/BurtonS81}, effectively has each worker work on its local tasks first, but allows the possibility for other workers to steal local tasks if they run out of tasks. \cite{DBLP:conf/focs/Blumofe94} introduced the more familiar incarnation of this, where each worker has a queue of tasks to accomplish and workers without tasks steal tasks from random workers. (The Burton and Sleep algorithm had trees of tasks and stole only among neighbours.) Blumofe and Leiserson also prove worst-case space and time requirements for well-structured computations. 37 37 -
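The "local tasks first, steal from a random worker otherwise" rule described in existing.tex can be sketched compactly. Mutex-guarded deques are a simplifying assumption here; production work-stealers use lock-free deques (e.g. Chase-Lev), and the Blumofe-Leiserson bounds apply to their specific protocol, not to this sketch.

// Minimal "work on local tasks first, steal from a random victim" rule.
#include <deque>
#include <functional>
#include <mutex>
#include <optional>
#include <random>
#include <vector>

struct Worker {
    std::mutex lock;
    std::deque<std::function<void()>> tasks;
};

std::optional<std::function<void()>> next_task(
        std::vector<Worker> & workers, size_t self, std::mt19937 & rng) {
    {   // 1. local tasks first: pop the LIFO end for locality
        Worker & me = workers[self];
        std::lock_guard<std::mutex> g(me.lock);
        if (!me.tasks.empty()) {
            auto task = std::move(me.tasks.back());
            me.tasks.pop_back();
            return task;
        }
    }
    // 2. out of local work: steal the oldest task of a random victim
    size_t victim = rng() % workers.size();
    if (victim == self) return std::nullopt;       // caller retries
    Worker & v = workers[victim];
    std::lock_guard<std::mutex> g(v.lock);
    if (v.tasks.empty()) return std::nullopt;
    auto task = std::move(v.tasks.front());        // steal the FIFO end
    v.tasks.pop_front();
    return task;
}

Popping the LIFO end locally while stealing the FIFO end is the usual design choice: the owner reuses cache-hot work while thieves take the task most likely to be large and cold.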
src/AST/Convert.cpp
r4de48c5 r8cb149f 9 9 // Author : Thierry Delisle 10 10 // Created On : Thu May 09 15::37::05 2019 11 // Last Modified By : Andrew Beach12 // Last Modified On : Wed Jul 14 16:15:00 202113 // Update Count : 3711 // Last Modified By : Peter A. Buhr 12 // Last Modified On : Wed Feb 2 13:19:22 2022 13 // Update Count : 41 14 14 // 15 15 … … 393 393 auto stmt = new IfStmt( 394 394 get<Expression>().accept1( node->cond ), 395 get<Statement>().accept1( node->then Part),396 get<Statement>().accept1( node->else Part),395 get<Statement>().accept1( node->then ), 396 get<Statement>().accept1( node->else_ ), 397 397 get<Statement>().acceptL( node->inits ) 398 398 ); … … 419 419 } 420 420 421 const ast::Stmt * visit( const ast::While Stmt * node ) override final {421 const ast::Stmt * visit( const ast::WhileDoStmt * node ) override final { 422 422 if ( inCache( node ) ) return nullptr; 423 423 auto inits = get<Statement>().acceptL( node->inits ); 424 auto stmt = new While Stmt(424 auto stmt = new WhileDoStmt( 425 425 get<Expression>().accept1( node->cond ), 426 426 get<Statement>().accept1( node->body ), 427 get<Statement>().accept1( node->else_ ), 427 428 inits, 428 429 node->isDoWhile … … 437 438 get<Expression>().accept1( node->cond ), 438 439 get<Expression>().accept1( node->inc ), 439 get<Statement>().accept1( node->body ) 440 get<Statement>().accept1( node->body ), 441 get<Statement>().accept1( node->else_ ) 440 442 ); 441 443 return stmtPostamble( stmt, node ); … … 1872 1874 old->location, 1873 1875 GET_ACCEPT_1(condition, Expr), 1874 GET_ACCEPT_1(then Part, Stmt),1875 GET_ACCEPT_1(else Part, Stmt),1876 GET_ACCEPT_1(then, Stmt), 1877 GET_ACCEPT_1(else_, Stmt), 1876 1878 GET_ACCEPT_V(initialization, Stmt), 1877 1879 GET_LABELS_V(old->labels) … … 1902 1904 } 1903 1905 1904 virtual void visit( const While Stmt * old ) override final {1906 virtual void visit( const WhileDoStmt * old ) override final { 1905 1907 if ( inCache( old ) ) return; 1906 this->node = new ast::While Stmt(1908 this->node = new ast::WhileDoStmt( 1907 1909 old->location, 1908 1910 GET_ACCEPT_1(condition, Expr), 1909 1911 GET_ACCEPT_1(body, Stmt), 1912 GET_ACCEPT_1(else_, Stmt), 1910 1913 GET_ACCEPT_V(initialization, Stmt), 1911 1914 old->isDoWhile, … … 1923 1926 GET_ACCEPT_1(increment, Expr), 1924 1927 GET_ACCEPT_1(body, Stmt), 1928 GET_ACCEPT_1(else_, Stmt), 1925 1929 GET_LABELS_V(old->labels) 1926 1930 ); -
src/AST/Fwd.hpp
r4de48c5 r8cb149f 10 10 // Created On : Wed May 8 16:05:00 2019 11 11 // Last Modified By : Peter A. Buhr 12 // Last Modified On : Fri Mar 12 18:37:39 202113 // Update Count : 412 // Last Modified On : Tue Feb 1 09:08:33 2022 13 // Update Count : 5 14 14 // 15 15 … … 44 44 class DirectiveStmt; 45 45 class IfStmt; 46 class While Stmt;46 class WhileDoStmt; 47 47 class ForStmt; 48 48 class SwitchStmt; -
src/AST/Node.cpp
r4de48c5 r8cb149f 10 10 // Created On : Thu May 16 14:16:00 2019 11 11 // Last Modified By : Peter A. Buhr 12 // Last Modified On : Fri Mar 12 18:25:06 202113 // Update Count : 212 // Last Modified On : Tue Feb 1 09:09:39 2022 13 // Update Count : 3 14 14 // 15 15 … … 146 146 template class ast::ptr_base< ast::IfStmt, ast::Node::ref_type::weak >; 147 147 template class ast::ptr_base< ast::IfStmt, ast::Node::ref_type::strong >; 148 template class ast::ptr_base< ast::While Stmt, ast::Node::ref_type::weak >;149 template class ast::ptr_base< ast::While Stmt, ast::Node::ref_type::strong >;148 template class ast::ptr_base< ast::WhileDoStmt, ast::Node::ref_type::weak >; 149 template class ast::ptr_base< ast::WhileDoStmt, ast::Node::ref_type::strong >; 150 150 template class ast::ptr_base< ast::ForStmt, ast::Node::ref_type::weak >; 151 151 template class ast::ptr_base< ast::ForStmt, ast::Node::ref_type::strong >; -
src/AST/Pass.hpp
r4de48c5 r8cb149f 146 146 const ast::Stmt * visit( const ast::DirectiveStmt * ) override final; 147 147 const ast::Stmt * visit( const ast::IfStmt * ) override final; 148 const ast::Stmt * visit( const ast::While Stmt* ) override final;148 const ast::Stmt * visit( const ast::WhileDoStmt * ) override final; 149 149 const ast::Stmt * visit( const ast::ForStmt * ) override final; 150 150 const ast::Stmt * visit( const ast::SwitchStmt * ) override final; -
src/AST/Pass.impl.hpp
r4de48c5 r8cb149f 756 756 maybe_accept( node, &IfStmt::inits ); 757 757 maybe_accept( node, &IfStmt::cond ); 758 maybe_accept_as_compound( node, &IfStmt::then Part);759 maybe_accept_as_compound( node, &IfStmt::else Part);758 maybe_accept_as_compound( node, &IfStmt::then ); 759 maybe_accept_as_compound( node, &IfStmt::else_ ); 760 760 } 761 761 … … 764 764 765 765 //-------------------------------------------------------------------------- 766 // While Stmt767 template< typename core_t > 768 const ast::Stmt * ast::Pass< core_t >::visit( const ast::While Stmt * node ) {766 // WhileDoStmt 767 template< typename core_t > 768 const ast::Stmt * ast::Pass< core_t >::visit( const ast::WhileDoStmt * node ) { 769 769 VISIT_START( node ); 770 770 … … 772 772 // while statements introduce a level of scope (for the initialization) 773 773 guard_symtab guard { *this }; 774 maybe_accept( node, &While Stmt::inits );775 maybe_accept( node, &While Stmt::cond );776 maybe_accept_as_compound( node, &While Stmt::body );774 maybe_accept( node, &WhileDoStmt::inits ); 775 maybe_accept( node, &WhileDoStmt::cond ); 776 maybe_accept_as_compound( node, &WhileDoStmt::body ); 777 777 } 778 778 -
src/AST/Print.cpp
r4de48c5 r8cb149f 511 511 ++indent; 512 512 os << indent; 513 safe_print( node->then Part);514 --indent; 515 516 if ( node->else Part!= 0 ) {513 safe_print( node->then ); 514 --indent; 515 516 if ( node->else_ != 0 ) { 517 517 os << indent << "... else:" << endl; 518 518 ++indent; 519 519 os << indent; 520 node->else Part->accept( *this );520 node->else_->accept( *this ); 521 521 --indent; 522 522 } // if … … 524 524 } 525 525 526 virtual const ast::Stmt * visit( const ast::While Stmt * node ) override final {526 virtual const ast::Stmt * visit( const ast::WhileDoStmt * node ) override final { 527 527 if ( node->isDoWhile ) { os << "Do-"; } 528 528 os << "While on condition:" << endl; -
src/AST/Stmt.hpp
r4de48c5 r8cb149f 10 10 // Created On : Wed May 8 13:00:00 2019 11 11 // Last Modified By : Peter A. Buhr 12 // Last Modified On : Mon Jan 31 22:38:53202213 // Update Count : 1212 // Last Modified On : Tue Feb 1 17:44:46 2022 13 // Update Count : 24 14 14 // 15 15 … … 42 42 : ParseNode(loc), labels(std::move(labels)) {} 43 43 44 Stmt(const Stmt & o) : ParseNode(o), labels(o.labels) {}44 Stmt(const Stmt & o) : ParseNode(o), labels(o.labels) {} 45 45 46 46 const Stmt * accept( Visitor & v ) const override = 0; … … 56 56 57 57 CompoundStmt(const CodeLocation & loc, std::list<ptr<Stmt>> && ks = {}, 58 std::vector<Label> && labels = {} )58 std::vector<Label> && labels = {} ) 59 59 : Stmt(loc, std::move(labels)), kids(std::move(ks)) {} 60 60 61 CompoundStmt( const CompoundStmt & o );62 CompoundStmt( CompoundStmt && o ) = default;61 CompoundStmt( const CompoundStmt & o ); 62 CompoundStmt( CompoundStmt && o ) = default; 63 63 64 64 void push_back( const Stmt * s ) { kids.emplace_back( s ); } … … 88 88 ptr<Expr> expr; 89 89 90 ExprStmt( const CodeLocation & loc, const Expr* e, std::vector<Label>&& labels = {} )90 ExprStmt( const CodeLocation & loc, const Expr* e, std::vector<Label> && labels = {} ) 91 91 : Stmt(loc, std::move(labels)), expr(e) {} 92 92 … … 139 139 public: 140 140 ptr<Expr> cond; 141 ptr<Stmt> then Part;142 ptr<Stmt> else Part;141 ptr<Stmt> then; 142 ptr<Stmt> else_; 143 143 std::vector<ptr<Stmt>> inits; 144 144 145 IfStmt( const CodeLocation & loc, const Expr * cond, const Stmt * then Part,146 const Stmt * else Part= nullptr, std::vector<ptr<Stmt>> && inits = {},145 IfStmt( const CodeLocation & loc, const Expr * cond, const Stmt * then, 146 const Stmt * else_ = nullptr, std::vector<ptr<Stmt>> && inits = {}, 147 147 std::vector<Label> && labels = {} ) 148 : Stmt(loc, std::move(labels)), cond(cond), then Part(thenPart), elsePart(elsePart),148 : Stmt(loc, std::move(labels)), cond(cond), then(then), else_(else_), 149 149 inits(std::move(inits)) {} 150 150 … … 191 191 192 192 // While loop: while (...) ... else ... or do ... while (...) 
else ...; 193 class While Stmt final : public Stmt {193 class WhileDoStmt final : public Stmt { 194 194 public: 195 195 ptr<Expr> cond; 196 196 ptr<Stmt> body; 197 ptr<Stmt> else Part;197 ptr<Stmt> else_; 198 198 std::vector<ptr<Stmt>> inits; 199 199 bool isDoWhile; 200 200 201 While Stmt( const CodeLocation & loc, const Expr * cond, const Stmt * body,201 WhileDoStmt( const CodeLocation & loc, const Expr * cond, const Stmt * body, 202 202 std::vector<ptr<Stmt>> && inits, bool isDoWhile = false, std::vector<Label> && labels = {} ) 203 : Stmt(loc, std::move(labels)), cond(cond), body(body), inits(std::move(inits)), isDoWhile(isDoWhile) {} 204 205 const Stmt * accept( Visitor & v ) const override { return v.visit( this ); } 206 private: 207 WhileStmt * clone() const override { return new WhileStmt{ *this }; } 203 : Stmt(loc, std::move(labels)), cond(cond), body(body), else_(nullptr), inits(std::move(inits)), isDoWhile(isDoWhile) {} 204 205 WhileDoStmt( const CodeLocation & loc, const Expr * cond, const Stmt * body, const Stmt * else_, 206 std::vector<ptr<Stmt>> && inits, bool isDoWhile = false, std::vector<Label> && labels = {} ) 207 : Stmt(loc, std::move(labels)), cond(cond), body(body), else_(else_), inits(std::move(inits)), isDoWhile(isDoWhile) {} 208 209 const Stmt * accept( Visitor & v ) const override { return v.visit( this ); } 210 private: 211 WhileDoStmt * clone() const override { return new WhileDoStmt{ *this }; } 208 212 MUTATE_FRIEND 209 213 }; … … 216 220 ptr<Expr> inc; 217 221 ptr<Stmt> body; 218 ptr<Stmt> else Part;222 ptr<Stmt> else_; 219 223 220 224 ForStmt( const CodeLocation & loc, std::vector<ptr<Stmt>> && inits, const Expr * cond, 221 const Expr * inc, const Stmt * body, std::vector<Label> && labels = {} ) 222 : Stmt(loc, std::move(labels)), inits(std::move(inits)), cond(cond), inc(inc), body(body) {} 225 const Expr * inc, const Stmt * body, std::vector<Label> && label = {} ) 226 : Stmt(loc, std::move(label)), inits(std::move(inits)), cond(cond), inc(inc), body(body), else_(nullptr) {} 227 228 ForStmt( const CodeLocation & loc, std::vector<ptr<Stmt>> && inits, const Expr * cond, 229 const Expr * inc, const Stmt * body, const Stmt * else_, std::vector<Label> && labels = {} ) 230 : Stmt(loc, std::move(labels)), inits(std::move(inits)), cond(cond), inc(inc), body(body), else_(else_) {} 223 231 224 232 const Stmt * accept( Visitor & v ) const override { return v.visit( this ); } -
src/AST/Visitor.hpp
r4de48c5 r8cb149f 10 10 // Created On : Thr May 9 15:28:00 2019 11 11 // Last Modified By : Peter A. Buhr 12 // Last Modified On : Fri Mar 12 18:25:07 202113 // Update Count : 112 // Last Modified On : Tue Feb 1 09:09:34 2022 13 // Update Count : 2 14 14 // 15 15 … … 38 38 virtual const ast::Stmt * visit( const ast::DirectiveStmt * ) = 0; 39 39 virtual const ast::Stmt * visit( const ast::IfStmt * ) = 0; 40 virtual const ast::Stmt * visit( const ast::While Stmt* ) = 0;40 virtual const ast::Stmt * visit( const ast::WhileDoStmt * ) = 0; 41 41 virtual const ast::Stmt * visit( const ast::ForStmt * ) = 0; 42 42 virtual const ast::Stmt * visit( const ast::SwitchStmt * ) = 0; -
src/CodeGen/CodeGenerator.cc
r4de48c5 r8cb149f 10 10 // Created On : Mon May 18 07:44:20 2015 11 11 // Last Modified By : Peter A. Buhr 12 // Last Modified On : Fri Mar 12 19:00:42 202113 // Update Count : 5 3612 // Last Modified On : Tue Feb 1 16:29:25 2022 13 // Update Count : 540 14 14 // 15 15 #include "CodeGenerator.h" … … 42 42 bool wantSpacing( Statement * stmt) { 43 43 return dynamic_cast< IfStmt * >( stmt ) || dynamic_cast< CompoundStmt * >( stmt ) || 44 dynamic_cast< While Stmt * >( stmt ) || dynamic_cast< ForStmt * >( stmt ) || dynamic_cast< SwitchStmt *>( stmt );44 dynamic_cast< WhileDoStmt * >( stmt ) || dynamic_cast< ForStmt * >( stmt ) || dynamic_cast< SwitchStmt *>( stmt ); 45 45 } 46 46 … … 955 955 output << " ) "; 956 956 957 ifStmt->get_then Part()->accept( *visitor );958 959 if ( ifStmt->get_else Part() != 0) {957 ifStmt->get_then()->accept( *visitor ); 958 959 if ( ifStmt->get_else() != 0) { 960 960 output << " else "; 961 ifStmt->get_else Part()->accept( *visitor );961 ifStmt->get_else()->accept( *visitor ); 962 962 } // if 963 963 } … … 1125 1125 } 1126 1126 1127 void CodeGenerator::postvisit( While Stmt * whileStmt ) {1128 if ( while Stmt->get_isDoWhile() ) {1127 void CodeGenerator::postvisit( WhileDoStmt * whileDoStmt ) { 1128 if ( whileDoStmt->get_isDoWhile() ) { 1129 1129 output << "do"; 1130 1130 } else { 1131 1131 output << "while ("; 1132 while Stmt->get_condition()->accept( *visitor );1132 whileDoStmt->get_condition()->accept( *visitor ); 1133 1133 output << ")"; 1134 1134 } // if 1135 1135 output << " "; 1136 1136 1137 output << CodeGenerator::printLabels( while Stmt->get_body()->get_labels() );1138 while Stmt->get_body()->accept( *visitor );1137 output << CodeGenerator::printLabels( whileDoStmt->get_body()->get_labels() ); 1138 whileDoStmt->get_body()->accept( *visitor ); 1139 1139 1140 1140 output << indent; 1141 1141 1142 if ( while Stmt->get_isDoWhile() ) {1142 if ( whileDoStmt->get_isDoWhile() ) { 1143 1143 output << " while ("; 1144 while Stmt->get_condition()->accept( *visitor );1144 whileDoStmt->get_condition()->accept( *visitor ); 1145 1145 output << ");"; 1146 1146 } // if -
src/CodeGen/CodeGenerator.h
r4de48c5 r8cb149f 10 10 // Created On : Mon May 18 07:44:20 2015 11 11 // Last Modified By : Peter A. Buhr 12 // Last Modified On : Fri Mar 12 18:35:38 202113 // Update Count : 6 312 // Last Modified On : Tue Feb 1 09:23:21 2022 13 // Update Count : 64 14 14 // 15 15 … … 116 116 void postvisit( WaitForStmt * ); 117 117 void postvisit( WithStmt * ); 118 void postvisit( While Stmt * );118 void postvisit( WhileDoStmt * ); 119 119 void postvisit( ForStmt * ); 120 120 void postvisit( NullStmt * ); -
src/Common/CodeLocationTools.cpp
r4de48c5 r8cb149f 10 10 // Created On : Fri Dec 4 15:42:00 2020 11 11 // Last Modified By : Peter A. Buhr 12 // Last Modified On : Fri Mar 12 18:35:37 202113 // Update Count : 212 // Last Modified On : Tue Feb 1 09:14:39 2022 13 // Update Count : 3 14 14 // 15 15 … … 109 109 macro(DirectiveStmt, Stmt) \ 110 110 macro(IfStmt, Stmt) \ 111 macro(While Stmt, Stmt) \111 macro(WhileDoStmt, Stmt) \ 112 112 macro(ForStmt, Stmt) \ 113 113 macro(SwitchStmt, Stmt) \ -
src/Common/PassVisitor.h
r4de48c5 r8cb149f 92 92 virtual void visit( IfStmt * ifStmt ) override final; 93 93 virtual void visit( const IfStmt * ifStmt ) override final; 94 virtual void visit( While Stmt * whileStmt ) override final;95 virtual void visit( const While Stmt * whileStmt ) override final;94 virtual void visit( WhileDoStmt * whileDoStmt ) override final; 95 virtual void visit( const WhileDoStmt * whileDoStmt ) override final; 96 96 virtual void visit( ForStmt * forStmt ) override final; 97 97 virtual void visit( const ForStmt * forStmt ) override final; … … 277 277 virtual Statement * mutate( DirectiveStmt * dirStmt ) override final; 278 278 virtual Statement * mutate( IfStmt * ifStmt ) override final; 279 virtual Statement * mutate( While Stmt * whileStmt ) override final;279 virtual Statement * mutate( WhileDoStmt * whileDoStmt ) override final; 280 280 virtual Statement * mutate( ForStmt * forStmt ) override final; 281 281 virtual Statement * mutate( SwitchStmt * switchStmt ) override final; -
src/Common/PassVisitor.impl.h
r4de48c5 r8cb149f 1189 1189 maybeAccept_impl( node->initialization, *this ); 1190 1190 visitExpression ( node->condition ); 1191 node->then Part = visitStatement( node->thenPart);1192 node->else Part = visitStatement( node->elsePart);1191 node->then = visitStatement( node->then ); 1192 node->else_ = visitStatement( node->else_ ); 1193 1193 } 1194 1194 VISIT_END( node ); … … 1203 1203 maybeAccept_impl( node->initialization, *this ); 1204 1204 visitExpression ( node->condition ); 1205 visitStatement ( node->then Part);1206 visitStatement ( node->else Part);1205 visitStatement ( node->then ); 1206 visitStatement ( node->else_ ); 1207 1207 } 1208 1208 VISIT_END( node ); … … 1217 1217 maybeMutate_impl( node->initialization, *this ); 1218 1218 node->condition = mutateExpression( node->condition ); 1219 node->then Part = mutateStatement ( node->thenPart);1220 node->else Part = mutateStatement ( node->elsePart);1219 node->then = mutateStatement ( node->then ); 1220 node->else_ = mutateStatement ( node->else_ ); 1221 1221 } 1222 1222 MUTATE_END( Statement, node ); … … 1224 1224 1225 1225 //-------------------------------------------------------------------------- 1226 // While Stmt1227 template< typename pass_type > 1228 void PassVisitor< pass_type >::visit( While Stmt * node ) {1226 // WhileDoStmt 1227 template< typename pass_type > 1228 void PassVisitor< pass_type >::visit( WhileDoStmt * node ) { 1229 1229 VISIT_START( node ); 1230 1230 … … 1241 1241 1242 1242 template< typename pass_type > 1243 void PassVisitor< pass_type >::visit( const While Stmt * node ) {1243 void PassVisitor< pass_type >::visit( const WhileDoStmt * node ) { 1244 1244 VISIT_START( node ); 1245 1245 … … 1256 1256 1257 1257 template< typename pass_type > 1258 Statement * PassVisitor< pass_type >::mutate( While Stmt * node ) {1258 Statement * PassVisitor< pass_type >::mutate( WhileDoStmt * node ) { 1259 1259 MUTATE_START( node ); 1260 1260 -
src/ControlStruct/ForExprMutator.cc
r4de48c5 r8cb149f 10 10 // Created On : Mon May 18 07:44:20 2015 11 11 // Last Modified By : Peter A. Buhr 12 // Last Modified On : Mon Mar 11 22:26:52 201913 // Update Count : 1 412 // Last Modified On : Tue Feb 1 09:26:12 2022 13 // Update Count : 16 14 14 // 15 15 … … 45 45 return hoist( forStmt, forStmt->initialization ); 46 46 } 47 Statement * ForExprMutator::postmutate( While Stmt * whileStmt ) {48 return hoist( while Stmt, whileStmt->initialization );47 Statement * ForExprMutator::postmutate( WhileDoStmt * whileDoStmt ) { 48 return hoist( whileDoStmt, whileDoStmt->initialization ); 49 49 } 50 50 } // namespace ControlStruct -
src/ControlStruct/ForExprMutator.h
r4de48c5 r8cb149f 10 10 // Created On : Mon May 18 07:44:20 2015 11 11 // Last Modified By : Peter A. Buhr 12 // Last Modified On : Sun Jan 30 09:14:46202213 // Update Count : 612 // Last Modified On : Tue Feb 1 09:18:50 2022 13 // Update Count : 7 14 14 // 15 15 … … 18 18 class IfStmt; 19 19 class ForStmt; 20 class While Stmt;20 class WhileDoStmt; 21 21 class Statement; 22 22 … … 26 26 Statement * postmutate( IfStmt * ); 27 27 Statement * postmutate( ForStmt * ); 28 Statement * postmutate( While Stmt * );28 Statement * postmutate( WhileDoStmt * ); 29 29 }; 30 30 } // namespace ControlStruct -
src/ControlStruct/HoistControlDecls.cpp
r4de48c5 r8cb149f 10 10 // Created On : Fri Dec 3 15:34:00 2021 11 11 // Last Modified By : Peter A. Buhr 12 // Last Modified On : Mon Jan 31 18:52:35202213 // Update Count : 2 312 // Last Modified On : Tue Feb 1 18:59:47 2022 13 // Update Count : 25 14 14 // 15 15 … … 35 35 36 36 CompoundStmt * block = new CompoundStmt( stmt->location ); // create empty compound statement 37 // { 38 // } 37 // {} 39 38 40 39 for ( const Stmt * next : stmt->inits ) { // link conditional declarations into compound … … 69 68 return hoist<ForStmt>( stmt ); 70 69 } 71 const Stmt * postvisit( const While Stmt * stmt ) {72 return hoist<While Stmt>( stmt );70 const Stmt * postvisit( const WhileDoStmt * stmt ) { 71 return hoist<WhileDoStmt>( stmt ); 73 72 } 74 73 }; -
src/ControlStruct/LabelFixer.cc
r4de48c5 r8cb149f 10 10 // Created On : Mon May 18 07:44:20 2015 11 11 // Last Modified By : Peter A. Buhr 12 // Last Modified On : Mon Jan 31 22:28:31202213 // Update Count : 16 112 // Last Modified On : Tue Feb 1 09:12:09 2022 13 // Update Count : 162 14 14 // 15 15 … … 29 29 bool LabelFixer::Entry::insideLoop() { 30 30 return ( dynamic_cast< ForStmt * > ( definition ) || 31 dynamic_cast< While Stmt * > ( definition ) );31 dynamic_cast< WhileDoStmt * > ( definition ) ); 32 32 } 33 33 -
src/ControlStruct/LabelGeneratorNew.cpp
r4de48c5 r8cb149f 10 10 // Created On : Mon May 18 07:44:20 2015 11 11 // Last Modified By : Peter A. Buhr 12 // Last Modified On : Mon Jan 31 18:51:10202213 // Update Count : 7 112 // Last Modified On : Wed Feb 2 09:11:17 2022 13 // Update Count : 72 14 14 // 15 15 … … 28 28 static int current = 0; 29 29 30 assert ( ( (void)"CFA internal error: parameter statement cannot be null pointer", stmt ));30 assertf( stmt, "CFA internal error: parameter statement cannot be null pointer" ); 31 31 32 32 enum { size = 128 }; 33 33 char buf[size]; // space to build label 34 34 int len = snprintf( buf, size, "__L%d__%s", current++, suffix.c_str() ); 35 assert ( ( (void)"CFA Internal error: buffer overflow creating label", len < size ));35 assertf( len < size, "CFA Internal error: buffer overflow creating label" ); 36 36 37 37 // What does this do? 38 38 if ( ! stmt->labels.empty() ) { 39 39 len = snprintf( buf + len, size - len, "_%s__", stmt->labels.front().name.c_str() ); 40 assert ( ( (void)"CFA Internal error: buffer overflow creating label", len < size - len ));40 assertf( len < size - len, "CFA Internal error: buffer overflow creating label" ); 41 41 } // if 42 42 -
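The LabelGenerator change above swaps the old comma-operator idiom assert( ((void)"msg", cond) ) for assertf( cond, "msg" ), which reports the message only when the check fails. A minimal stand-in for that behaviour (the real assertf in the CFA code base is more general, taking printf-style arguments):

#include <cstdio>
#include <cstdlib>

// Minimal sketch: print the message and abort only when the check fails.
#define assertf( cond, msg ) \
    do { if ( ! (cond) ) { std::fprintf( stderr, "%s\n", msg ); std::abort(); } } while (0)

int main() {
    enum { size = 128 };
    char buf[size];  // space to build label, as in the pass above
    int len = std::snprintf( buf, size, "__L%d__%s", 0, "loopExit" );
    assertf( len < size, "CFA Internal error: buffer overflow creating label" );
    std::printf( "%s\n", buf );
}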
src/ControlStruct/MLEMutator.cc
r4de48c5 r8cb149f 9 9 // Author : Rodolfo G. Esteves 10 10 // Created On : Mon May 18 07:44:20 2015 11 // Last Modified By : Andrew Beach12 // Last Modified On : Wed Jan 22 11:50:00 202013 // Update Count : 22 311 // Last Modified By : Peter A. Buhr 12 // Last Modified On : Tue Feb 1 09:26:28 2022 13 // Update Count : 225 14 14 // 15 15 … … 39 39 namespace { 40 40 bool isLoop( const MultiLevelExitMutator::Entry & e ) { 41 return dynamic_cast< While Stmt * >( e.get_controlStructure() )41 return dynamic_cast< WhileDoStmt * >( e.get_controlStructure() ) 42 42 || dynamic_cast< ForStmt * >( e.get_controlStructure() ); 43 43 } … … 295 295 } 296 296 297 void MultiLevelExitMutator::premutate( While Stmt * whileStmt ) {298 return prehandleLoopStmt( while Stmt );297 void MultiLevelExitMutator::premutate( WhileDoStmt * whileDoStmt ) { 298 return prehandleLoopStmt( whileDoStmt ); 299 299 } 300 300 … … 303 303 } 304 304 305 Statement * MultiLevelExitMutator::postmutate( While Stmt * whileStmt ) {306 return posthandleLoopStmt( while Stmt );305 Statement * MultiLevelExitMutator::postmutate( WhileDoStmt * whileDoStmt ) { 306 return posthandleLoopStmt( whileDoStmt ); 307 307 } 308 308 -
src/ControlStruct/MLEMutator.h
r4de48c5 r8cb149f 9 9 // Author : Rodolfo G. Esteves 10 10 // Created On : Mon May 18 07:44:20 2015 11 // Last Modified By : Andrew Beach12 // Last Modified On : Wed Jan 22 11:50:00 202013 // Update Count : 4811 // Last Modified By : Peter A. Buhr 12 // Last Modified On : Tue Feb 1 09:27:24 2022 13 // Update Count : 50 14 14 // 15 15 … … 42 42 void premutate( CompoundStmt *cmpndStmt ); 43 43 Statement * postmutate( BranchStmt *branchStmt ) throw ( SemanticErrorException ); 44 void premutate( While Stmt *whileStmt );45 Statement * postmutate( While Stmt *whileStmt );44 void premutate( WhileDoStmt *whileDoStmt ); 45 Statement * postmutate( WhileDoStmt *whileDoStmt ); 46 46 void premutate( ForStmt *forStmt ); 47 47 Statement * postmutate( ForStmt *forStmt ); … … 67 67 stmt( stmt ), breakExit( breakExit ), contExit( contExit ) {} 68 68 69 explicit Entry( While Stmt *stmt, Label breakExit, Label contExit ) :69 explicit Entry( WhileDoStmt *stmt, Label breakExit, Label contExit ) : 70 70 stmt( stmt ), breakExit( breakExit ), contExit( contExit ) {} 71 71 -
src/ControlStruct/MultiLevelExit.cpp
r4de48c5 r8cb149f 10 10 // Created On : Mon Nov 1 13:48:00 2021 11 11 // Last Modified By : Peter A. Buhr 12 // Last Modified On : Mon Jan 31 22:35:08202213 // Update Count : 2 812 // Last Modified On : Tue Feb 1 18:48:47 2022 13 // Update Count : 29 14 14 // 15 15 … … 40 40 41 41 enum Kind { 42 ForStmtK, While StmtK, CompoundStmtK, IfStmtK, CaseStmtK, SwitchStmtK, TryStmtK42 ForStmtK, WhileDoStmtK, CompoundStmtK, IfStmtK, CaseStmtK, SwitchStmtK, TryStmtK 43 43 } kind; 44 44 … … 53 53 Entry( const ForStmt * stmt, Label breakExit, Label contExit ) : 54 54 stmt( stmt ), firstTarget( breakExit ), secondTarget( contExit ), kind( ForStmtK ) {} 55 Entry( const While Stmt * stmt, Label breakExit, Label contExit ) :56 stmt( stmt ), firstTarget( breakExit ), secondTarget( contExit ), kind( While StmtK ) {}55 Entry( const WhileDoStmt * stmt, Label breakExit, Label contExit ) : 56 stmt( stmt ), firstTarget( breakExit ), secondTarget( contExit ), kind( WhileDoStmtK ) {} 57 57 Entry( const CompoundStmt *stmt, Label breakExit ) : 58 58 stmt( stmt ), firstTarget( breakExit ), secondTarget(), kind( CompoundStmtK ) {} … … 66 66 stmt( stmt ), firstTarget( breakExit ), secondTarget(), kind( TryStmtK ) {} 67 67 68 bool isContTarget() const { return kind <= While StmtK; }68 bool isContTarget() const { return kind <= WhileDoStmtK; } 69 69 bool isBreakTarget() const { return kind != CaseStmtK; } 70 70 bool isFallTarget() const { return kind == CaseStmtK; } … … 72 72 73 73 // These routines set a target as being "used" by a BranchStmt 74 Label useContExit() { assert( kind <= While StmtK ); return useTarget(secondTarget); }74 Label useContExit() { assert( kind <= WhileDoStmtK ); return useTarget(secondTarget); } 75 75 Label useBreakExit() { assert( kind != CaseStmtK ); return useTarget(firstTarget); } 76 76 Label useFallExit() { assert( kind == CaseStmtK ); return useTarget(firstTarget); } … … 78 78 79 79 // These routines check if a specific label for a statement is used by a BranchStmt 80 bool isContUsed() const { assert( kind <= While StmtK ); return secondTarget.used; }80 bool isContUsed() const { assert( kind <= WhileDoStmtK ); return secondTarget.used; } 81 81 bool isBreakUsed() const { assert( kind != CaseStmtK ); return firstTarget.used; } 82 82 bool isFallUsed() const { assert( kind == CaseStmtK ); return firstTarget.used; } … … 112 112 const CompoundStmt * previsit( const CompoundStmt * ); 113 113 const BranchStmt * postvisit( const BranchStmt * ); 114 void previsit( const While Stmt * );115 const While Stmt * postvisit( const WhileStmt * );114 void previsit( const WhileDoStmt * ); 115 const WhileDoStmt * postvisit( const WhileDoStmt * ); 116 116 void previsit( const ForStmt * ); 117 117 const ForStmt * postvisit( const ForStmt * ); … … 342 342 } 343 343 344 void MultiLevelExitCore::previsit( const While Stmt * stmt ) {344 void MultiLevelExitCore::previsit( const WhileDoStmt * stmt ) { 345 345 return prehandleLoopStmt( stmt ); 346 346 } 347 347 348 const While Stmt * MultiLevelExitCore::postvisit( const WhileStmt * stmt ) {348 const WhileDoStmt * MultiLevelExitCore::postvisit( const WhileDoStmt * stmt ) { 349 349 return posthandleLoopStmt( stmt ); 350 350 } -
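A detail worth noting in MultiLevelExit.cpp: the Kind enumeration deliberately lists the two loop kinds first, so Entry::isContTarget() reduces "is this a loop (a valid continue target)?" to a single ordered comparison, and the rename simply carries that invariant over to WhileDoStmtK. A compilable sketch of the trick, with the enumerators copied from the diff and the predicates simplified:

#include <cassert>

enum Kind { ForStmtK, WhileDoStmtK, CompoundStmtK, IfStmtK, CaseStmtK, SwitchStmtK, TryStmtK };

// Loops occupy the lowest enum values, so one comparison identifies them.
constexpr bool isContTarget( Kind kind ) { return kind <= WhileDoStmtK; }
constexpr bool isBreakTarget( Kind kind ) { return kind != CaseStmtK; }
constexpr bool isFallTarget( Kind kind ) { return kind == CaseStmtK; }

int main() {
    assert( isContTarget( ForStmtK ) && isContTarget( WhileDoStmtK ) );
    assert( ! isContTarget( SwitchStmtK ) && ! isContTarget( TryStmtK ) );
    assert( isFallTarget( CaseStmtK ) && ! isBreakTarget( CaseStmtK ) );
}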
src/Parser/ParseNode.h
r4de48c5 r8cb149f 10 10 // Created On : Sat May 16 13:28:16 2015 11 11 // Last Modified By : Peter A. Buhr 12 // Last Modified On : Sat Jan 29 09:45:56202213 // Update Count : 90 112 // Last Modified On : Wed Feb 2 09:15:49 2022 13 // Update Count : 905 14 14 // 15 15 … … 410 410 411 411 Expression * build_if_control( CondCtl * ctl, std::list< Statement * > & init ); 412 Statement * build_if( CondCtl * ctl, StatementNode * then _stmt, StatementNode * else_stmt);412 Statement * build_if( CondCtl * ctl, StatementNode * then, StatementNode * else_ ); 413 413 Statement * build_switch( bool isSwitch, ExpressionNode * ctl, StatementNode * stmt ); 414 414 Statement * build_case( ExpressionNode * ctl ); 415 415 Statement * build_default(); 416 Statement * build_while( CondCtl * ctl, StatementNode * stmt );417 Statement * build_do_while( ExpressionNode * ctl, StatementNode * stmt );418 Statement * build_for( ForCtrl * forctl, StatementNode * stmt );416 Statement * build_while( CondCtl * ctl, StatementNode * stmt, StatementNode * else_ = nullptr ); 417 Statement * build_do_while( ExpressionNode * ctl, StatementNode * stmt, StatementNode * else_ = nullptr ); 418 Statement * build_for( ForCtrl * forctl, StatementNode * stmt, StatementNode * else_ = nullptr ); 419 419 Statement * build_branch( BranchStmt::Type kind ); 420 420 Statement * build_branch( std::string * identifier, BranchStmt::Type kind ); … … 424 424 Statement * build_resume( ExpressionNode * ctl ); 425 425 Statement * build_resume_at( ExpressionNode * ctl , ExpressionNode * target ); 426 Statement * build_try( StatementNode * try_ stmt, StatementNode * catch_stmt, StatementNode * finally_stmt);427 Statement * build_catch( CatchStmt::Kind kind, DeclarationNode * decl, ExpressionNode *cond, StatementNode *body );426 Statement * build_try( StatementNode * try_, StatementNode * catch_, StatementNode * finally_ ); 427 Statement * build_catch( CatchStmt::Kind kind, DeclarationNode * decl, ExpressionNode * cond, StatementNode * body ); 428 428 Statement * build_finally( StatementNode * stmt ); 429 429 Statement * build_compound( StatementNode * first ); -
src/Parser/StatementNode.cc
r4de48c5 r8cb149f 5 5 // file "LICENCE" distributed with Cforall. 6 6 // 7 // StatementNode.cc -- 7 // StatementNode.cc -- Transform from parse data-structures to AST data-structures, usually deleting the parse 8 // data-structure after the transformation. 8 9 // 9 10 // Author : Rodolfo G. Esteves 10 11 // Created On : Sat May 16 14:59:41 2015 11 12 // Last Modified By : Peter A. Buhr 12 // Last Modified On : Sat Jan 29 09:45:51202213 // Update Count : 38413 // Last Modified On : Wed Feb 2 12:27:58 2022 14 // Update Count : 424 14 15 // 15 16 … … 63 64 // convert from StatementNode list to Statement list 64 65 StatementNode * node = dynamic_cast< StatementNode * >(prev); 65 std::list< Statement * > stmts;66 list< Statement * > stmts; 66 67 buildMoveList( stmt, stmts ); 67 68 // splice any new Statements to end of current Statements … … 78 79 } // build_expr 79 80 80 Expression * build_if_control( CondCtl * ctl, std::list< Statement * > & init ) {81 Expression * build_if_control( CondCtl * ctl, list< Statement * > & init ) { 81 82 if ( ctl->init != 0 ) { 82 83 buildMoveList( ctl->init, init ); … … 100 101 } // build_if_control 101 102 102 Statement * build_if( CondCtl * ctl, StatementNode * then_stmt, StatementNode * else_stmt ) { 103 Statement * thenb, * elseb = nullptr; 104 std::list< Statement * > branches; 105 buildMoveList< Statement, StatementNode >( then_stmt, branches ); 106 assert( branches.size() == 1 ); 107 thenb = branches.front(); 108 109 if ( else_stmt ) { 110 std::list< Statement * > branches; 111 buildMoveList< Statement, StatementNode >( else_stmt, branches ); 112 assert( branches.size() == 1 ); 113 elseb = branches.front(); 114 } // if 115 116 std::list< Statement * > init; 117 Expression * cond = build_if_control( ctl, init ); 118 return new IfStmt( cond, thenb, elseb, init ); 103 Statement * build_if( CondCtl * ctl, StatementNode * then, StatementNode * else_ ) { 104 list< Statement * > astinit; // maybe empty 105 Expression * astcond = build_if_control( ctl, astinit ); // ctl deleted, cond/init set 106 107 Statement * astthen, * astelse = nullptr; 108 list< Statement * > aststmt; 109 buildMoveList< Statement, StatementNode >( then, aststmt ); 110 assert( aststmt.size() == 1 ); 111 astthen = aststmt.front(); 112 113 if ( else_ ) { 114 list< Statement * > aststmt; 115 buildMoveList< Statement, StatementNode >( else_, aststmt ); 116 assert( aststmt.size() == 1 ); 117 astelse = aststmt.front(); 118 } // if 119 120 return new IfStmt( astcond, astthen, astelse, astinit ); 119 121 } // build_if 120 122 121 123 Statement * build_switch( bool isSwitch, ExpressionNode * ctl, StatementNode * stmt ) { 122 std::list< Statement * > branches;123 buildMoveList< Statement, StatementNode >( stmt, branches);124 if ( ! isSwitch ) { 125 for ( Statement * stmt : branches) {124 list< Statement * > aststmt; 125 buildMoveList< Statement, StatementNode >( stmt, aststmt ); 126 if ( ! isSwitch ) { // choose statement 127 for ( Statement * stmt : aststmt ) { 126 128 CaseStmt * caseStmt = strict_dynamic_cast< CaseStmt * >( stmt ); 127 129 if ( ! caseStmt->stmts.empty() ) { // code after "case" => end of case list … … 131 133 } // for 132 134 } // if 133 // branches.size() == 0 for switch (...) {}, i.e., no declaration or statements134 return new SwitchStmt( maybeMoveBuild< Expression >(ctl), branches);135 // aststmt.size() == 0 for switch (...) {}, i.e., no declaration or statements 136 return new SwitchStmt( maybeMoveBuild< Expression >(ctl), aststmt ); 135 137 } // build_switch 136 138 137 139 Statement * build_case( ExpressionNode * ctl ) { 138 std::list< Statement * > branches; 139 return new CaseStmt( maybeMoveBuild< Expression >(ctl), branches ); 140 return new CaseStmt( maybeMoveBuild< Expression >(ctl), {} ); // no init 140 141 } // build_case 141 142 142 143 Statement * build_default() { 143 std::list< Statement * > branches; 144 return new CaseStmt( nullptr, branches, true ); 144 return new CaseStmt( nullptr, {}, true ); // no init 145 145 } // build_default 146 146 147 Statement * build_while( CondCtl * ctl, StatementNode * stmt ) { 148 std::list< Statement * > branches; 149 buildMoveList< Statement, StatementNode >( stmt, branches ); 150 assert( branches.size() == 1 ); 151 152 std::list< Statement * > init; 153 Expression * cond = build_if_control( ctl, init ); 154 return new WhileStmt( cond, branches.front(), init, false ); 147 Statement * build_while( CondCtl * ctl, StatementNode * stmt, StatementNode * else_ ) { 148 list< Statement * > astinit; // maybe empty 149 Expression * astcond = build_if_control( ctl, astinit ); // ctl deleted, cond/init set 150 151 list< Statement * > aststmt; // loop body, compound created if empty 152 buildMoveList< Statement, StatementNode >( stmt, aststmt ); 153 assert( aststmt.size() == 1 ); 154 155 list< Statement * > astelse; // else clause, maybe empty 156 buildMoveList< Statement, StatementNode >( else_, astelse ); 157 158 return new WhileDoStmt( astcond, aststmt.front(), astelse.front(), astinit, false ); 155 159 } // build_while 156 160 157 Statement * build_do_while( ExpressionNode * ctl, StatementNode * stmt ) { 158 std::list< Statement * > branches; 159 buildMoveList< Statement, StatementNode >( stmt, branches ); 160 assert( branches.size() == 1 ); 161 162 std::list< Statement * > init; 163 return new WhileStmt( notZeroExpr( maybeMoveBuild< Expression >(ctl) ), branches.front(), init, true ); 161 Statement * build_do_while( ExpressionNode * ctl, StatementNode * stmt, StatementNode * else_ ) { 162 list< Statement * > aststmt; // loop body, compound created if empty 163 buildMoveList< Statement, StatementNode >( stmt, aststmt ); 164 assert( aststmt.size() == 1 ); // compound created if empty 165 166 list< Statement * > astelse; // else clause, maybe empty 167 buildMoveList< Statement, StatementNode >( else_, astelse ); 168 169 // do-while cannot have declarations in the contitional, so init is always empty 170 return new WhileDoStmt( notZeroExpr( maybeMoveBuild< Expression >(ctl) ), aststmt.front(), astelse.front(), {}, true ); 164 171 } // build_do_while 165 172 166 Statement * build_for( ForCtrl * forctl, StatementNode * stmt ) { 167 std::list< Statement * > branches; 168 buildMoveList< Statement, StatementNode >( stmt, branches ); 169 assert( branches.size() == 1 ); 170 171 std::list< Statement * > init; 172 if ( forctl->init != 0 ) { 173 buildMoveList( forctl->init, init ); 174 } // if 175 176 Expression * cond = 0; 177 if ( forctl->condition != 0 ) 178 cond = notZeroExpr( maybeMoveBuild< Expression >(forctl->condition) ); 179 180 Expression * incr = 0; 181 if ( forctl->change != 0 ) 182 incr = maybeMoveBuild< Expression >(forctl->change); 183 173 Statement * build_for( ForCtrl * forctl, StatementNode * stmt, StatementNode * else_ ) { 174 list< Statement * > astinit; // maybe empty 175 buildMoveList( forctl->init, astinit ); 176 177 Expression * astcond = nullptr; // maybe empty 178 astcond = notZeroExpr( maybeMoveBuild< Expression >(forctl->condition) ); 179 180 Expression * astincr = nullptr; // maybe empty 181 astincr = maybeMoveBuild< Expression >(forctl->change); 184 182 delete forctl; 185 return new ForStmt( init, cond, incr, branches.front() ); 183 184 list< Statement * > aststmt; // loop body, compound created if empty 185 buildMoveList< Statement, StatementNode >( stmt, aststmt ); 186 assert( aststmt.size() == 1 ); 187 188 list< Statement * > astelse; // else clause, maybe empty 189 buildMoveList< Statement, StatementNode >( else_, astelse ); 190 191 return new ForStmt( astinit, astcond, astincr, aststmt.front(), astelse.front() ); 186 192 } // build_for 187 193 … … 191 197 } // build_branch 192 198 193 Statement * build_branch( st d::string * identifier, BranchStmt::Type kind ) {199 Statement * build_branch( string * identifier, BranchStmt::Type kind ) { 194 200 Statement * ret = new BranchStmt( * identifier, kind ); 195 201 delete identifier; // allocated by lexer … … 202 208 203 209 Statement * build_return( ExpressionNode * ctl ) { 204 std::list< Expression * > exps;210 list< Expression * > exps; 205 211 buildMoveList( ctl, exps ); 206 212 return new ReturnStmt( exps.size() > 0 ? exps.back() : nullptr ); … … 208 214 209 215 Statement * build_throw( ExpressionNode * ctl ) { 210 std::list< Expression * > exps;216 list< Expression * > exps; 211 217 buildMoveList( ctl, exps ); 212 assertf( exps.size() < 2, " This means we are leaking memory");218 assertf( exps.size() < 2, "CFA internal error: leaking memory" ); 213 219 return new ThrowStmt( ThrowStmt::Terminate, !exps.empty() ? exps.back() : nullptr ); 214 220 } // build_throw 215 221 216 222 Statement * build_resume( ExpressionNode * ctl ) { 217 std::list< Expression * > exps;223 list< Expression * > exps; 218 224 buildMoveList( ctl, exps ); 219 assertf( exps.size() < 2, " This means we are leaking memory");225 assertf( exps.size() < 2, "CFA internal error: leaking memory" ); 220 226 return new ThrowStmt( ThrowStmt::Resume, !exps.empty() ? exps.back() : nullptr ); 221 227 } // build_resume … … 227 233 } // build_resume_at 228 234 229 Statement * build_try( StatementNode * try_ stmt, StatementNode * catch_stmt, StatementNode * finally_stmt) {230 std::list< CatchStmt * > branches;231 buildMoveList< CatchStmt, StatementNode >( catch_ stmt, branches);232 CompoundStmt * tryBlock = strict_dynamic_cast< CompoundStmt * >(maybeMoveBuild< Statement >(try_ stmt));233 FinallyStmt * finallyBlock = dynamic_cast< FinallyStmt * >(maybeMoveBuild< Statement >(finally_ stmt) );234 return new TryStmt( tryBlock, branches, finallyBlock );235 Statement * build_try( StatementNode * try_, StatementNode * catch_, StatementNode * finally_ ) { 236 list< CatchStmt * > aststmt; 237 buildMoveList< CatchStmt, StatementNode >( catch_, aststmt ); 238 CompoundStmt * tryBlock = strict_dynamic_cast< CompoundStmt * >(maybeMoveBuild< Statement >(try_)); 239 FinallyStmt * finallyBlock = dynamic_cast< FinallyStmt * >(maybeMoveBuild< Statement >(finally_) ); 240 return new TryStmt( tryBlock, aststmt, finallyBlock ); 235 241 } // build_try 236 242 237 243 Statement * build_catch( CatchStmt::Kind kind, DeclarationNode * decl, ExpressionNode * cond, StatementNode * body ) { 238 std::list< Statement * > branches;239 buildMoveList< Statement, StatementNode >( body, branches);240 assert( branches.size() == 1 );241 return new CatchStmt( kind, maybeMoveBuild< Declaration >(decl), maybeMoveBuild< Expression >(cond), branches.front() );244 list< Statement * > aststmt; 245 buildMoveList< Statement, StatementNode >( body, aststmt ); 246 assert( aststmt.size() == 1 ); 247 return new CatchStmt( kind, maybeMoveBuild< Declaration >(decl), maybeMoveBuild< Expression >(cond), aststmt.front() ); 242 248 } // build_catch 243 249 244 250 Statement * build_finally( StatementNode * stmt ) { 245 std::list< Statement * > branches;246 buildMoveList< Statement, StatementNode >( stmt, branches);247 assert( branches.size() == 1 );248 return new FinallyStmt( dynamic_cast< CompoundStmt * >( branches.front() ) );251 list< Statement * > aststmt; 252 buildMoveList< Statement, StatementNode >( stmt, aststmt ); 253 assert( aststmt.size() == 1 ); 254 return new FinallyStmt( dynamic_cast< CompoundStmt * >( aststmt.front() ) ); 249 255 } // build_finally 250 256 … … 254 260 node->type = type; 255 261 256 std::list< Statement * > stmts;262 list< Statement * > stmts; 257 263 buildMoveList< Statement, StatementNode >( then, stmts ); 258 264 if(!stmts.empty()) { … … 319 325 } // build_waitfor_timeout 320 326 321 WaitForStmt * build_waitfor_timeout( ExpressionNode * timeout, StatementNode * stmt, ExpressionNode * when, StatementNode * else_ stmt, ExpressionNode * else_when ) {327 WaitForStmt * build_waitfor_timeout( ExpressionNode * timeout, StatementNode * stmt, ExpressionNode * when, StatementNode * else_, ExpressionNode * else_when ) { 322 328 auto node = new WaitForStmt(); 323 329 … … 326 332 node->timeout.condition = notZeroExpr( maybeMoveBuild<Expression>( when ) ); 327 333 328 node->orelse.statement = maybeMoveBuild<Statement >( else_ stmt);334 node->orelse.statement = maybeMoveBuild<Statement >( else_ ); 329 335 node->orelse.condition = notZeroExpr( maybeMoveBuild<Expression>( else_when ) ); 330 336 … … 333 339 334 340 Statement * build_with( ExpressionNode * exprs, StatementNode * stmt ) { 335 std::list< Expression * > e;341 list< Expression * > e; 336 342 buildMoveList( exprs, e ); 337 343 Statement * s = maybeMoveBuild<Statement>( stmt ); … … 361 367 362 368 Statement * build_asm( bool voltile, Expression * instruction, ExpressionNode * output, ExpressionNode * input, ExpressionNode * clobber, LabelNode * gotolabels ) { 363 std::list< Expression * > out, in;364 std::list< ConstantExpr * > clob;369 list< Expression * > out, in; 370 list< ConstantExpr * > clob; 365 371 366 372 buildMoveList( output, out ); … … 375 381 376 382 Statement * build_mutex( ExpressionNode * exprs, StatementNode * stmt ) { 377 std::list< Expression * > expList;383 list< Expression * > expList; 378 384 buildMoveList( exprs, expList ); 379 385 Statement * body = maybeMoveBuild<Statement>( stmt ); -
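Every rewritten build_* routine above follows the same drain-and-assert idiom: buildMoveList empties a parse-node chain into a std::list of AST statements (deleting parse nodes as it consumes them), and the caller asserts the body reduced to exactly one statement. A self-contained sketch of that idiom, using toy stand-ins for the real StatementNode/Statement types:

#include <cassert>
#include <list>
#include <string>
#include <utility>

// Toy stand-ins for the real parse-tree and AST types.
struct Statement { std::string text; };
struct StatementNode { Statement stmt; StatementNode * next; };

// Sketch of the buildMoveList idiom: move each payload into the AST list
// and delete the parse node as it is consumed.
static void buildMoveList( StatementNode * node, std::list< Statement > & out ) {
    while ( node ) {
        out.push_back( std::move( node->stmt ) );
        StatementNode * prev = node;
        node = node->next;
        delete prev;
    }
}

int main() {
    auto * body = new StatementNode{ { "loop body" }, nullptr };
    std::list< Statement > aststmt;  // loop body, compound created if empty
    buildMoveList( body, aststmt );
    assert( aststmt.size() == 1 );   // mirrors the assertion in build_while et al.
}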
src/Parser/parser.yy
r4de48c5 r8cb149f 10 10 // Created On : Sat Sep 1 20:22:55 2001 11 11 // Last Modified By : Peter A. Buhr 12 // Last Modified On : Sun Jan 30 09:41:13 202213 // Update Count : 516 512 // Last Modified On : Tue Feb 1 11:06:13 2022 13 // Update Count : 5167 14 14 // 15 15 … … 1197 1197 { $$ = new StatementNode( build_while( $3, maybe_build_compound( $5 ) ) ); } 1198 1198 | WHILE '(' conditional_declaration ')' statement ELSE statement // CFA 1199 { SemanticError( yylloc, "Loop default block is currently unimplemented." ); $$ = nullptr; } 1199 // { SemanticError( yylloc, "Loop default block is currently unimplemented." ); $$ = nullptr; } 1200 { $$ = new StatementNode( build_while( $3, maybe_build_compound( $5 ), $7 ) ); } 1200 1201 | DO statement WHILE '(' ')' ';' // CFA => do while( 1 ) 1201 1202 { $$ = new StatementNode( build_do_while( new ExpressionNode( build_constantInteger( *new string( "1" ) ) ), maybe_build_compound( $2 ) ) ); } … … 1203 1204 { $$ = new StatementNode( build_do_while( $5, maybe_build_compound( $2 ) ) ); } 1204 1205 | DO statement WHILE '(' comma_expression ')' ELSE statement // CFA 1205 { SemanticError( yylloc, "Loop default block is currently unimplemented." ); $$ = nullptr; } 1206 // { SemanticError( yylloc, "Loop default block is currently unimplemented." ); $$ = nullptr; } 1207 { $$ = new StatementNode( build_do_while( $5, maybe_build_compound( $2 ), $8 ) ); } 1206 1208 | FOR '(' ')' statement // CFA => for ( ;; ) 1207 1209 { $$ = new StatementNode( build_for( new ForCtrl( (ExpressionNode * )nullptr, (ExpressionNode * )nullptr, (ExpressionNode * )nullptr ), maybe_build_compound( $4 ) ) ); } … … 1209 1211 { $$ = new StatementNode( build_for( $3, maybe_build_compound( $5 ) ) ); } 1210 1212 | FOR '(' for_control_expression_list ')' statement ELSE statement // CFA 1211 { SemanticError( yylloc, "Loop default block is currently unimplemented." ); $$ = nullptr; } 1213 // { SemanticError( yylloc, "Loop default block is currently unimplemented." ); $$ = nullptr; } 1214 { $$ = new StatementNode( build_for( $3, maybe_build_compound( $5 ), $7 ) ); } 1212 1215 ; 1213 1216 -
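The grammar above stops rejecting the loop "else" productions (the old SemanticError actions are kept as comments) and instead forwards the trailing statement ($7 or $8) into build_while, build_do_while, and build_for, whose new else_ parameter defaults to nullptr in ParseNode.h so existing two-argument call sites compile unchanged. A toy sketch of why the defaulted parameter makes the change non-breaking (illustrative names only, not the real parser objects):

#include <iostream>

struct StatementNode { const char * label; };

// Sketch of the new build_* signatures: a defaulted trailing else_ keeps
// every pre-existing two-argument call site valid, while the ELSE
// productions pass a third argument.
static void build_while_sketch( const char * cond, StatementNode * stmt, StatementNode * else_ = nullptr ) {
    std::cout << "while ( " << cond << " ) " << stmt->label;
    if ( else_ ) std::cout << " else " << else_->label;
    std::cout << "\n";
}

int main() {
    StatementNode body{ "{ body }" }, alt{ "{ alternative }" };
    build_while_sketch( "i < 10", &body );        // old call shape, unchanged
    build_while_sketch( "i < 10", &body, &alt );  // new loop-else form
}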
src/ResolvExpr/Resolver.cc
r4de48c5 r8cb149f 9 9 // Author : Aaron B. Moss 10 10 // Created On : Sun May 17 12:17:01 2015 11 // Last Modified By : Andrew Beach12 // Last Modified On : Fri Mar 27 11:58:00 202013 // Update Count : 24 211 // Last Modified By : Peter A. Buhr 12 // Last Modified On : Tue Feb 1 16:27:14 2022 13 // Update Count : 245 14 14 // 15 15 … … 80 80 void previsit( AsmStmt * asmStmt ); 81 81 void previsit( IfStmt * ifStmt ); 82 void previsit( While Stmt * whileStmt );82 void previsit( WhileDoStmt * whileDoStmt ); 83 83 void previsit( ForStmt * forStmt ); 84 84 void previsit( SwitchStmt * switchStmt ); … … 502 502 } 503 503 504 void Resolver_old::previsit( While Stmt * whileStmt ) {505 findIntegralExpression( while Stmt->condition, indexer );504 void Resolver_old::previsit( WhileDoStmt * whileDoStmt ) { 505 findIntegralExpression( whileDoStmt->condition, indexer ); 506 506 } 507 507 … … 572 572 573 573 void Resolver_old::previsit( CatchStmt * catchStmt ) { 574 // Until we are very sure this invarent (ifs that move between passes have then Part)574 // Until we are very sure this invarent (ifs that move between passes have then) 575 575 // holds, check it. This allows a check for when to decode the mangling. 576 576 if ( IfStmt * ifStmt = dynamic_cast<IfStmt *>( catchStmt->body ) ) { 577 assert( ifStmt->then Part);577 assert( ifStmt->then ); 578 578 } 579 579 // Encode the catchStmt so the condition can see the declaration. … … 588 588 // Decode the catchStmt so everything is stored properly. 589 589 IfStmt * ifStmt = dynamic_cast<IfStmt *>( catchStmt->body ); 590 if ( nullptr != ifStmt && nullptr == ifStmt->then Part) {590 if ( nullptr != ifStmt && nullptr == ifStmt->then ) { 591 591 assert( ifStmt->condition ); 592 assert( ifStmt->else Part);592 assert( ifStmt->else_ ); 593 593 catchStmt->cond = ifStmt->condition; 594 catchStmt->body = ifStmt->else Part;594 catchStmt->body = ifStmt->else_; 595 595 ifStmt->condition = nullptr; 596 ifStmt->else Part= nullptr;596 ifStmt->else_ = nullptr; 597 597 delete ifStmt; 598 598 } … … 1272 1272 const ast::AsmStmt * previsit( const ast::AsmStmt * ); 1273 1273 const ast::IfStmt * previsit( const ast::IfStmt * ); 1274 const ast::While Stmt * previsit( const ast::WhileStmt * );1274 const ast::WhileDoStmt * previsit( const ast::WhileDoStmt * ); 1275 1275 const ast::ForStmt * previsit( const ast::ForStmt * ); 1276 1276 const ast::SwitchStmt * previsit( const ast::SwitchStmt * ); … … 1581 1581 } 1582 1582 1583 const ast::While Stmt * Resolver_new::previsit( const ast::WhileStmt * whileStmt ) {1583 const ast::WhileDoStmt * Resolver_new::previsit( const ast::WhileDoStmt * whileDoStmt ) { 1584 1584 return ast::mutate_field( 1585 while Stmt, &ast::WhileStmt::cond, findIntegralExpression( whileStmt->cond, symtab ) );1585 whileDoStmt, &ast::WhileDoStmt::cond, findIntegralExpression( whileDoStmt->cond, symtab ) ); 1586 1586 } 1587 1587 … … 1669 1669 1670 1670 const ast::CatchStmt * Resolver_new::previsit( const ast::CatchStmt * catchStmt ) { 1671 // Until we are very sure this invarent (ifs that move between passes have then Part)1671 // Until we are very sure this invarent (ifs that move between passes have then) 1672 1672 // holds, check it. This allows a check for when to decode the mangling. 1673 1673 if ( auto ifStmt = catchStmt->body.as<ast::IfStmt>() ) { 1674 assert( ifStmt->then Part);1674 assert( ifStmt->then ); 1675 1675 } 1676 1676 // Encode the catchStmt so the condition can see the declaration. 
… … 1687 1687 // Decode the catchStmt so everything is stored properly. 1688 1688 const ast::IfStmt * ifStmt = catchStmt->body.as<ast::IfStmt>(); 1689 if ( nullptr != ifStmt && nullptr == ifStmt->then Part) {1689 if ( nullptr != ifStmt && nullptr == ifStmt->then ) { 1690 1690 assert( ifStmt->cond ); 1691 assert( ifStmt->else Part);1691 assert( ifStmt->else_ ); 1692 1692 ast::CatchStmt * stmt = ast::mutate( catchStmt ); 1693 1693 stmt->cond = ifStmt->cond; 1694 stmt->body = ifStmt->else Part;1694 stmt->body = ifStmt->else_; 1695 1695 // ifStmt should be implicately deleted here. 1696 1696 return stmt; -
src/SynTree/Mutator.h
r4de48c5 r8cb149f 10 10 // Created On : Mon May 18 07:44:20 2015 11 11 // Last Modified By : Peter A. Buhr 12 // Last Modified On : Fri Mar 12 18:35:36 202113 // Update Count : 1812 // Last Modified On : Tue Feb 1 09:26:49 2022 13 // Update Count : 20 14 14 // 15 15 #pragma once … … 42 42 virtual Statement * mutate( DirectiveStmt * dirStmt ) = 0; 43 43 virtual Statement * mutate( IfStmt * ifStmt ) = 0; 44 virtual Statement * mutate( While Stmt * whileStmt ) = 0;44 virtual Statement * mutate( WhileDoStmt * whileDoStmt ) = 0; 45 45 virtual Statement * mutate( ForStmt * forStmt ) = 0; 46 46 virtual Statement * mutate( SwitchStmt * switchStmt ) = 0; -
src/SynTree/Statement.cc
r4de48c5 r8cb149f 9 9 // Author : Richard C. Bilson 10 10 // Created On : Mon May 18 07:44:20 2015 11 // Last Modified By : Andrew Beach12 // Last Modified On : Mon Jan 20 16:03:00 202013 // Update Count : 7111 // Last Modified By : Peter A. Buhr 12 // Last Modified On : Wed Feb 2 11:55:19 2022 13 // Update Count : 80 14 14 // 15 15 … … 145 145 } 146 146 147 IfStmt::IfStmt( Expression * condition, Statement * then Part, Statement * elsePart, std::list<Statement *> initialization ):148 Statement(), condition( condition ), then Part( thenPart ), elsePart( elsePart), initialization( initialization ) {}147 IfStmt::IfStmt( Expression * condition, Statement * then, Statement * else_, std::list<Statement *> initialization ):148 Statement(), condition( condition ), then( then ), else_( else_ ), initialization( initialization ) {} 149 149 150 150 IfStmt::IfStmt( const IfStmt & other ) : 151 Statement( other ), condition( maybeClone( other.condition ) ), then Part( maybeClone( other.thenPart ) ), elsePart( maybeClone( other.elsePart) ) {151 Statement( other ), condition( maybeClone( other.condition ) ), then( maybeClone( other.then ) ), else_( maybeClone( other.else_ ) ) { 152 152 cloneAll( other.initialization, initialization ); 153 153 } … … 156 156 deleteAll( initialization ); 157 157 delete condition; 158 delete then Part;159 delete else Part;158 delete then; 159 delete else_; 160 160 } 161 161 … … 177 177 178 178 os << indent+1; 179 then Part->print( os, indent+1 );180 181 if ( else Part!= nullptr ) {179 then->print( os, indent+1 ); 180 181 if ( else_ != nullptr ) { 182 182 os << indent << "... else: " << endl; 183 183 os << indent+1; 184 else Part->print( os, indent+1 );184 else_->print( os, indent+1 ); 185 185 } // if 186 186 } … … 246 246 } 247 247 248 WhileStmt::WhileStmt( Expression * condition, Statement * body, std::list< Statement * > & initialization, bool isDoWhile ): 249 Statement(), condition( condition), body( body), initialization( initialization ), isDoWhile( isDoWhile) { 250 } 251 252 WhileStmt::WhileStmt( const WhileStmt & other ): 248 WhileDoStmt::WhileDoStmt( Expression * condition, Statement * body, const std::list< Statement * > & initialization, bool isDoWhile ): 249 Statement(), condition( condition ), body( body ), else_( nullptr ), initialization( initialization ), isDoWhile( isDoWhile) { 250 } 251 252 WhileDoStmt::WhileDoStmt( Expression * condition, Statement * body, Statement * else_, const std::list< Statement * > & initialization, bool isDoWhile ): 253 Statement(), condition( condition), body( body ), else_( else_ ), initialization( initialization ), isDoWhile( isDoWhile) { 254 } 255 256 WhileDoStmt::WhileDoStmt( const WhileDoStmt & other ): 253 257 Statement( other ), condition( maybeClone( other.condition ) ), body( maybeClone( other.body ) ), isDoWhile( other.isDoWhile ) { 254 258 } 255 259 256 While Stmt::~WhileStmt() {260 WhileDoStmt::~WhileDoStmt() { 257 261 delete body; 258 262 delete condition; 259 263 } 260 264 261 void While Stmt::print( std::ostream & os, Indenter indent ) const {265 void WhileDoStmt::print( std::ostream & os, Indenter indent ) const { 262 266 os << "While on condition: " << endl ; 263 267 condition->print( os, indent+1 ); … … 268 272 } 269 273 270 ForStmt::ForStmt( std::list<Statement *> initialization, Expression * condition, Expression * increment, Statement * body, Statement * else_ ): 275 Statement(), initialization( initialization ), condition( condition ), increment( increment ), body( body ), else_( else_ ) { 272 276 } 273 277 274 278 ForStmt::ForStmt( const ForStmt & other ): 275 Statement( other ), condition( maybeClone( other.condition ) ), increment( maybeClone( other.increment ) ), body( maybeClone( other.body ) ) {279 Statement( other ), condition( maybeClone( other.condition ) ), increment( maybeClone( other.increment ) ), body( maybeClone( other.body ) ), else_( maybeClone( other.else_ ) ) { 276 280 cloneAll( other.initialization, initialization ); 277 281 … … 283 287 delete increment; 284 288 delete body; 289 delete else_; 285 290 } 286 291 … … 311 316 os << "\n" << indent << "... with body: \n" << indent+1; 312 317 body->print( os, indent+1 ); 318 } 319 320 if ( else_ != nullptr ) { 321 os << "\n" << indent << "... with body: \n" << indent+1; 322 else_->print( os, indent+1 ); 313 323 } 314 324 os << endl;
src/SynTree/Statement.h
r4de48c5 r8cb149f 10 10 // Created On : Mon May 18 07:44:20 2015 11 11 // Last Modified By : Peter A. Buhr 12 // Last Modified On : Fri Jan 10 14:13:24 202013 // Update Count : 8512 // Last Modified On : Wed Feb 2 11:49:17 2022 13 // Update Count : 94 14 14 // 15 15 … … 148 148 public: 149 149 Expression * condition; 150 Statement * then Part;151 Statement * else Part;150 Statement * then; 151 Statement * else_; 152 152 std::list<Statement *> initialization; 153 153 154 IfStmt( Expression * condition, Statement * then Part, Statement * elsePart,154 IfStmt( Expression * condition, Statement * then, Statement * else_, 155 155 std::list<Statement *> initialization = std::list<Statement *>() ); 156 156 IfStmt( const IfStmt & other ); … … 160 160 Expression * get_condition() { return condition; } 161 161 void set_condition( Expression * newValue ) { condition = newValue; } 162 Statement * get_then Part() { return thenPart; }163 void set_then Part( Statement * newValue ) { thenPart= newValue; }164 Statement * get_else Part() { return elsePart; }165 void set_else Part( Statement * newValue ) { elsePart= newValue; }162 Statement * get_then() { return then; } 163 void set_then( Statement * newValue ) { then = newValue; } 164 Statement * get_else() { return else_; } 165 void set_else( Statement * newValue ) { else_ = newValue; } 166 166 167 167 virtual IfStmt * clone() const override { return new IfStmt( *this ); } … … 225 225 }; 226 226 227 class While Stmt : public Statement {227 class WhileDoStmt : public Statement { 228 228 public: 229 229 Expression * condition; 230 230 Statement * body; 231 Statement * else_; 231 232 std::list<Statement *> initialization; 232 233 bool isDoWhile; 233 234 234 WhileStmt( Expression * condition, Statement * body, std::list<Statement *> & initialization, bool isDoWhile = false ); 235 WhileStmt( const WhileStmt & other ); 236 virtual ~WhileStmt(); 235 WhileDoStmt( Expression * condition, Statement * body, const std::list<Statement *> & initialization, bool isDoWhile = false ); 236 WhileDoStmt( Expression * condition, Statement * body, Statement * else_, const std::list<Statement *> & initialization, bool isDoWhile = false ); 237 WhileDoStmt( const WhileDoStmt & other ); 238 virtual ~WhileDoStmt(); 237 239 238 240 Expression * get_condition() { return condition; } … … 243 245 void set_isDoWhile( bool newValue ) { isDoWhile = newValue; } 244 246 245 virtual While Stmt * clone() const override { return new WhileStmt( *this ); }247 virtual WhileDoStmt * clone() const override { return new WhileDoStmt( *this ); } 246 248 virtual void accept( Visitor & v ) override { v.visit( this ); } 247 249 virtual void accept( Visitor & v ) const override { v.visit( this ); } … … 256 258 Expression * increment; 257 259 Statement * body; 258 259 ForStmt( std::list<Statement *> initialization, Expression * condition = nullptr, Expression * increment = nullptr, Statement * body = nullptr ); 260 Statement * else_; 261 262 ForStmt( std::list<Statement *> initialization, Expression * condition = nullptr, Expression * increment = nullptr, Statement * body = nullptr, Statement * else_ = nullptr ); 260 263 ForStmt( const ForStmt & other ); 261 264 virtual ~ForStmt(); -
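Statement.h above gives both loop classes an else_ member: WhileDoStmt gains a second constructor taking the else clause, while ForStmt appends else_ as a defaulted constructor parameter. A standalone sketch of the resulting node shape (stand-in types; the real classes also carry clone/accept/print machinery omitted here):

#include <iostream>
#include <list>
#include <string>

// Stand-ins for the real Expression/Statement hierarchy.
struct Expression { std::string text; };
struct Statement { std::string text; };

// Shape of the extended loop node: an optional else_ clause alongside the
// usual condition/body, mirroring the new WhileDoStmt constructor.
struct WhileDoStmt : Statement {
    Expression * condition;
    Statement * body;
    Statement * else_;  // nullptr when the loop has no else clause
    std::list< Statement * > initialization;
    bool isDoWhile;

    WhileDoStmt( Expression * cond, Statement * body, Statement * else_,
                 const std::list< Statement * > & init, bool isDoWhile )
        : condition( cond ), body( body ), else_( else_ ),
          initialization( init ), isDoWhile( isDoWhile ) {}
};

int main() {
    Expression cond{ "i < 10" };
    Statement body{ "sum += i;" }, alt{ "..." };
    WhileDoStmt loop( &cond, &body, &alt, {}, false );
    std::cout << ( loop.else_ ? "loop carries an else clause\n" : "plain loop\n" );
}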
src/SynTree/SynTree.h
r4de48c5 r8cb149f 10 10 // Created On : Mon May 18 07:44:20 2015 11 11 // Last Modified By : Peter A. Buhr 12 // Last Modified On : Fri Mar 12 18:56:44 202113 // Update Count : 1 312 // Last Modified On : Tue Feb 1 09:22:33 2022 13 // Update Count : 14 14 14 // 15 15 … … 45 45 class DirectiveStmt; 46 46 class IfStmt; 47 class While Stmt;47 class WhileDoStmt; 48 48 class ForStmt; 49 49 class SwitchStmt; -
src/SynTree/Visitor.h
r4de48c5 r8cb149f 10 10 // Created On : Mon May 18 07:44:20 2015 11 11 // Last Modified By : Peter A. Buhr 12 // Last Modified On : Fri Mar 12 18:35:35 202113 // Update Count : 1 512 // Last Modified On : Tue Feb 1 09:26:57 2022 13 // Update Count : 17 14 14 // 15 15 … … 60 60 virtual void visit( IfStmt * node ) { visit( const_cast<const IfStmt *>(node) ); } 61 61 virtual void visit( const IfStmt * ifStmt ) = 0; 62 virtual void visit( While Stmt * node ) { visit( const_cast<const WhileStmt *>(node) ); }63 virtual void visit( const While Stmt * whileStmt ) = 0;62 virtual void visit( WhileDoStmt * node ) { visit( const_cast<const WhileDoStmt *>(node) ); } 63 virtual void visit( const WhileDoStmt * whileDoStmt ) = 0; 64 64 virtual void visit( ForStmt * node ) { visit( const_cast<const ForStmt *>(node) ); } 65 65 virtual void visit( const ForStmt * forStmt ) = 0;
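The Visitor changes keep the file's standing dispatch pattern: the mutable overload is a one-line shim that const_casts and forwards to the pure virtual const overload, so a pass only implements the const version. A compilable sketch of that delegation, with a single node class standing in for the full hierarchy:

#include <iostream>

class WhileDoStmt;  // forward declaration, as in SynTree.h

struct Visitor {
    // The mutable overload forwards to the const overload, mirroring Visitor.h.
    virtual void visit( WhileDoStmt * node ) { visit( const_cast< const WhileDoStmt * >( node ) ); }
    virtual void visit( const WhileDoStmt * node ) = 0;
    virtual ~Visitor() = default;
};

class WhileDoStmt {};  // empty stand-in for the real AST node

struct CountPass : Visitor {
    using Visitor::visit;  // keep the mutable shim visible alongside the override
    int seen = 0;
    void visit( const WhileDoStmt * ) override { seen += 1; }
};

int main() {
    WhileDoStmt loop;
    CountPass pass;
    pass.visit( &loop );  // mutable overload, forwards to the const override
    std::cout << "visited " << pass.seen << " WhileDoStmt node(s)\n";
}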