\chapter{Performance}
\label{c:performance}

Performance is of secondary importance for most of this project.
Instead, the focus was to get the features working. The only performance
requirement is to ensure the tests for correctness run in a reasonable
amount of time. Hence, a few basic performance tests were performed to
check this requirement.

\section{Test Set-Up}
Tests were run in \CFA, \Cpp, Java and Python.
In addition, there are two sets of tests for \CFA,
one with termination and one with resumption.

\Cpp is the most comparable language because both it and \CFA use the same
framework, libunwind.
In fact, the comparison is almost entirely one of implementation quality.
Specifically, \CFA's EHM has had significantly less time to be optimized and
does not generate its own assembly. It does have a slight advantage in that
\Cpp has to do some extra bookkeeping to support its utility functions,
but otherwise \Cpp should have a significant advantage.

Java is a popular language with similar termination semantics, but
it is implemented in a very different environment: a virtual machine with
garbage collection.
It also implements the finally clause on try blocks, allowing for a direct
feature-to-feature comparison.
As with \Cpp, Java's implementation is mature and has more optimizations
and extra features compared to \CFA.

Python is used as an alternative comparison because of the \CFA EHM's
current performance goal, which is to not be prohibitively slow while the
features are designed and examined. Python has a similar performance goal for
creating quick scripts, and its wide use suggests it has achieved that goal.

Unfortunately, there are no notable modern programming languages with
resumption exceptions. Even the older programming languages with resumption
seem to be notable only for having resumption.
Instead, resumption is compared to its simulation in other programming
languages: fixup functions that are explicitly passed into a function.

All tests are run inside a main loop that repeatedly performs a test.
This approach avoids start-up or tear-down time from
affecting the timing results.
The number of times the loop is run is configurable from the command line;
the number used in the timing runs is given with the results per test.
% Tests ran their main loop a million times.
The Java tests run the main loop 1000 times before
beginning the actual test to ``warm-up'' the JVM.
% All other languages are precompiled or interpreted.

Timing is done internally, with time measured immediately before and
after the test loop. The difference is calculated and printed.
The loop structure and internal timing mean it is impossible to test
unhandled exceptions in \Cpp and Java, as that would cause the process to
terminate.
Luckily, performance on the ``give-up and kill the process'' path is not
critical.

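As a sketch of this set-up, the timing portion of a test can be written in \Cpp as follows; the loop body and names are illustrative, not the actual benchmark code:

```cpp
#include <chrono>

// Sketch of the internal timing harness (names are illustrative): the
// clock is read immediately before and after the main loop, so start-up
// and tear-down costs never appear in the reported time. The benchmark
// driver prints the returned difference.
static double time_loop(long n) {
    auto start = std::chrono::steady_clock::now();
    for (long i = 0; i < n; ++i) {
        asm volatile ("");  // stand-in for one test; blocks over-optimization
    }
    auto end = std::chrono::steady_clock::now();
    return std::chrono::duration<double>(end - start).count();
}
```

The empty @asm volatile@ statement mirrors the technique described below for preventing the optimizer from deleting the loop.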
The exceptions used in these tests are always derived from
the base exception for the language.
This requirement minimizes performance differences based
on the object model used to represent the exception.

All tests are designed to be as minimal as possible, while still preventing
excessive optimizations.
For example, empty inline assembly blocks are used in \CFA and \Cpp to
prevent excessive optimizations while adding no actual work.

% We don't use catch-alls but if we did:
% Catch-alls are done by catching the root exception type (not using \Cpp's
% \code{C++}{catch(...)}).

When collecting data, each test is run eleven times. The top three and bottom
three results are discarded and the remaining five values are averaged.
The tests are run with the latest (still pre-release) \CFA compiler,
using gcc-10 as a backend.
g++-10 is used for \Cpp.
Java tests are compiled and run with version 11.0.11.
Python used version 3.8.
The machines used to run the tests are:
\todo{Get patch versions for python, gcc and g++.}
\begin{itemize}[nosep]
\item ARM 2280 Kunpeng 920 48-core 2$\times$socket
\lstinline{@} 2.6 GHz running Linux v5.11.0-25
\item AMD 6380 Abu Dhabi 16-core 4$\times$socket
\lstinline{@} 2.5 GHz running Linux v5.11.0-25
\end{itemize}
These represent the two major families of hardware architecture.

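The data-collection rule above can be sketched as follows; this is a hypothetical helper, not part of the benchmark suite:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Sketch of the data-collection rule: of eleven timing runs, the top
// three and bottom three results are discarded and the remaining five
// values are averaged.
static double truncated_mean(std::vector<double> runs) {
    assert(runs.size() == 11);
    std::sort(runs.begin(), runs.end());
    double sum = 0.0;
    for (int i = 3; i < 8; ++i) sum += runs[i];  // the middle five values
    return sum / 5.0;
}
```

Discarding the extremes reduces the influence of scheduling noise and outlier runs on the reported value.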
\section{Tests}
The following tests were selected to test the performance of different
components of the exception system.
They should provide a guide as to where the EHM's costs are found.

\paragraph{Stack Traversal}
This group measures the cost of traversing the stack
(and, in termination, unwinding it).
Inside the main loop is a call to a recursive function.
This function calls itself F times before raising an exception.
F is configurable from the command line, but is usually 100.
This builds up many stack frames, and any contents they may have,
before the raise.
The exception is always handled at the base of the stack.
For example, the Empty test for \CFA resumption looks like:
\begin{cfa}
void unwind_empty(unsigned int frames) {
	if (frames) {
		unwind_empty(frames - 1);
	} else {
		throwResume (empty_exception){&empty_vt};
	}
}
\end{cfa}
Other test cases have additional code around the recursive call, adding
something besides simple stack frames to the stack.
Note that both termination and resumption have to traverse
the stack, but only termination has to unwind it.
\begin{itemize}[nosep]
% \item None:
% Reuses the empty test code (see below) except that the number of frames
% is set to 0 (this is the only test for which the number of frames is not
% 100). This isolates the start-up and shut-down time of a throw.
\item Empty:
The repeating function is empty except for the necessary control code.
As other traversal tests add to this, it is the baseline for the group,
as the cost comes from traversing over and unwinding a stack frame
that has no other interactions with the exception system.
\item Destructor:
The repeating function creates an object with a destructor before calling
itself.
Comparing this to the Empty test gives the time to traverse over and
unwind a destructor.
\item Finally:
The repeating function calls itself inside a try block with a finally clause
attached.
Comparing this to the Empty test gives the time to traverse over and
unwind a finally clause.
\item Other Handler:
The repeating function calls itself inside a try block with a handler that
does not match the raised exception, but is of the same kind of handler.
This means that the EHM has to check each handler, but continues
past all of them until it reaches the base of the stack.
Comparing this to the Empty test gives the time to traverse over and
unwind a handler.
\end{itemize}

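For comparison with the \CFA code above, the \Cpp version of the Empty test follows the same shape; the sketch below uses illustrative names and is not the exact benchmark source:

```cpp
#include <exception>

// Illustrative C++ analogue of the Empty Traversal test: the function
// calls itself `frames` times to build up stack frames, then throws.
struct empty_exception : std::exception {};

static void unwind_empty(unsigned int frames) {
    if (frames) {
        unwind_empty(frames - 1);
    } else {
        throw empty_exception{};
    }
}

// The handler sits at the base of the stack and does no work.
static bool run_one(unsigned int frames) {
    try {
        unwind_empty(frames);
        return false;  // unreachable: the raise always happens
    } catch (empty_exception &) {
        return true;   // handled at the base of the stack
    }
}
```

The other traversal tests add a destructor, a finally clause, or a non-matching handler around the recursive call, leaving the rest of this structure unchanged.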
\paragraph{Cross Try Statement}
This group of tests measures the cost of setting up exception handling when it
is not used (because the exceptional case did not occur).
Tests repeatedly cross (enter, execute, and leave) a try statement but never
perform a raise.
\begin{itemize}[nosep]
\item Handler:
The try statement has a handler (of the appropriate kind).
\item Finally:
The try statement has a finally clause.
\end{itemize}

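As a sketch of this group, the Handler variant in \Cpp might look like the following (illustrative, not the actual benchmark source):

```cpp
#include <exception>

// Sketch of the Cross Handler test: the try statement is entered and left
// on every iteration, but no exception is ever raised, so only the
// set-up/tear-down cost of crossing the try statement is measured.
static long cross_handler(long n) {
    long completed = 0;
    for (long i = 0; i < n; ++i) {
        try {
            asm volatile ("");  // no-op body; prevents over-optimization
            ++completed;
        } catch (std::exception &) {
            // never reached: the test performs no raise
        }
    }
    return completed;
}
```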
\paragraph{Conditional Matching}
This group measures the cost of conditional matching.
Only \CFA implements the language-level conditional match;
the other languages mimic it with an ``unconditional'' match (it still
checks the exception's type) and a conditional re-raise if it is not supposed
to handle that exception.

Here is the pattern shown in \CFA and \Cpp. Java and Python use the same
pattern as \Cpp, but with their own syntax.

\begin{minipage}{0.45\textwidth}
\begin{cfa}
try {
	...
} catch (exception_t * e ;
		should_catch(e)) {
	...
}
\end{cfa}
\end{minipage}
\begin{minipage}{0.55\textwidth}
\begin{lstlisting}[language=C++]
try {
	...
} catch (std::exception & e) {
	if (!should_catch(e)) throw;
	...
}
\end{lstlisting}
\end{minipage}
\begin{itemize}[nosep]
\item Match All:
The condition is always true. (Always matches or never re-raises.)
\item Match None:
The condition is always false. (Never matches or always re-raises.)
\end{itemize}

\paragraph{Resumption Simulation}
A slightly altered version of the Empty Traversal test is used when comparing
resumption to fix-up routines.
The handler, either the actual resumption handler or the fix-up routine,
always captures a variable at the base of the loop,
and receives a reference to a variable at the raise site, either as a
field on the exception or as an argument to the fix-up routine.
% I don't actually know why that is here but not anywhere else.

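The fix-up-routine side of this comparison can be sketched in \Cpp as follows; all names are illustrative:

```cpp
#include <functional>

// Sketch of the fix-up-routine version of the Empty Traversal test:
// instead of raising a resumption exception, a fix-up routine is passed
// down through every call and invoked at the "raise" site. It captures a
// variable at the base of the loop and receives a reference to a
// variable at the raise site, mirroring the resumption version.
static void fixup_empty(unsigned int frames,
                        const std::function<void(int &)> & fixup) {
    if (frames) {
        fixup_empty(frames - 1, fixup);
    } else {
        int raise_site_value = 0;
        fixup(raise_site_value);  // stands in for the resumption raise
    }
}

static int run_fixup(unsigned int frames) {
    int base_value = 0;  // captured at the base of the loop
    fixup_empty(frames, [&base_value](int & at_raise) {
        at_raise = 42;   // write through the raise-site reference
        base_value += 1; // update the captured variable
    });
    return base_value;
}
```

Unlike resumption, the routine must be passed through every intermediate call, which is the cost the comparison in \autoref{t:PerformanceFixupRoutines} measures.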
%\section{Cost in Size}
%Using exceptions also has a cost in the size of the executable.
%Although it is sometimes ignored
%
%There is a size cost to defining a personality function but the later problem
%is the LSDA which will be generated for every function.
%
%(I haven't actually figured out how to compare this, probably using something
%related to -fexceptions.)

\section{Results}
% First, introduce the tables.
\autoref{t:PerformanceTermination},
\autoref{t:PerformanceResumption}
and~\autoref{t:PerformanceFixupRoutines}
show the test results.
In cases where a feature is not supported by a language, the test is skipped
for that language and the result is marked N/A.
There are also cases where the feature is supported but measuring its
cost is impossible. This happened with Java, whose JIT optimizes
away the tests and cannot be stopped.\cite{Dice21}
These tests are marked N/C.
To get results in a consistent range (1 second to 1 minute is ideal;
going higher is better than going lower), N, the number of iterations of the
main loop in each test, is varied between tests. It is also given in the
results and has a value in the millions.

An anomaly in some results came from \CFA's use of gcc nested functions.
These nested functions are used to create closures that can access stack
variables in their lexical scope.
However, if they do so, then they can cause the benchmark's run-time to
increase by an order of magnitude.
The simplest solution is to make those values global variables instead
of function-local variables.
% Do we know if editing a global inside nested function is a problem?
Tests that had to be modified to avoid this problem have been marked
with a ``*'' in the results.

% Now come the tables themselves:
% You might need a wider window for this.

\begin{table}[htb]
\centering
\caption{Termination Performance Results (sec)}
\label{t:PerformanceTermination}
\begin{tabular}{|r|*{2}{|r r r r|}}
\hline
                        & \multicolumn{4}{c||}{AMD} & \multicolumn{4}{c|}{ARM} \\
\cline{2-9}
N\hspace{8pt}           & \multicolumn{1}{c}{\CFA} & \multicolumn{1}{c}{\Cpp} & \multicolumn{1}{c}{Java} & \multicolumn{1}{c||}{Python} &
\multicolumn{1}{c}{\CFA} & \multicolumn{1}{c}{\Cpp} & \multicolumn{1}{c}{Java} & \multicolumn{1}{c|}{Python} \\
\hline
Empty Traversal (1M)    & 3.4   & 2.8   & 18.3  & 23.4  & 3.7   & 3.2   & 15.5  & 14.8  \\
D'tor Traversal (1M)    & 48.4  & 23.6  & N/A   & N/A   & 64.2  & 29.0  & N/A   & N/A   \\
Finally Traversal (1M)  & 3.4*  & N/A   & 17.9  & 29.0  & 4.1*  & N/A   & 15.6  & 19.0  \\
Other Traversal (1M)    & 3.6*  & 23.2  & 18.2  & 32.7  & 4.0*  & 24.5  & 15.5  & 21.4  \\
Cross Handler (100M)    & 6.0   & 0.9   & N/C   & 37.4  & 10.0  & 0.8   & N/C   & 32.2  \\
Cross Finally (100M)    & 0.9   & N/A   & N/C   & 44.1  & 0.8   & N/A   & N/C   & 37.3  \\
Match All (10M)         & 32.9  & 20.7  & 13.4  & 4.9   & 36.2  & 24.5  & 12.0  & 3.1   \\
Match None (10M)        & 32.7  & 50.3  & 11.0  & 5.1   & 36.3  & 71.9  & 12.3  & 4.2   \\
\hline
\end{tabular}
\end{table}

\begin{table}[htb]
\centering
\caption{Resumption Performance Results (sec)}
\label{t:PerformanceResumption}
\begin{tabular}{|r||r||r|}
\hline
N\hspace{8pt}           & AMD   & ARM   \\
\hline
Empty Traversal (10M)   & 0.2   & 0.3   \\
D'tor Traversal (10M)   & 1.8   & 1.0   \\
Finally Traversal (10M) & 1.7   & 1.0   \\
Other Traversal (10M)   & 22.6  & 25.9  \\
Cross Handler (100M)    & 8.4   & 11.9  \\
Match All (100M)        & 2.3   & 3.2   \\
Match None (100M)       & 2.9   & 3.9   \\
\hline
\end{tabular}
\end{table}

\begin{table}[htb]
\centering
\small
\caption{Resumption/Fixup Routine Comparison (sec)}
\label{t:PerformanceFixupRoutines}
\setlength{\tabcolsep}{5pt}
\begin{tabular}{|r|*{2}{|r r r r r|}}
\hline
                    & \multicolumn{5}{c||}{AMD} & \multicolumn{5}{c|}{ARM} \\
\cline{2-11}
N\hspace{8pt}       & \multicolumn{1}{c}{Raise} & \multicolumn{1}{c}{\CFA} & \multicolumn{1}{c}{\Cpp} & \multicolumn{1}{c}{Java} & \multicolumn{1}{c||}{Python} &
\multicolumn{1}{c}{Raise} & \multicolumn{1}{c}{\CFA} & \multicolumn{1}{c}{\Cpp} & \multicolumn{1}{c}{Java} & \multicolumn{1}{c|}{Python} \\
\hline
Resume Empty (10M)  & 3.8 & 3.5 & 14.7 & 2.3 & 176.1 & 0.3 & 0.1 & 8.9 & 1.2 & 119.9 \\
%Resume Other (10M) & 4.0* & 0.1* & 21.9 & 6.2 & 381.0 & 0.3* & 0.1* & 13.2 & 5.0 & 290.7 \\
\hline
\end{tabular}
\end{table}

% Now discuss the results in the tables.
One result not directly related to \CFA, but important to keep in mind, is
that, for exceptions, the standard intuition about which languages should be
faster often does not hold.
For example, there are a few cases where Python out-performs
\CFA, \Cpp and Java.
The most likely explanation is that, since exceptions
are rarely considered to be the common case, the more optimized languages
make that case expensive in order to improve other cases.
In addition, languages with high-level representations have a much
easier time scanning the stack, as there is less to decode.

As stated,
the performance tests are not attempting to show \CFA has a new competitive
way of implementing exception handling.
The only performance requirement is to ensure the \CFA EHM has reasonable
performance for prototyping.
Although that may be hard to quantify exactly, I believe it has succeeded
in that regard.
Details on the different test cases follow.

\subsection{Termination, \autoref{t:PerformanceTermination}}

\begin{description}
\item[Empty Traversal]
\CFA is slower than \Cpp, but faster than the other languages.
This result is to be expected, as \CFA's implementation is the closest to \Cpp's.

\item[D'tor Traversal]
Running destructors causes a huge slowdown in the two languages that support
them. \CFA has a higher proportionate slowdown, but it is similar to \Cpp's.
Considering the amount of work done in a destructor is virtually zero (an
empty asm statement), the cost must come from the change of context required
to trigger the destructor.

\item[Finally Traversal]
Performance is similar to Empty Traversal in all languages that support finally
clauses. Only Python shows a change in run-time larger than random noise, and
even that is not large.
Despite the similarity between finally clauses and destructors,
finally clauses seem to avoid the spike that run-time destructors have.
Possibly some optimization removes the cost of changing contexts.
\todo{OK, I think the finally clause may have been optimized out.}

\item[Other Traversal]
For \Cpp, stopping to check if a handler applies seems to be about as
expensive as stopping to run a destructor.
This results in a significant jump.

Other languages experience a small increase in run-time.
The small increase likely comes from running the checks,
but they could avoid the spike by not having the same kind of overhead for
switching to the check's context.

\todo{Could revisit Other Traversal, after Finally Traversal.}

\item[Cross Handler]
Here \CFA falls behind \Cpp by a much more significant margin.
This is likely due to the fact that \CFA has to insert two extra function
calls, while \Cpp does not have to execute any other instructions.
Python is much further behind.

\item[Cross Finally]
\CFA's performance now matches \Cpp's from Cross Handler.
If the code from the finally clause is being inlined,
which is just an asm comment, then there are no additional instructions
to execute when exiting the try statement normally.

\item[Conditional Match]
Both of the conditional matching tests can be considered on their own.
However, for evaluating the value of conditional matching itself, the
comparison of the two sets of results is useful.
Consider the massive jump in run-time for \Cpp going from Match All to Match
None, which none of the other languages have.
Some strange interaction is causing run-time to more than double for doing
twice as many raises.
Java and Python avoid this problem and have similar run-times for both tests,
possibly through resource reuse or their program representation.
Although \CFA is built like \Cpp, it avoids the problem as well; this matches
the pattern of the conditional match, which makes the two execution paths
very similar.

\end{description}

\subsection{Resumption, \autoref{t:PerformanceResumption}}

Moving on to resumption, there is one general note:
resumption is \textit{fast}. The only test where it fell
behind termination is Cross Handler.
In every other case, the number of iterations had to be increased by a
factor of 10 to get the run-time into an appropriate range,
and in some cases resumption still took less time.

% I tried \paragraph and \subparagraph, maybe if I could adjust spacing
% between paragraphs those would work.
\begin{description}
\item[Empty Traversal]
See above for the general speed-up notes.
This result is not surprising, as resumption's linked-list approach
means that traversing over stack frames without a resumption handler is
$O(1)$.

\item[D'tor Traversal]
Resumption does not have the spike in run-time that termination has.
The run-time is actually very similar to Finally Traversal.
As resumption does not unwind the stack, both destructors and finally
clauses are run while walking down the stack as the recursion returns,
so it follows that their performance is similar.

\item[Finally Traversal]
% The increase in run-time from Empty Traversal (once adjusted for
% the number of iterations) is roughly the same as for termination.
% This suggests that the
See the D'tor Traversal discussion.

\item[Other Traversal]
Traversing across handlers reduces resumption's advantage, as it actually
has to stop and check each one.
Resumption still comes out ahead (adjusting for iterations), but by much less
than in the other cases.

\item[Cross Handler]
This is the only test case where resumption could not keep up with
termination, although the difference is not as significant as in many other
cases.
It is simply a matter of where the costs come from. \PAB{What does this mean?
Even if \CFA termination
is not ``zero-cost", passing through an empty function still seems to be
cheaper than updating global values.}

\item[Conditional Match]
Resumption shows a slight slowdown if the exception is not matched
by the first handler, which follows from the fact that the second handler now
has to be checked. However, the difference is not large.

\end{description}

\subsection{Resumption/Fixup, \autoref{t:PerformanceFixupRoutines}}

Finally, there are the results of the resumption/fix-up routine comparison.
These results are surprisingly varied. It is possible that creating a closure
has more to do with performance than passing the argument through layers of
calls.
Even with 100 stack frames, though, resumption is only about as fast as
manually passing a fixup routine.
However, as the number of fix-up routines is increased, the cost of passing
them should make resumption's dynamic search cheaper.
So there is a cost for the additional power and flexibility that exceptions
provide.