\chapter{Performance}
\label{c:performance}

Performance is of secondary importance for most of this project.
Instead, the focus was to get the features working. The only performance
requirement is to ensure the tests for correctness run in a reasonable
amount of time. Hence, only a few basic performance tests were performed to
check this requirement.

\section{Test Set-Up}
Tests were run in \CFA, \Cpp, Java and Python.
In addition, there are two sets of tests for \CFA:
one with termination and one with resumption.

GCC C++ is the most comparable language because both it and \CFA use the same
framework, libunwind.
In fact, the comparison is almost entirely in quality of implementation.
Specifically, \CFA's EHM has had significantly less time to be optimized and
does not generate its own assembly. It does have a slight advantage in that
\Cpp has to do some extra bookkeeping to support its utility functions,
but otherwise \Cpp should have a significant advantage.

Java, a popular language with similar termination semantics,
is implemented in a very different environment, a virtual machine with
garbage collection.
It also implements the finally clause on try blocks, allowing for a direct
feature-to-feature comparison.
As with \Cpp, Java's implementation is mature and has more optimizations
and extra features than \CFA's.

Python is used as an alternative comparison because of the \CFA EHM's
current performance goal, which is not to be prohibitively slow while the
features are designed and examined. Python has similar performance goals for
creating quick scripts, and its wide use suggests it has achieved those goals.

Unfortunately, there are no notable modern programming languages with
resumption exceptions. Even the older programming languages with resumption
seem to be notable only for having resumption.
On the other hand, the functional equivalents to resumption are too new.
There do not seem to be any standard implementations in well-known
languages; so far, they seem confined to extensions and research languages.
% There was some maybe interesting comparison to an OCaml extension
% but I'm not sure how to get that working if it is interesting.
Instead, resumption is compared to its simulation in other programming
languages: fixup functions that are explicitly passed into a function.

All tests are run inside a main loop that repeatedly performs a test.
This approach avoids start-up or tear-down time from
affecting the timing results.
The number of times the loop is run is configurable from the command line;
the number used in the timing runs is given with the results per test.
The Java tests run the main loop 1000 times before
beginning the actual test to ``warm up'' the JVM.
% All other languages are precompiled or interpreted.

Timing is done internally, with time measured immediately before and
after the test loop. The difference is calculated and printed.
The loop structure and internal timing mean it is impossible to test
unhandled exceptions in \Cpp and Java, as that would cause the process to
terminate.
Luckily, performance on the ``give up and kill the process'' path is not
critical.
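As an illustrative sketch (not the actual harness), the \Cpp version of this
loop structure could time a test as follows, where the test body and all
names are placeholders:
\begin{lstlisting}[language=C++]
#include <chrono>

// The body of one test; the empty asm block stops the compiler
// from optimizing the otherwise-empty loop away.
static void one_test() {
	asm volatile ("");
}

// Run the test body the given number of times, measuring time
// immediately before and after the loop, and return the elapsed
// seconds.
double time_loop(unsigned long iterations) {
	auto start = std::chrono::steady_clock::now();
	for (unsigned long i = 0 ; i < iterations ; ++i) {
		one_test();
	}
	auto end = std::chrono::steady_clock::now();
	return std::chrono::duration<double>(end - start).count();
}
\end{lstlisting}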

The exceptions used in these tests are always based on
the base exception for the language.
This requirement minimizes performance differences based
on the object model used to represent the exception.

All tests are designed to be as minimal as possible, while still preventing
excessive optimizations.
For example, empty inline assembly blocks are used in \CFA and \Cpp to
prevent excessive optimizations while adding no actual work.

% We don't use catch-alls but if we did:
% Catch-alls are done by catching the root exception type (not using \Cpp's
% \code{C++}{catch(...)}).

When collecting data, each test is run eleven times. The top three and bottom
three results are discarded and the remaining five values are averaged.
The tests are run with the latest (still pre-release) \CFA compiler,
using gcc-10 10.3.0 as a backend.
g++-10 10.3.0 is used for \Cpp.
Java tests are compiled and run with Oracle OpenJDK version 11.0.11.
Python tests use CPython version 3.8.10.
The machines used to run the tests are:
\begin{itemize}[nosep]
\item ARM 2280 Kunpeng 920 48-core 2$\times$socket
      \lstinline{@} 2.6 GHz running Linux v5.11.0-25
\item AMD 6380 Abu Dhabi 16-core 4$\times$socket
      \lstinline{@} 2.5 GHz running Linux v5.11.0-25
\end{itemize}
These represent the two major families of hardware architecture.

\section{Tests}
The following tests were selected to test the performance of different
components of the exception system.
They should provide a guide as to where the EHM's costs are found.

\paragraph{Stack Traversal}
This group of tests measures the cost of traversing the stack
(and in termination, unwinding it).
Inside the main loop is a call to a recursive function.
This function calls itself F times before raising an exception.
F is configurable from the command line, but is usually 100.
This builds up many stack frames, and any contents they may have,
before the raise.
The exception is always handled at the base of the stack.
For example the Empty test for \CFA resumption looks like:
\begin{cfa}
void unwind_empty(unsigned int frames) {
	if (frames) {
		unwind_empty(frames - 1);
	} else {
		throwResume (empty_exception){&empty_vt};
	}
}
\end{cfa}
Other test cases have additional code around the recursive call adding
something besides simple stack frames to the stack.
Note that both termination and resumption have to traverse over
the stack but only termination has to unwind it.
\begin{itemize}[nosep]
% \item None:
% Reuses the empty test code (see below) except that the number of frames
% is set to 0 (this is the only test for which the number of frames is not
% 100). This isolates the start-up and shut-down time of a throw.
\item Empty:
The repeating function is empty except for the necessary control code.
As other traversal tests add to this, it is the baseline for the group
as the cost comes from traversing over and unwinding a stack frame
that has no other interactions with the exception system.
\item Destructor:
The repeating function creates an object with a destructor before calling
itself.
Comparing this to the empty test gives the time to traverse over and
unwind a destructor.
\item Finally:
The repeating function calls itself inside a try block with a finally clause
attached.
Comparing this to the empty test gives the time to traverse over and
unwind a finally clause.
\item Other Handler:
The repeating function calls itself inside a try block with a handler that
does not match the raised exception, but is of the same kind of handler.
This means that the EHM has to check each handler, and continue
over all of them until it reaches the base of the stack.
Comparing this to the empty test gives the time to traverse over and
unwind a handler.
\end{itemize}
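As an illustrative sketch, the \Cpp repeating function for the Destructor
test might look like the following (the names and the raised exception type
are placeholders; as in the real tests, the destructor body is effectively
empty):
\begin{lstlisting}[language=C++]
#include <stdexcept>

// An object whose destructor does effectively no work; the empty
// asm block only keeps the destructor from being optimized away.
struct with_dtor {
	~with_dtor() { asm volatile (""); }
};

// Build up the given number of stack frames, each holding an
// object with a destructor, then raise; every destructor runs as
// the stack is unwound.
void unwind_destructor(unsigned int frames) {
	with_dtor object;
	if (frames) {
		unwind_destructor(frames - 1);
	} else {
		throw std::runtime_error("base of the stack");
	}
}
\end{lstlisting}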

\paragraph{Cross Try Statement}
This group of tests measures the cost of setting up exception handling
that is never used, because the exceptional case does not occur.
Tests repeatedly cross (enter, execute and leave) a try statement but never
perform a raise.
\begin{itemize}[nosep]
\item Handler:
The try statement has a handler (of the appropriate kind).
\item Finally:
The try statement has a finally clause.
\end{itemize}
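As a minimal sketch of what one crossing looks like in \Cpp (the real tests
repeat this inside the main loop), the Handler variant reduces to:
\begin{lstlisting}[language=C++]
#include <stdexcept>

// Cross (enter, execute and leave) a try statement without
// raising; the handler is set up and torn down but never runs.
bool cross_handler() {
	try {
		asm volatile ("");   // no actual work
	} catch (std::exception &) {
		return false;   // never reached: nothing is raised
	}
	return true;
}
\end{lstlisting}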

\paragraph{Conditional Matching}
This group measures the cost of conditional matching.
Only \CFA implements the language-level conditional match;
the other languages mimic it with an ``unconditional'' match (it still
checks the exception's type) and a conditional re-raise if the handler is
not supposed to handle that exception.

Here is the pattern shown in \CFA and \Cpp. Java and Python use the same
pattern as \Cpp, but with their own syntax.

\begin{minipage}{0.45\textwidth}
\begin{cfa}
try {
	...
} catch (exception_t * e ;
		should_catch(e)) {
	...
}
\end{cfa}
\end{minipage}
\begin{minipage}{0.55\textwidth}
\begin{lstlisting}[language=C++]
try {
	...
} catch (std::exception & e) {
	if (!should_catch(e)) throw;
	...
}
\end{lstlisting}
\end{minipage}
\begin{itemize}[nosep]
\item Match All:
The condition is always true. (Always matches or never re-raises.)
\item Match None:
The condition is always false. (Never matches or always re-raises.)
\end{itemize}

\paragraph{Resumption Simulation}
A slightly altered version of the Empty Traversal test is used when comparing
resumption to fix-up routines.
The handler, the actual resumption handler or the fix-up routine,
always captures a variable at the base of the loop,
and receives a reference to a variable at the raise site, either as a
field on the exception or an argument to the fix-up routine.
% I don't actually know why that is here but not anywhere else.
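For illustration, a sketch of this simulation in \Cpp might look like the
following, where the names and the exact signature are placeholders:
\begin{lstlisting}[language=C++]
#include <functional>

// The fix-up routine is passed down the stack explicitly instead
// of being found by a stack search. It can capture a variable at
// the base of the loop and receives a reference to a variable at
// the raise site.
void fixup_empty(unsigned int frames,
		const std::function<void(int &)> & fixup) {
	if (frames) {
		fixup_empty(frames - 1, fixup);
	} else {
		int at_raise_site = 17;   // variable at the raise site
		fixup(at_raise_site);     // "raise" by calling the routine
	}
}
\end{lstlisting}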

%\section{Cost in Size}
%Using exceptions also has a cost in the size of the executable.
%Although it is sometimes ignored
%
%There is a size cost to defining a personality function but the later problem
%is the LSDA which will be generated for every function.
%
%(I haven't actually figured out how to compare this, probably using something
%related to -fexceptions.)

\section{Results}
% First, introduce the tables.
\autoref{t:PerformanceTermination},
\autoref{t:PerformanceResumption}
and~\autoref{t:PerformanceFixupRoutines}
show the test results.
In cases where a feature is not supported by a language, the test is skipped
for that language and the result is marked N/A.
There are also cases where the feature is supported but measuring its
cost is impossible. This happened with Java, whose JIT optimizes
away the tests and cannot be stopped~\cite{Dice21}.
These tests are marked N/C.
To keep results in a consistent range (1 second to 1 minute is ideal;
going higher is better than going lower), N, the number of iterations of the
main loop in each test, varies between tests. It is also given in the
results and has a value in the millions.

An anomaly in some results came from \CFA's use of GCC nested functions.
These nested functions are used to create closures that can access stack
variables in their lexical scope.
However, when they do access such variables, they can cause the benchmark's
run time to increase by an order of magnitude.
The simplest solution is to make those values global variables instead
of function-local variables.
% Do we know if editing a global inside nested function is a problem?
Tests that had to be modified to avoid this problem have been marked
with a ``*'' in the results.

% Now come the tables themselves:
% You might need a wider window for this.

\begin{table}[htb]
\centering
\caption{Termination Performance Results (sec)}
\label{t:PerformanceTermination}
\begin{tabular}{|r|*{2}{|r r r r|}}
\hline
                       & \multicolumn{4}{c||}{AMD}         & \multicolumn{4}{c|}{ARM}  \\
\cline{2-9}
N\hspace{8pt}          & \multicolumn{1}{c}{\CFA} & \multicolumn{1}{c}{\Cpp} & \multicolumn{1}{c}{Java} & \multicolumn{1}{c||}{Python} &
                         \multicolumn{1}{c}{\CFA} & \multicolumn{1}{c}{\Cpp} & \multicolumn{1}{c}{Java} & \multicolumn{1}{c|}{Python} \\
\hline
Empty Traversal (1M)   & 23.0  & 9.6   & 17.6  & 23.4      & 30.6  & 13.6  & 15.5  & 14.7  \\
D'tor Traversal (1M)   & 48.1  & 23.5  & N/A   & N/A       & 64.2  & 29.2  & N/A   & N/A   \\
Finally Traversal (1M) & 3.2*  & N/A   & 17.6  & 29.2      & 3.9*  & N/A   & 15.5  & 19.0  \\
Other Traversal (1M)   & 3.3*  & 23.9  & 17.7  & 32.8      & 3.9*  & 24.5  & 15.5  & 21.6  \\
Cross Handler (1B)     & 6.5   & 0.9   & N/C   & 38.0      & 9.6   & 0.8   & N/C   & 32.1  \\
Cross Finally (1B)     & 0.8   & N/A   & N/C   & 44.6      & 0.6   & N/A   & N/C   & 37.3  \\
Match All (10M)        & 30.5  & 20.6  & 11.2  & 3.9       & 36.9  & 24.6  & 10.7  & 3.1   \\
Match None (10M)       & 30.6  & 50.9  & 11.2  & 5.0       & 36.9  & 71.9  & 10.7  & 4.1   \\
\hline
\end{tabular}
\end{table}

\begin{table}[htb]
\centering
\caption{Resumption Performance Results (sec)}
\label{t:PerformanceResumption}
\begin{tabular}{|r||r||r|}
\hline
N\hspace{8pt}
                        & AMD     & ARM  \\
\hline
Empty Traversal (10M)   & 1.4     & 1.2  \\
D'tor Traversal (10M)   & 1.8     & 1.0  \\
Finally Traversal (10M) & 1.8     & 1.0  \\
Other Traversal (10M)   & 22.6    & 25.8 \\
Cross Handler (1B)      & 9.0     & 11.9 \\
Match All (100M)        & 2.3     & 3.2  \\
Match None (100M)       & 3.0     & 3.8  \\
\hline
\end{tabular}
\end{table}

\begin{table}[htb]
\centering
\small
\caption{Resumption/Fixup Routine Comparison (sec)}
\label{t:PerformanceFixupRoutines}
\setlength{\tabcolsep}{5pt}
\begin{tabular}{|r|*{2}{|r r r r r|}}
\hline
            & \multicolumn{5}{c||}{AMD}     & \multicolumn{5}{c|}{ARM}  \\
\cline{2-11}
N\hspace{8pt}       & \multicolumn{1}{c}{Raise} & \multicolumn{1}{c}{\CFA} & \multicolumn{1}{c}{\Cpp} & \multicolumn{1}{c}{Java} & \multicolumn{1}{c||}{Python} &
              \multicolumn{1}{c}{Raise} & \multicolumn{1}{c}{\CFA} & \multicolumn{1}{c}{\Cpp} & \multicolumn{1}{c}{Java} & \multicolumn{1}{c|}{Python} \\
\hline
Resume Empty (10M)  & 1.4 & 1.4 & 15.4 & 2.3 & 178.0  & 1.2 & 1.2 & 8.9 & 1.2 & 118.4 \\
\hline
\end{tabular}
\end{table}

% Now discuss the results in the tables.
One result not directly related to \CFA but important to keep in mind is that,
for exceptions, the standard intuition about which languages should go
faster often does not hold.
For example, there are a few cases where Python out-performs
\CFA, \Cpp and Java.
% To be exact, the Match All and Match None cases.
The most likely explanation is that
the generally faster languages have made ``common cases fast" at the expense
of the rarer cases. Since exceptions are considered rare, they are made
expensive to help speed up common actions, such as entering and leaving try
statements.
Python, on the other hand, while generally slower than the other languages,
uses exceptions more and has not sacrificed their performance.
In addition, languages with high-level representations have a much
easier time scanning the stack as there is less to decode.

As stated,
the performance tests are not attempting to show \CFA has a new competitive
way of implementing exception handling.
The only performance requirement is to ensure the \CFA EHM has reasonable
performance for prototyping.
Although that may be hard to quantify exactly, I believe it has succeeded
in that regard.
Details on the different test cases follow.

\subsection{Termination \texorpdfstring{(\autoref{t:PerformanceTermination})}{}}

\begin{description}
\item[Empty Traversal]
\CFA is slower than \Cpp, but is still faster than the other languages.
This result is to be expected,
as \CFA's implementation is the closest to \Cpp's.

\item[D'tor Traversal]
Running destructors causes a huge slowdown in the two languages that support
them. \CFA has a higher proportionate slowdown but it is similar to \Cpp's.
Considering the amount of work done in destructors is effectively zero
(an assembly comment), the cost
must come from the change of context required to run the destructor.

\item[Finally Traversal]
Performance is similar to Empty Traversal in all languages that support finally
clauses. Only Python shows a change larger than random noise in
its run time, and even that is not large.
Despite the similarity between finally clauses and destructors,
finally clauses seem to avoid the run-time spike that destructors cause.
Possibly some optimization removes the cost of changing contexts.

\item[Other Traversal]
For \Cpp, stopping to check if a handler applies seems to be about as
expensive as stopping to run a destructor.
This results in a significant jump.

Other languages experience a small increase in run time.
The small increase likely comes from running the checks themselves;
these languages may avoid the spike because they do not have the same kind
of overhead for switching to the check's context.

\item[Cross Handler]
Here, \CFA falls behind \Cpp by a much more significant margin.
This is likely due to the fact that \CFA has to insert two extra function
calls, while \Cpp does not have to execute any other instructions.
Python is much further behind.

\item[Cross Finally]
\CFA's performance now matches \Cpp's from Cross Handler.
If the code from the finally clause, which is just an asm comment, is
inlined, then there are no additional instructions
to execute when exiting the try statement normally.

\item[Conditional Match]
Both of the conditional matching tests can be considered on their own.
However, for evaluating the value of conditional matching itself, the
comparison of the two sets of results is useful.
Consider the massive jump in run time for \Cpp going from match all to match
none, which none of the other languages have.
Some strange interaction is causing run time to more than double for doing
twice as many raises.
Java and Python avoid this problem and have similar run time for both tests,
possibly through resource reuse or their program representation.
However, \CFA, although built like \Cpp, avoids the problem as well.
This matches
the pattern of the conditional match, which makes the two execution paths
very similar.

\end{description}

\subsection{Resumption \texorpdfstring{(\autoref{t:PerformanceResumption})}{}}

Moving on to resumption, there is one general note:
resumption is \textit{fast}. The only test where it fell
behind termination is Cross Handler.
In every other case, the number of iterations had to be increased by a
factor of 10 to get the run time in an appropriate range
and in some cases resumption still took less time.

% I tried \paragraph and \subparagraph, maybe if I could adjust spacing
% between paragraphs those would work.
\begin{description}
\item[Empty Traversal]
See above for the general speed-up notes.
This result is not surprising as resumption's linked-list approach
means that traversing over stack frames without a resumption handler is
$O(1)$.

\item[D'tor Traversal]
Resumption does have the same spike in run time that termination has.
The run time is actually very similar to Finally Traversal.
As resumption does not unwind the stack, both destructors and finally
clauses are run while walking down the stack during the recursive returns.
So it follows their performance is similar.

\item[Finally Traversal]
Same as D'tor Traversal,
except termination did not have a spike in run time on this test case.

\item[Other Traversal]
Traversing across handlers reduces resumption's advantage as it actually
has to stop and check each one.
Resumption still came out ahead (adjusting for iterations), but by much less
than in the other cases.

\item[Cross Handler]
The only test case where resumption could not keep up with termination,
although the difference is not as significant as many other cases.
It is simply a matter of where the costs come from:
both termination and resumption have some work to set up or tear down a
handler. It just so happens that resumption's work is slightly slower.

\item[Conditional Match]
Resumption shows a slight slowdown if the exception is not matched
by the first handler, which follows from the fact that the second handler
now has to be checked. However, the difference is not large.

\end{description}

\subsection{Resumption/Fixup \texorpdfstring{(\autoref{t:PerformanceFixupRoutines})}{}}

Finally, there are the results of the resumption/fixup routine comparison.
These results are surprisingly varied. It is possible that creating a closure
has more to do with performance than passing the argument through layers of
calls.
At 100 stack frames, resumption and manual fixup routines have similar
performance in \CFA.
More experiments could try to tease out the exact trade-offs,
but the prototype's only performance goal is to be reasonable.
It is already in that range, and \CFA's fixup routine simulation is
one of the faster simulations as well.
Plus, exceptions add features and remove syntactic overhead,
so even at similar performance, resumptions have advantages
over fixup routines.
