# Changeset eaeca5f

Timestamp:
Aug 29, 2021, 11:46:13 AM (5 months ago)
Branches:
jacob/cs343-translation, master
Children:
75f8e04
Parents:
1d402be (diff), cfbab07 (diff)
Note: this is a merge changeset, the changes displayed below correspond to the merge itself.
Use the (diff) links above to see all the changes relative to each parent.
Message:

Merge branch 'andrew-mmath' into 'master', latest round of updates to the thesis.

Location:
doc/theses/andrew_beach_MMath
Files:
8 edited

• ## doc/theses/andrew_beach_MMath/conclusion.tex

 r1d402be \chapter{Conclusion} \label{c:conclusion}
% Just a little knot to tie the paper together.
In the previous chapters, this thesis presents the design and implementation of \CFA's exception handling mechanism (EHM).
Both the design and implementation are based on tools and techniques developed for other programming languages, but they were adapted to better fit \CFA's feature set and add a few features that do not exist in other EHMs, including conditional matching, default handlers for unhandled exceptions, implicitly changing resumption into termination in the resumption default handler, and cancellation through coroutines and threads back to the program main stack.
The resulting features cover all of the major use cases of the most popular EHMs, plus new capabilities, such as virtuals independent of traditional objects.
The \CFA project's test suite has been expanded to test the EHM, and the implementation's performance has been compared to other implementations with a small set of targeted micro-benchmarks.
The results, while not cutting edge, are good enough for prototyping, which is \CFA's current stage of development.
This initial EHM is a valuable new feature for \CFA in its own right, but also serves as a tool and motivation for other developments in the language.
• ## doc/theses/andrew_beach_MMath/existing.tex

 r1d402be Only those \CFA features pertaining to this thesis are discussed.
A familiarity with C or C-like languages is assumed.

\CFA has extensive overloading, allowing multiple definitions of the same name to be defined~\cite{Moss18}.
\begin{cfa}
char i; int i; double i;
int f(); double f();
void g( int ); void g( double );
\end{cfa}
This feature requires name mangling so the assembly symbols are unique for different overloads.
For compatibility with names in C, there is also a syntax

int && rri = ri;
rri = 3;
&ri = &j;	// rebindable
ri = 5;
\end{cfa}
\end{minipage}

References are intended to be used when the indirection of a pointer is required, but the address is not as important as the value and dereferencing is the common usage.
Mutable references may be assigned to by converting them to a pointer with a @&@ and then assigning a pointer to them, as in @&ri = &j;@ above.

\section{Operators}
\CFA implements operator overloading by providing special names, where operator expressions are translated into function calls using these names.
An operator name is created by taking the operator symbols and joining them with @?@s to show where the arguments go.
This syntax makes it easy to tell the difference between prefix operations (such as @++?@) and post-fix operations (@?++@).
As an example, here are the addition and equality operators for a point type.
\begin{cfa}
point ?+?(point a, point b) {
	return point{a.x + b.x, a.y + b.y};
}
\end{cfa}
Note these special names are not limited to builtin operators, and hence, may be used with arbitrary types.
\begin{cfa}
double ?+?( int x, point y );	// arbitrary types
\end{cfa}
% Some "near misses", that do not match an operator form but look like
% they may have been supposed to, will generate a warning but otherwise they are
% left alone.
Because operators are never part of the type definition, they may be added at any time, including on built-in types.
Note this syntax works effectively by a textual transformation; the compiler converts all operators into functions and then resolves them normally.
This means any combination of types may be used, although nonsensical ones (like @double ?==?(point, int);@) are discouraged.
This feature is also used for all builtin operators, although those are implicitly provided by the language.

%\subsection{Constructors and Destructors}
In \CFA, constructors and destructors are operators, which means they are functions with special operator names rather than type names as in \Cpp.
While constructors and destructors are normally called implicitly by the compiler, the special operator names allow explicit calls.
% Placement new means that this is actually equivalent to C++.
The special name for a constructor is @?{}@, which comes from the
\begin{cfa}
struct Example { ... };
void ?{}(Example & this) { ... }
{
	Example a;
	Example b = {};
}
void ?{}(Example & this, char first, int num) { ...
}
{
	Example c = {'a', 2};
}
\end{cfa}
Both @a@ and @b@ are initialized with the first constructor, @b@ because of the explicit call and @a@ implicitly.
@c@ is initialized with the second constructor.
Currently, there is no general way to skip initialization.
% I don't use @= anywhere in the thesis.

% I don't like the \^{} symbol but $^\wedge$ isn't better.
Similarly, destructors use the special name @^?{}@ (the @^@ has no special meaning).
% These are normally called implicitly on a variable when it goes out
% of scope. They can be called explicitly as well.
\begin{cfa}
void ^?{}(Example & this) { ... }
Example d;
^?{}(d);	// explicit destructor call
{
	Example e;
} // Implicit call of ^?{}(e);
\end{cfa}

The global definition of @do_once@ is ignored, however if @quadruple@ took a @double@ argument, then the global definition would be a better match.
\todo{cite Aaron's thesis (maybe)}
To avoid typing long lists of assertions, constraints can be collected into a convenient package called a @trait@, which can then be used in an assertion instead of the individual constraints.
\begin{cfa}
node(T) * next;
T * data;
};
node(int) inode;
\end{cfa}

};
CountUp countup;
for (10) sout | resume(countup).next;	// print 10 values
\end{cfa}
Each coroutine has a @main@ function, which takes a reference to a coroutine object and returns @void@.
\begin{cfa}
void main(CountUp & this) {
	for (unsigned int next = 0 ; true ; ++next) {
		this.next = next;
		suspend;$\label{suspend}$
	}
}
\end{cfa}
In this function, or functions called by this function (helper functions), the @suspend@ statement is used to return execution to the coroutine's caller without terminating the coroutine's function.
A coroutine is resumed by calling the @resume@ function, \eg @resume(countup)@.
The first resume calls the @main@ function at the top.
Thereafter, resume calls continue a coroutine in the last suspended function after the @suspend@ statement, in this case @main@ line~\ref{suspend}.
The @resume@ function takes a reference to the coroutine structure and returns the same reference.
The return value allows easy access to communication variables defined in the coroutine object.
For example, the @next@ value for coroutine object @countup@ is both generated and collected in the single expression: @resume(countup).next@.
Here is a simple example in a helper function:
\begin{cfa}
unsigned int get_next(CountUp & this) {
	return resume(this).next;
}
\end{cfa}
When the main function returns, the coroutine halts and can no longer be resumed.

\subsection{Monitor and Mutex Parameter}
exclusion on a monitor object by qualifying an object reference parameter with @mutex@.
\begin{cfa}
void example(MonitorA & mutex argA, MonitorB & mutex argB);
\end{cfa}
When the function is called, it implicitly acquires the monitor lock for all of the mutex parameters without deadlock.
This semantics means all functions with

{
	StringWorker stringworker;	// fork thread running in "main"
} // Implicit call to join(stringworker), waits for completion.
\end{cfa}
The thread main is where a new thread starts execution after a fork operation
• ## doc/theses/andrew_beach_MMath/features.tex

 r1d402be \paragraph{Raise}
The raise is the starting point for exception handling, by raising an exception, which passes it to the EHM.

\paragraph{Handle}
The primary purpose of an EHM is to run some user code to handle a raised exception.
This code is given, along with some other information, in a handler.
A handler has three common features: the previously mentioned user code, a region of code it guards, and an exception label/condition that matches against the raised exception.
Only raises inside the guarded region raising exceptions that match the label can be handled by a given handler.
The @try@ statements of \Cpp, Java and Python are common examples.
All three show the common features of guarded region, raise, matching and handler.
\begin{cfa}
try {				// guarded region
	...
	throw exception;	// raise
	...
} catch( exception ) {	// matching condition, with exception label
	...			// handler code
}
\end{cfa}
also show another common feature of handlers: they are grouped by the guarded region.

\subsection{Propagation}
After an exception is raised comes what is usually the biggest step for the EHM: finding and setting up the handler for execution.
The propagation from raise to handler can be broken up into three different tasks: searching for a handler, matching against the handler and installing the handler.

\paragraph{Searching}
The EHM begins by searching for handlers that might be used to handle the exception.
The search is restricted to handlers that have the raise site in their guarded region.
The search includes handlers in the current function, as well as any in

\paragraph{Matching}
Each handler found is matched with the raised exception.
The exception label defines a condition that is used with the exception to decide if there is a match or not.
% In languages where the first match is used, this step is intertwined with
% searching; a match check is performed immediately after the search finds

different course of action for this case.
This situation only occurs with unchecked exceptions, as checked exceptions (such as in Java) are guaranteed to find a matching handler.
The unhandled action is usually very general, such as aborting the program.

A handler labeled with any given exception can handle exceptions of that type or any child type of that exception.
The root of the exception hierarchy (here \code{C}{exception}) acts as a catch-all, leaf types catch single types, and the exceptions in the middle can be used to catch different groups of related exceptions.
This system has some notable advantages, such as multiple levels of grouping, the ability for libraries to add new exception types, and the isolation between different sub-hierarchies.
This design is used in \CFA even though it is not an object-orientated

For effective exception handling, additional information is often passed from the raise to the handler and back again.
So far, only communication of the exception's identity is covered.
A common communication method for passing more information is putting fields into the exception instance and giving the handler access to them.
% You can either have pointers/references in the exception, or have p/rs to
% the exception when it doesn't have to be copied.
Using reference fields pointing to data at the raise location allows data at the raise location to be updated, passing information in both directions.

\section{Virtuals}
\label{s:virtuals}
Virtual types and casts are not part of \CFA's EHM, nor are they required for an EHM.
However, one of the best ways to support an exception hierarchy is via a virtual hierarchy and dispatch system.
Ideally, the virtual system would have been part of \CFA before the work on exception handling began, but unfortunately it was not.
Hence, only the features and framework needed for the EHM were designed and implemented for this thesis.
Other features were considered to ensure that the structure could accommodate other desirable features in the future, but are not implemented.
The rest of this section only discusses the implemented subset of the virtual system design.

The virtual system supports multiple ``trees'' of types.
Each tree is number of children.
Any type that belongs to any of these trees is called a virtual type.
For example, the following hypothetical syntax creates two virtual-type trees.
\begin{flushleft}
\lstDeleteShortInline@
\begin{tabular}{@{\hspace{20pt}}l@{\hspace{20pt}}l}
\begin{cfa}
vtype V0, V1(V0), V2(V0);
vtype W0, W1(W0), W2(W1);
\end{cfa}
&
\raisebox{-0.6\totalheight}{\input{vtable}}
\end{tabular}
\lstMakeShortInline@
\end{flushleft}
% A type's ancestors are its parent and its parent's ancestors.
% The root type has no ancestors.
% A type's descendants are its children and its children's descendants.
Every virtual type (tree node) has a pointer to a virtual table with a unique @Id@ and a list of virtual members (see \autoref{s:VirtualSystem} for details).
Children inherit their parent's list of virtual members but may add and/or replace members.
For example,
\begin{cfa}
vtable W0 | { int ?
• ## doc/theses/andrew_beach_MMath/future.tex

 r1d402be % Now take a step back and explain what exceptions are generally.
Exception handling provides dynamic inter-function control flow.
A language's EHM is a combination of language syntax and run-time components that are used to construct, raise, propagate and handle exceptions, providing all of that control flow.
Exceptions are an active mechanism for replacing passive error/return codes and return unions (Go and Rust).
There are two forms of exception handling covered in this thesis: termination, which acts as a multi-level return, and resumption, which is a dynamic function call.
Termination handling is much more common, to the extent that it is often seen as the only form of handling.
% WHY: Mention other forms of continuation and \cite{CommonLisp} here?
Exception handling relies on the concept of nested functions to create handlers that deal with exceptions.

All types of exception handling link a raise with a handler.
Both operations are usually language primitives, although raises can be treated as a primitive function that takes an exception argument.
Handlers are more complex, as they are added to and removed from the stack during execution, must specify what they can handle, and give the code to handle the exception.
Exceptions work with different execution models, but for the descriptions that follow, a simple call stack, with functions added and removed in a first-in-last-out order, is assumed.
Termination exception handling searches the stack for a handler, unwinds the stack to the frame containing the matching handler, and calls the handler at the top of the stack.
The handler is run inside the function that defined it, and when it finishes, it returns control to that function.
\begin{center}
\begin{tabular}[t]{ll}
\begin{lstlisting}[aboveskip=0pt,belowskip=0pt,language=CFA,{moredelim=**[is][\color{red}]{@}{@}}]
void f( void (*hp)() ) { hp(); }
void g( void (*hp)() ) { f( hp ); }
void h( int @i@, void (*hp)() ) {
	void @handler@() {	// nested
		printf( "%d\n", @i@ );
	}
	if ( i == 1 ) hp = handler;
	if ( i > 0 ) h( i - 1, hp );
	else g( hp );
}
h( 2, 0 );
\end{lstlisting}
&
\raisebox{-0.5\totalheight}{\input{handler}}
\end{tabular}
\input{callreturn}
\end{center}
The nested function @handler@ in the second stack frame is explicitly passed to function @f@.
When this handler is called in @f@, it uses the parameter @i@ in the second stack frame, which is accessible through an implicit lexical-link pointer.
Setting @hp@ in @h@ at different points in the recursion results in invoking a different handler.
Exception handling extends this idea by eliminating explicit handler passing, and instead performing a stack search for a handler that matches some criteria (a conditional dynamic call), then calling the handler at the top of the stack.
It is the runtime search, $O(N)$, that differentiates an EHM call (raise) from a normal dynamic call, $O(1)$, via a function or virtual-member pointer.
\begin{center}
\input{termination}
\end{center}
Note, since the handler can reference variables in @h@, @h@ must remain on the stack for the handler call.
After the handler returns, control continues after the lexical location of the handler in @h@ (static return)~\cite[p.~108]{Tennent77}.
Unwinding allows recovery at any previous function on the stack, skipping any functions between it and the function containing the matching handler.

Resumption exception handling searches the stack for a handler, does \emph{not} unwind the stack, and calls the handler at the top of the stack.
The handler is run on top of the existing stack, often as a new function or closure capturing the context in which the handler was defined.
After the handler has finished running, it returns control to the function that performed the raise, usually continuing after the raise.
\begin{center}
\input{resumption}
\end{center}
After the handler returns, control continues after the resume in @f@ (dynamic return).
Not unwinding allows fix-up of the problem in @f@ by any previous function on the stack, without disrupting the current set of stack frames.

Although a powerful feature, exception handling tends to be complex to set up and expensive to use, so it is often limited to unusual or ``exceptional'' cases.
The classic example is error handling, where exceptions are used to remove error-handling logic from the main execution path, while paying most of the cost only when the error actually occurs.

some of the underlying tools used to implement and express exception handling in other languages are absent in \CFA.
Still, the resulting syntax resembles that of other languages:
\begin{cfa}
try {
	...
	T * object = malloc(request_size);
	if (!object) {
		throw OutOfMemory{fixed_allocation, request_size};
	}
	...
} catch (OutOfMemory * error) {
	...
}
\end{cfa}
% A note that yes, that was a very fast overview.
The design and implementation of all of \CFA's EHM's features are
% The current state of the project and what it contributes.
The majority of the \CFA EHM is implemented in \CFA, except for a small amount of assembler code.
In addition, a suite of tests and performance benchmarks were created as part of this project.
The \CFA implementation techniques are generally applicable in other programming languages, and much of the design is as well.
Some parts of the EHM use features unique to \CFA, and hence, are harder to replicate in other programming languages.

% Talk about other programming languages.
Three well known programming languages with EHMs, C++, Java and Python, are examined in the performance work.
However, these languages focus on termination exceptions, so there is no comparison with resumption.

The contributions of this work are:
\begin{enumerate}
\item Designing \CFA's exception handling mechanism, adapting designs from other programming languages, and creating new features.
\item Implementing stack unwinding for the \CFA EHM, including updating the \CFA compiler and run-time environment to generate and execute the EHM code.
\item Designing and implementing a prototype virtual system.
% I think the virtual system and per-call site default handlers are the only
% "new" features, everything else is a matter of implementation.
\item Creating tests and performance benchmarks to compare with EHMs in other languages.
\end{enumerate}

%\todo{I can't figure out a good lead-in to the roadmap.}
The thesis is organized as follows.
The next section and parts of \autoref{c:existing} cover existing EHMs.
New \CFA EHM features are introduced in \autoref{c:features}, covering their usage and design.
That is followed by the implementation of these features in \autoref{c:implement}.
Performance results are presented in \autoref{c:performance}.
Possibilities for extending this project are discussed in \autoref{c:future}.
Finally, the project is summarized in \autoref{c:conclusion}.

\section{Background}
\label{s:background}
Exception handling is a well examined area in programming languages, with papers on the subject dating back to the 70s~\cite{Goodenough75}.
Early exceptions were often treated as signals, which carried no information except their identity.
Ada~\cite{Ada} still uses this system.

The modern flag-ship for termination exceptions is \Cpp, which added them in its first major wave of non-object-orientated features in 1990.
% https://en.cppreference.com/w/cpp/language/history
While many EHMs have special exception types, \Cpp has the ability to use any type as an exception.
However, this generality is not particularly useful, and has been pushed aside in favour of classes, with a convention of inheriting from \code{C++}{std::exception}.
While \Cpp has a special catch-all syntax, @catch(...)@, there is no way to discriminate the caught exception's type, so nothing can be done with the caught value because nothing is known about it.
Instead, the base exception-type \code{C++}{std::exception} is defined with common functionality (such as the ability to print a message when the exception is raised but not caught), and all exceptions have this functionality.
Having a root exception-type seems to be the standard now, as the guaranteed functionality is worth any loss in flexibility from limiting exception types to classes.

Java~\cite{Java} was the next popular language to use exceptions.
Its exception system largely reflects that of \Cpp, except it requires exceptions to be a subtype of \code{Java}{java.lang.Throwable} and it uses checked exceptions.
Checked exceptions are part of a function's interface, defining all exceptions it or its called functions raise.
Using this information, it is possible to statically verify if a handler exists for every raised exception, \ie no uncaught exceptions.
Making exception information explicit improves clarity and safety, but can slow down or restrict programming.
For example, programming high-order functions becomes much more complex if the argument functions could raise exceptions.
However, as odd as it may seem, the worst problems are rooted in the simple inconvenience of writing and updating exception signatures.
This has caused Java programmers to develop multiple programming ``hacks'' to circumvent checked exceptions, negating their advantages.
One particularly problematic example is the ``catch-and-ignore'' pattern, where an empty handler is used to handle an exception without doing any recovery or repair.
In theory, that could be good enough to properly handle the exception, but more often it is used to ignore an exception that the programmer does not feel is worth the effort of handling, for instance if they do not believe it will ever be raised.
If they are incorrect, the exception is silenced, while in a similar situation with unchecked exceptions the exception would at least activate the language's unhandled-exception code (usually program abort with an error message).

%\subsection
Resumption exceptions are less popular, although resumption is as old as termination; hence, few programming languages have implemented them.
% http://bitsavers.informatik.uni-stuttgart.de/pdf/xerox/parc/techReports/
%   CSL-79-3_Mesa_Language_Manual_Version_5.0.pdf
Mesa~\cite{Mesa} is one programming language that did.
Experience with Mesa is quoted as being one of the reasons resumptions were not included in the \Cpp standard.
% https://en.wikipedia.org/wiki/Exception_handling
As a result, resumption has been ignored in main-stream programming languages.
However, ``what goes around comes around'' and resumption is being revisited now (like user-level threading).
Since then, resumption has been ignored in main-stream programming languages.
However, resumption is being revisited in the context of decades of other developments in programming languages.
While rejecting resumption may have been the right decision in the past, the situation has changed since then.
Some developments, such as the functional-programming equivalent to resumption, algebraic effects~\cite{Zhang19}, are enjoying success.
A complete reexamination of resumption is beyond this thesis, but its re-emergence is enough to try it in \CFA.
% Especially considering how much easier they are to implement than
% termination exceptions and how much Peter likes them.

%\subsection
Functional languages tend to use other solutions for their primary error-handling mechanism, but exception-like constructs still appear.
Termination appears in the error construct, which marks the result of an expression as an error; then the result of any expression that tries to use it also results in an error, and so on until an appropriate handler is reached.
Resumption appears in algebraic effects, where a function dispatches its side-effects to its caller for handling.
%\subsection
More recently, exceptions seem to be vanishing from newer programming languages, replaced by ``panic''.
In Rust~\cite{Rust}, a panic is just a program-level abort that may be implemented by unwinding the stack, as in termination exception handling.
% https://doc.rust-lang.org/std/panic/fn.catch_unwind.html
Go's panic~\cite{Go} is very similar to a termination exception, except it only supports a catch-all by calling \code{Go}{recover()}, simplifying the interface at the cost of flexibility.

%\subsection
While exception handling's most common use cases are in error handling, here are some other ways to handle errors, with comparisons to exceptions.
\begin{itemize}
\item\emph{Error Codes}:
This pattern has a function return an enumeration (or just a set of fixed values) to indicate if an error has occurred and possibly which error it was.
Error codes mix exceptional/error and normal values, enlarging the range of possible return values.
This can be addressed with multiple return values (or a tuple) or a tagged union.
However, the main issue with error codes is forgetting to check them, which leads to an error being quietly and implicitly ignored.
Some new languages and tools try to issue warnings when an error code is discarded to avoid this problem.
Checking error codes also bloats the main execution path, especially if the error is not handled immediately and has to be passed through multiple functions before it is addressed.

\item\emph{Special Return with Global Store}:
Similar to the error-code pattern, but the function itself only returns that there was an error and stores the reason for the error in a fixed global location.
For example, many routines in the C standard library only return some error value (such as -1 or a null pointer) and the error code is written into the standard variable @errno@.
This approach avoids the multiple-results issue encountered with straight error codes, but otherwise has the same disadvantages and more.
Every function that reads or writes to the global store must agree on all possible errors, and managing it becomes more complex with concurrency.
\item\emph{Return Union}:
This pattern combines the function result and error code in a tagged union, so that one type can be used everywhere in error-handling code.
It is very popular in any functional or semi-functional language with primitive support for tagged unions (or algebraic data types).
% We need listing Rust/rust to format code snippets from it.
% Rust's \code{rust}{Result}
The main advantage is that an arbitrary object can be used to represent an error, so it can include a lot more information than a simple error code.
The disadvantages include that the union still has to be checked along the main execution path and, without primitive tagged unions, proper usage can be hard to enforce.

\item\emph{Handler Functions}:
This pattern associates errors with functions.
On error, the function that produced the error calls another function to handle it.
The handler function can be provided locally (passed in as an argument, either directly or as a field of a structure/object) or globally (a global variable).
C++ uses this approach as its fallback system if exception handling fails, such as \snake{std::terminate_handler} and, for a time, \snake{std::unexpected_handler}.
Handler functions work a lot like resumption exceptions, but without the dynamic search for a handler.
Since setting up the handler can be more complex/expensive, especially when the handler has to be passed through multiple layers of function calls, but calling it is cheaper (constant time), they are more suited to more frequent (less exceptional) situations.
% The exception being global handlers if they rarely change, as the time
% in both cases shrinks towards zero.
\end{itemize}

%\subsection
Because of their cost, exceptions are rarely used for hot paths of execution.
Hence, there is an element of self-fulfilling prophecy, as implementation techniques have focused on making exceptions cheap to set up, happily making them expensive to use in exchange.
This difference is less important in higher-level scripting languages, where using exceptions for other tasks is more common.
An iconic example is Python's \code{Python}{StopIteration} exception, which is thrown by an iterator to indicate that it is exhausted.
When paired with Python's iterator-based for-loop, this exception is thrown every time the end of a loop is reached.
\todo{Cite Python StopIteration and for-each loop.}
% https://docs.python.org/3/library/exceptions.html#StopIteration
Tests were run in \CFA, \Cpp, Java and Python.
In addition, there are two sets of tests for \CFA: one with termination and one with resumption.

\Cpp is the most comparable language because both it and \CFA use the same framework, libunwind.
In fact, the comparison is almost entirely in quality of implementation.
Specifically, \CFA's EHM has had significantly less time to be optimized and does not generate its own assembly.
It does have a slight advantage in that \Cpp has to do some extra bookkeeping to support its utility functions, but otherwise \Cpp should have a significant advantage.

Java, a popular language with similar termination semantics, is implemented in a very different environment: a virtual machine with garbage collection.
It also implements the finally clause on try blocks, allowing for a direct feature-to-feature comparison.
As with \Cpp, Java's implementation is mature and has more optimizations and extra features as compared to \CFA.

Python is used as an alternative comparison because of the \CFA EHM's current performance goals, which is to not be prohibitively slow while the features are designed and examined.
Python has similar performance goals for creating quick scripts, and its wide use suggests it has achieved those goals.

Unfortunately, there are no comparable modern programming languages with resumption exceptions.
Even the older programming languages with resumption seem to be notable only for having resumption.
Instead, resumption is compared to its simulation in other programming languages: fixup functions that are explicitly passed into a function.
% So instead, resumption is compared to a less similar but much more familiar
% feature, termination exceptions.

All tests are run inside a main loop that repeatedly performs a test.
This approach avoids start-up or tear-down time from affecting the timing results.
The number of times the loop is run is configurable from the command line; the number used in the timing runs is given with the results per test.
The Java tests run the main loop 1000 times before beginning the actual test to ``warm up'' the JVM.
% All other languages are precompiled or interpreted.

Timing is done internally, with time measured immediately before and after the test loop.

The exceptions used in these tests are always based off of the base exception for the language.
This requirement minimizes performance differences based on the object model used to represent the exception.
For example, empty inline assembly blocks are used in \CFA and \Cpp to prevent excessive optimizations while adding no actual work.
When collecting data, each test is run eleven times.
The top three and bottom three results are discarded and the remaining five values are averaged.
The tests are run with the latest (still pre-release) \CFA compiler, using gcc-10 as a backend.
g++-10 is used for \Cpp.
Java tests are compiled and run with version 11.0.11.
Python used version 3.8.
\todo{Get patch versions for python, gcc and g++.}

The machines used to run the tests are:
\begin{itemize}[nosep]
\item ARM 2280 Kunpeng 920 48-core 2$\times$socket \lstinline{@} 2.6 GHz running Linux v5.11.0-25
\item AMD 6380 Abu Dhabi 16-core 4$\times$socket \lstinline{@} 2.5 GHz running Linux v5.11.0-25
\end{itemize}
These machines represent the two major families of hardware architecture, allowing implementation and architectural effects to be distinguished.

% We don't use catch-alls but if we did:
% Catch-alls are done by catching the root exception type (not using \Cpp's
% \code{C++}{catch(...)}).

\section{Tests}
The following tests should provide a guide as to where the EHM's costs are found.

\paragraph{Stack Traversal}
This group measures the cost of traversing the stack (and in termination, unwinding it).
Inside the main loop is a call to a recursive function.
This function calls itself F times before raising an exception.
F is configurable from the command line, but is usually 100.
This builds up many stack frames, and any contents they may have, before the raise.
The exception is always handled at the base of the stack.
For example, the Empty test for \CFA resumption looks like:
\begin{cfa}
void unwind_empty(unsigned int frames) {
	if (frames) {
		unwind_empty(frames - 1);
	} else {
		throwResume (empty_exception){&empty_vt};
	}
}
\end{cfa}
Other test cases have additional code around the recursive call to add something besides simple stack frames to the stack.
Note that both termination and resumption have to traverse over the stack, but only termination has to unwind it.
\begin{itemize}[nosep]
% \item None:
% Reuses the empty test code (see below) except that the number of frames
% is set to 0 (this is the only test for which the number of frames is not
% 100). This isolates the start-up and shut-down time of a throw.
\item Empty:
The repeating function is empty except for the necessary control code.
As other traversal tests add to this, it is the baseline for the group, as the cost comes from traversing over and unwinding a stack frame that has no other interactions with the exception system.
\item Destructor:
The repeating function creates an object with a destructor before calling itself.
\begin{cfa}
	WithDestructor object;
	unwind_destructor(frames - 1);
\end{cfa}
Comparing this to the empty test gives the time to traverse over and/or unwind a destructor.
\item Finally:
The repeating function calls itself inside a try block with a finally clause attached.
\begin{cfa}
	try {
		unwind_finally(frames - 1);
	} finally {}
\end{cfa}
Comparing this to the empty test gives the time to traverse over and/or unwind a finally clause.
\item Other Handler:
\begin{cfa}
	try {
		unwind_other(frames - 1);
	} catch (not_raised_exception *) {}
\end{cfa}
The repeating function calls itself inside a try block with a handler that does not match the raised exception, but is of the same kind of handler.
This means that the EHM has to check each handler, but continues over all of them until it reaches the base of the stack.
Comparing this to the empty test gives the time to traverse over and/or unwind a handler.
\end{itemize}

\paragraph{Cross Try Statement}
This group of tests measures the cost of setting up exception handling when it is not used (because the exceptional case did not occur).
Tests repeatedly cross (enter, execute and leave) a try statement but never perform a raise.
\begin{itemize}[nosep]
\item Handler:
The try statement has a handler (of the appropriate kind).
\item Finally:
The try statement has a finally clause.
\end{itemize}

\paragraph{Conditional Match}
This group measures the cost of conditional matching.
Only \CFA implements the language-level conditional match; the other languages mimic it with an ``unconditional'' match (it still checks the exception's type) and a conditional re-raise if it is not supposed to handle that exception.
The pattern is shown in \CFA and \Cpp; Java and Python use the same pattern as \Cpp, but with their own syntax.

\begin{minipage}{0.45\textwidth}
\begin{cfa}
try {
	throw_exception();
} catch (exception_t * e ; should_catch(e)) {
	...
}
\end{cfa}
\end{minipage}
\begin{minipage}{0.55\textwidth}
\begin{lstlisting}[language=C++]
try {
	throw_exception();
} catch (std::exception & e) {
	if (!should_catch(e)) throw;
	...
}
\end{lstlisting}
\end{minipage}
\begin{itemize}[nosep]
\item Match All: the condition is always true.
\item Match None: the condition is always false.
\end{itemize}
\medskip

\noindent
All omitted test code for other languages is functionally identical to the \CFA tests or simulated, and available online~\cite{CforallExceptionBenchmarks}.

\paragraph{Resumption Simulation}
A slightly altered version of the Empty Traversal test is used when comparing resumption to fix-up routines.
The handler, whether the actual resumption handler or the fix-up routine, always captures a variable at the base of the loop, and receives a reference to a variable at the raise site, either as a field on the exception or an argument to the fix-up routine.

% I don't actually know why that is here but not anywhere else.

%\section{Cost in Size}

\section{Results}
One result not directly related to \CFA, but important to keep in mind, is that the standard intuition about which languages should be faster often does not hold for exceptions.
For example, there are a few cases where Python out-performs \CFA, \Cpp and Java.
The most likely explanation is that, since exceptions are rarely considered to be the common case, the more optimized languages make that case expensive.
In addition, languages with high-level representations have a much easier time scanning the stack, as there is less to decode.

% Similarly, if a test does not change between resumption
% and termination in \CFA, then only one test is written and the result
% was put into the termination column.

% Raw Data:
% run-algol-a.sat
% ---------------
% Raise Empty   &  82687046678 &  291616256 &   3252824847 & 15422937623 & 14736271114 \\
% Raise D'tor   & 219933199603 &  297897792 & 223602799362 &         N/A &         N/A \\
% Raise Finally & 219703078448 &  298391745 &          N/A &         ...
& 18923060958 \\ % Raise Other   & 296744104920 & 2854342084 & 112981255103 & 15475924808 & 21293137454 \\ % Cross Handler &      9256648 &   13518430 &       769328 &     3486252 &    31790804 \\ % Cross Finally &       769319 &        N/A &          N/A &     2272831 &    37491962 \\ % Match All     &   3654278402 &   47518560 &   3218907794 &  1296748192 &   624071886 \\ % Match None    &   4788861754 &   58418952 &   9458936430 &  1318065020 &   625200906 \\ % % run-algol-thr-c % --------------- % Raise Empty   &   3757606400 &   36472972 &   3257803337 & 15439375452 & 14717808642 \\ % Raise D'tor   &  64546302019 &  102148375 & 223648121635 &         N/A &         N/A \\ % Raise Finally &  64671359172 &  103285005 &          N/A & 15442729458 & 18927008844 \\ % Raise Other   & 294143497130 & 2630130385 & 112969055576 & 15448220154 & 21279953424 \\ % Cross Handler &      9646462 &   11955668 &       769328 &     3453707 &    31864074 \\ % Cross Finally &       773412 &        N/A &          N/A &     2253825 &    37266476 \\ % Match All     &   3719462155 &   43294042 &   3223004977 &  1286054154 &   623887874 \\ % Match None    &   4971630929 &   55311709 &   9481225467 &  1310251289 &   623752624 \\ % % run-algol-04-a % -------------- % Raise Empty   & 0.0 & 0.0 &  3250260945 & 0.0 & 0.0 \\ % Raise D'tor   & 0.0 & 0.0 & 29017675113 & N/A & N/A \\ % Raise Finally & 0.0 & 0.0 &         N/A & 0.0 & 0.0 \\ % Raise Other   & 0.0 & 0.0 & 24411823773 & 0.0 & 0.0 \\ % Cross Handler & 0.0 & 0.0 &      769334 & 0.0 & 0.0 \\ % Cross Finally & 0.0 & N/A &         N/A & 0.0 & 0.0 \\ % Match All     & 0.0 & 0.0 &  3254283504 & 0.0 & 0.0 \\ % Match None    & 0.0 & 0.0 &  9476060146 & 0.0 & 0.0 \\ % run-plg7a-a.sat % --------------- % Raise Empty   &  57169011329 &  296612564 &   2788557155 & 17511466039 & 23324548496 \\ % Raise D'tor   & 150599858014 &  318443709 & 149651693682 &         N/A &         N/A \\ % Raise Finally & 148223145000 &  373325807 &          N/A &       
% ... & 29074552998 \\
% Raise Other   & 189463708732 & 3017109322 &  85819281694 & 17584295487 & 32602686679 \\
% Cross Handler &      8001654 &   13584858 &      1555995 &     6626775 &    41927358 \\
% Cross Finally &      1002473 &        N/A &          N/A &     4554344 &    51114381 \\
% Match All     &   3162460860 &   37315018 &   2649464591 &  1523205769 &   742374509 \\
% Match None    &   4054773797 &   47052659 &   7759229131 &  1555373654 &   744656403 \\
%
% run-plg7a-thr-a
% ---------------
% Raise Empty   &   3604235388 &   29829965 &   2786931833 & 17576506385 & 23352975105 \\
% Raise D'tor   &  46552380948 &  178709605 & 149834207219 &         N/A &         N/A \\
% Raise Finally &  46265157775 &  177906320 &          N/A & 17493045092 & 29170962959 \\
% Raise Other   & 195659245764 & 2376968982 &  86070431924 & 17552979675 & 32501882918 \\
% Cross Handler &    397031776 &   12503552 &      1451225 &     6658628 &    42304965 \\
% Cross Finally &      1136746 &        N/A &          N/A &     4468799 &    46155817 \\
% Match All     &   3189512499 &   39124453 &   2667795989 &  1525889031 &   733785613 \\
% Match None    &   4094675477 &   48749857 &   7850618572 &  1566713577 &   733478963 \\
%
% run-plg7a-04-a
% --------------
% 0.0 are unfilled.
% Raise Empty   & 0.0 & 0.0 &  2770781479 & 0.0 & 0.0 \\
% Raise D'tor   & 0.0 & 0.0 & 23530084907 & N/A & N/A \\
% Raise Finally & 0.0 & 0.0 &         N/A & 0.0 & 0.0 \\
% Raise Other   & 0.0 & 0.0 & 23816827982 & 0.0 & 0.0 \\
% Cross Handler & 0.0 & 0.0 &     1422188 & 0.0 & 0.0 \\
% Cross Finally & 0.0 & N/A &         N/A & 0.0 & 0.0 \\
% Match All     & 0.0 & 0.0 &  2671989778 & 0.0 & 0.0 \\
% Match None    & 0.0 & 0.0 &  7829059869 & 0.0 & 0.0 \\

% First, introduce the tables.
\autoref{t:PerformanceTermination}, \autoref{t:PerformanceResumption} and~\autoref{t:PerformanceFixupRoutines} show the test results.
In cases where a feature is not supported by a language, the test is skipped for that language and the result is marked N/A.
There are also cases where the feature is supported but measuring its cost is impossible.
This happened with Java, whose JIT optimizes away the tests and cannot be stopped~\cite{Dice21}.
These tests are marked N/C.
To get results in a consistent range (1 second to 1 minute is ideal; going higher is better than going lower), N, the number of iterations of the main loop in each test, is varied between tests.
It is given with the results and usually has a value in the millions.

An anomaly in some results comes from \CFA's use of gcc nested functions.
These nested functions are used to create closures that can access stack variables in their lexical scope.
However, if they do so, then they can cause the benchmark's run-time to increase by an order of magnitude.
The simplest solution is to make those values global variables instead of function-local variables.
% Do we know if editing a global inside nested function is a problem?
Tests that had to be modified to avoid this problem have been marked with a ``*'' in the results.

% Now come the tables themselves:
% You might need a wider window for this.
\begin{table}[htb]
\centering
\caption{Termination Performance Results (sec)}
\label{t:PerformanceTermination}
\begin{tabular}{|r|*{2}{|r r r r|}}
\hline
                       & \multicolumn{4}{c||}{AMD}         & \multicolumn{4}{c|}{ARM}  \\
\cline{2-9}
N\hspace{8pt}          & \multicolumn{1}{c}{\CFA} & \multicolumn{1}{c}{\Cpp} & \multicolumn{1}{c}{Java} & \multicolumn{1}{c||}{Python} & \multicolumn{1}{c}{\CFA} & \multicolumn{1}{c}{\Cpp} & \multicolumn{1}{c}{Java} & \multicolumn{1}{c|}{Python} \\
\hline
Empty Traversal (1M)   & 3.4   & 2.8   & 18.3  & 23.4      & 3.7   & 3.2   & 15.5  & 14.8  \\
D'tor Traversal (1M)   & 48.4  & 23.6  & N/A   & N/A       & 64.2  & 29.0  & N/A   & N/A   \\
Finally Traversal (1M) & 3.4*  & N/A   & 17.9  & 29.0      & 4.1*  & N/A   & 15.6  & 19.0  \\
Other Traversal (1M)   & 3.6*  & 23.2  & 18.2  & 32.7      & 4.0*  & 24.5  & 15.5  & 21.4  \\
Cross Handler (100M)   & 6.0   & 0.9   & N/C   & 37.4      & 10.0  & 0.8   & N/C   & 32.2  \\
Cross Finally (100M)   & 0.9   & N/A   & N/C   & 44.1      & 0.8   & N/A   & N/C   & 37.3  \\
Match All (10M)        & 32.9  & 20.7  & 13.4  & 4.9       & 36.2  & 24.5  & 12.0  & 3.1   \\
Match None (10M)       & 32.7  & 50.3  & 11.0  & 5.1       & 36.3  & 71.9  & 12.3  & 4.2   \\
\hline
\end{tabular}
\end{table}

\begin{table}[htb]
\centering
\caption{Resumption Performance Results (sec)}
\label{t:PerformanceResumption}
\begin{tabular}{|r||r||r|}
\hline
N\hspace{8pt}           & AMD   & ARM  \\
\hline
Empty Traversal (10M)   & 0.2   & 0.3  \\
D'tor Traversal (10M)   & 1.8   & 1.0  \\
Finally Traversal (10M) & 1.7   & 1.0  \\
Other Traversal (10M)   & 22.6  & 25.9 \\
Cross Handler (100M)    & 8.4   & 11.9 \\
Match All (100M)        & 2.3   & 3.2  \\
Match None (100M)       & 2.9   & 3.9  \\
\hline
\end{tabular}
\end{table}

\begin{table}[htb]
\centering
\small
\caption{Resumption/Fixup Routine Comparison (sec)}
\label{t:PerformanceFixupRoutines}
\setlength{\tabcolsep}{5pt}
\begin{tabular}{|r|*{2}{|r r r r|}}
\hline
              & \multicolumn{4}{c||}{AMD}             & \multicolumn{4}{c|}{ARM}      \\
\cline{2-9}
N\hspace{8pt} & \multicolumn{1}{c}{\CFA (R/F)} & \multicolumn{1}{c}{\Cpp} & \multicolumn{1}{c}{Java} & \multicolumn{1}{c||}{Python} & \multicolumn{1}{c}{\CFA (R/F)} & \multicolumn{1}{c}{\Cpp} & \multicolumn{1}{c}{Java} & \multicolumn{1}{c|}{Python} \\
\hline
Resume Empty (10M) & 3.8/3.5   & 14.7  & 2.3   & 176.1 & 0.3/0.1   & 8.9   & 1.2   & 119.9 \\
Resume Other (10M) & 4.0*/0.1* & 21.9  & 6.2   & 381.0 & 0.3*/0.1* & 13.2  & 5.0   & 290.7 \\
Try/Resume (100M)  & 8.8       & N/A   & N/A   & N/A   & 12.3      & N/A   & N/A   & N/A   \\
Match All (10M)    & 0.3       & N/A   & N/A   & N/A
& 0.3           & N/A   & N/A   & N/A   \\ Match None (10M)        & 0.3           & N/A   & N/A   & N/A   & 0.4           & N/A   & N/A   & N/A   \\ \begin{tabular}{|r|*{2}{|r r r r r|}} \hline & \multicolumn{5}{c||}{AMD}     & \multicolumn{5}{c|}{ARM}  \\ \cline{2-11} N\hspace{8pt}       & \multicolumn{1}{c}{Raise} & \multicolumn{1}{c}{\CFA} & \multicolumn{1}{c}{\Cpp} & \multicolumn{1}{c}{Java} & \multicolumn{1}{c||}{Python} & \multicolumn{1}{c}{Raise} & \multicolumn{1}{c}{\CFA} & \multicolumn{1}{c}{\Cpp} & \multicolumn{1}{c}{Java} & \multicolumn{1}{c|}{Python} \\ \hline Resume Empty (10M)  & 3.8  & 3.5  & 14.7  & 2.3   & 176.1 & 0.3  & 0.1  & 8.9   & 1.2   & 119.9 \\ %Resume Other (10M)  & 4.0* & 0.1* & 21.9  & 6.2   & 381.0 & 0.3* & 0.1* & 13.2  & 5.0   & 290.7 \\ \hline \end{tabular} \end{table} As stated, the performance tests are not attempting to compare exception handling across languages.  The only performance requirement is to ensure the \CFA EHM implementation runs in a reasonable amount of time, given its constraints. In general, the \CFA implement did very well. Each of the tests is analysed. % Now discuss the results in the tables. One result not directly related to \CFA but important to keep in mind is that, for exceptions the standard intuition about which languages should go faster often does not hold. For example, there are a few cases where Python out-performs \CFA, \Cpp and Java. The most likely explanation is that, since exceptions are rarely considered to be the common case, the more optimized languages make that case expensive to improve other cases. In addition, languages with high-level representations have a much easier time scanning the stack as there is less to decode. As stated, the performance tests are not attempting to show \CFA has a new competitive way of implementing exception handling. The only performance requirement is to insure the \CFA EHM has reasonable performance for prototyping. 
Although that may be hard to exactly quantify, we believe it has succeeded
in that regard.
Details on the different test cases follow.

\begin{description}
\item[Empty Traversal]
\CFA is slower than \Cpp, but is still faster than the other languages.
This is to be expected, as \CFA's implementation is closer to \Cpp's than
to the other languages'.

\item[D'tor Traversal]
Running destructors causes a huge slowdown in every language that supports
them.
\CFA has a higher proportionate slowdown, but it is similar to \Cpp's.
Considering how little work is done in each destructor, the cost likely
comes from the change of context required to do that work.

\item[Finally Traversal]
Speed is similar to Empty Traversal in all languages that support finally
clauses.
Only Python seems to have a larger-than-noise change in its run-time, and
even that is not large.
Despite the similarity between finally clauses and destructors, finally
clauses seem to avoid the spike in run-time that destructors have.
Possibly some optimization removes the cost of changing contexts.
\todo{OK, I think the finally clause may have been optimized out.}

\item[Other Traversal]
For \Cpp, stopping to check if a handler applies seems to be about as
expensive as stopping to run a destructor, resulting in a significant jump.
The other languages experience only a small increase in run-time.
The small increase likely comes from running the checks themselves, while
the spike is avoided because these languages do not pay the same kind of
overhead for switching to the check's context.
\todo{Could revisit Other Traversal, after Finally Traversal.}

\item[Cross Handler]
Here \CFA falls behind \Cpp by a much more significant margin.
This is likely because \CFA has to insert two extra function calls, while
\Cpp does not have to execute any additional instructions.
Python is much further behind.

\item[Cross Finally]
\CFA's performance now matches \Cpp's from Cross Handler.
If the code from the finally clause is inlined, which here is just an asm
comment, then there are no additional instructions to execute when exiting
the try statement normally.

\item[Conditional Match]
Both of the conditional matching tests can be considered on their own.
However, for evaluating the value of conditional matching itself, comparing
the two sets of results is useful.
Consider the massive jump in run-time for \Cpp going from match all to
match none, which none of the other languages have.
Some strange interaction is causing the run-time to more than double for
doing twice as many raises.
Java and Python avoid this problem and have similar run-times for both
tests, possibly through resource reuse or their program representation.
\CFA is built like \Cpp, yet it avoids the problem as well; this matches
the pattern of the conditional match, which makes the two execution paths
much more similar.
\end{description}

\begin{comment}
This observation means that while \CFA does not actually keep up with
Python in every case, it is usually no worse than roughly half the speed
of \Cpp.
This performance is good enough for the prototyping purposes of the project.

The test case where \CFA falls short is Raise Other, the case where the
stack is unwound including a bunch of non-matching handlers.
This slowdown seems to come from missing optimizations.
This suggests that the performance issue in Raise Other is just an
optimization not being applied.
Later versions of gcc may be able to optimize this case further, at least
down to the half of \Cpp mark.
A \CFA compiler that directly produced assembly could do even better as it
would not have to work across some of \CFA's current abstractions, like
the try terminate function.

Resumption exception handling is also incredibly fast.
Often an order of magnitude or two better than the best termination speed.
There is a simple explanation for this; traversing a linked list is much faster than examining and unwinding the stack. When resumption does not do as well its when more try statements are used per raise. Updating the internal linked list is not very expensive but it does add up. The relative speed of the Match All and Match None tests (within each language) can also show the effectiveness conditional matching as compared to catch and rethrow. \begin{itemize}[nosep] \item Java and Python get similar values in both tests. Between the interpreted code, a higher level representation of the call stack and exception reuse it it is possible the cost for a second throw can be folded into the first. % Is this due to optimization? \item Both types of \CFA are slightly slower if there is not a match. For termination this likely comes from unwinding a bit more stack through libunwind instead of executing the code normally. For resumption there is extra work in traversing more of the list and running more checks for a matching exceptions. % Resumption is a bit high for that but this is my best theory. \item Then there is \Cpp, which takes 2--3 times longer to catch and rethrow vs. just the catch. This is very high, but it does have to repeat the same process of unwinding the stack and may have to parse the LSDA of the function with the catch and rethrow twice, once before the catch and once after the rethrow. % I spent a long time thinking of what could push it over twice, this is all % I have to explain it. \end{itemize} The difference in relative performance does show that there are savings to be made by performing the check without catching the exception. \end{comment} \begin{comment} From: Dave Dice To: "Peter A. Buhr" Subject: Re: [External] : JIT Date: Mon, 16 Aug 2021 01:21:56 +0000 > On 2021-8-15, at 7:14 PM, Peter A. Buhr wrote: > > My student is trying to measure the cost of installing a try block with a > finally clause in Java. 
> > We tried the random trick (see below). But if the try block is comment out, the > results are the same. So the program measures the calls to the random number > generator and there is no cost for installing the try block. > > Maybe there is no cost for a try block with an empty finally, i.e., the try is > optimized away from the get-go. There's quite a bit of optimization magic behind the HotSpot curtains for try-finally.  (I sound like the proverbial broken record (:>)). In many cases we can determine that the try block can't throw any exceptions, so we can elide all try-finally plumbing.  In other cases, we can convert the try-finally to normal if-then control flow, in the case where the exception is thrown into the same method.  This makes exceptions _almost cost-free.  If we actually need to "physically" rip down stacks, then things get expensive, impacting both the throw cost, and inhibiting other useful optimizations at the catch point.  Such "true" throws are not just expensive, they're _very expensive.  The extremely aggressive inlining used by the JIT helps, because we can convert cases where a heavy rip-down would normally needed back into simple control flow. Other quirks involve the thrown exception object.  If it's never accessed then we're apply a nice set of optimizations to avoid its construction.  If it's accessed but never escapes the catch frame (common) then we can also cheat. And if we find we're hitting lots of heavy rip-down cases, the JIT will consider recompilation - better inlining -- to see if we can merge the throw and catch into the same physical frame, and shift to simple branches. In your example below, System.out.print() can throw, I believe.  (I could be wrong, but most IO can throw).  Native calls that throw will "unwind" normally in C++ code until they hit the boundary where they reenter java emitted code, at which point the JIT-ed code checks for a potential pending exception.  
So in a sense the throw point is implicitly after the call to the native method, so we can usually make those cases efficient. Also, when we're running in the interpreter and warming up, we'll notice that the == 42 case never occurs, and so when we start to JIT the code, we elide the call to System.out.print(), replacing it (and anything else which appears in that if x == 42 block) with a bit of code we call an "uncommon trap".  I'm presuming we encounter 42 rarely.  So if we ever hit the x == 42 case, control hits the trap, which triggers synchronous recompilation of the method, this time with the call to System.out.print() and, because of that, we now to adapt the new code to handle any traps thrown by print().  This is tricky stuff, as we may need to rebuild stack frames to reflect the newly emitted method.  And we have to construct a weird bit of "thunk" code that allows us to fall back directly into the newly emitted "if" block.  So there's a large one-time cost when we bump into the uncommon trap and recompile, and subsequent execution might get slightly slower as the exception could actually be generated, whereas before we hit the trap, we knew the exception could never be raised. Oh, and things also get expensive if we need to actually fill in the stack trace associated with the exception object.  Walking stacks is hellish. Quite a bit of effort was put into all this as some of the specjvm benchmarks showed significant benefit. It's hard to get sensible measurements as the JIT is working against you at every turn.  What's good for the normal user is awful for anybody trying to benchmark.  Also, all the magic results in fairly noisy and less reproducible results. Regards Dave p.s., I think I've mentioned this before, but throwing in C++ is grim as unrelated throws in different threads take common locks, so nothing scales as you might expect. 
\end{comment}

Moving on to resumption, there is one general note: resumption is
\textit{fast}.
The only test where it fell behind termination is Cross Handler.
In every other case, the number of iterations had to be increased by a
factor of 10 to bring the run-time into an appropriate range, and in some
cases resumption still took less time.

% I tried \paragraph and \subparagraph, maybe if I could adjust spacing
% between paragraphs those would work.
\begin{description}
\item[Empty Traversal]
See above for the general speed-up notes.
This result is not surprising, as resumption's linked-list approach means
that traversing over stack frames without a resumption handler is $O(1)$.

\item[D'tor Traversal]
Resumption does have the same spike in run-time that termination has.
The run-time is actually very similar to Finally Traversal.
As resumption does not unwind the stack, both destructors and finally
clauses are run while walking down the stack normally, so it follows that
their performance is similar.

\item[Finally Traversal]
The increase in run-time from Empty Traversal (once adjusted for the
number of iterations) is roughly the same as for termination.
This suggests that the cost of running a finally clause is largely
independent of whether termination or resumption triggers it.

\item[Other Traversal]
Traversing across handlers reduces resumption's advantage, as it actually
has to stop and check each one.
Resumption still came out ahead (adjusting for iterations), but by much
less than in the other cases.

\item[Cross Handler]
This is the only test case where resumption could not keep up with
termination, although the difference is not as significant as in many
other cases.
It is simply a matter of where the costs come from:
even if \CFA termination is not ``zero-cost'', passing through an empty
function still seems to be cheaper than updating the handler list.

\item[Conditional Match]
Resumption shows a slight slowdown if the exception is not matched by the
first handler, which follows from the fact that the second handler now has
to be checked.
However, the difference is not large.
\end{description}

Finally, there are the results of the resumption/fixup routine comparison.
These results are surprisingly varied; it is possible that creating a
closure has more to do with the performance than passing the argument
through layers of calls.
Even with 100 stack frames, resumption is only about as fast as manually
passing a fixup routine, so there is a cost for the additional power and
flexibility that exceptions provide.