Timestamp:
Sep 7, 2023, 10:05:20 AM (10 months ago)
Author:
Peter A. Buhr <pabuhr@…>
Branches:
master
Children:
154672d
Parents:
92d8cda
Message:

proofread chapters CFA_concurrency, intro, and conclusion

Location:
doc/theses/colby_parsons_MMAth/text
Files:
3 edited

  • doc/theses/colby_parsons_MMAth/text/CFA_concurrency.tex

    r92d8cda r79b05224  
    11\chapter{Concurrency in \CFA}\label{s:cfa_concurrency}
    22
    3 The groundwork for concurrency in \CFA was laid by Thierry Delisle in his Master's Thesis~\cite{Delisle18}. 
    4 In that work, he introduced generators, coroutines, monitors, and user-level threading. 
    5 Not listed in that work were basic concurrency features needed as building blocks, such as locks, futures, and condition variables, which he also added to \CFA.
     3The groundwork for concurrency in \CFA was laid by Thierry Delisle in his Master's Thesis~\cite{Delisle18}.
     4In that work, he introduced generators, coroutines, monitors, and user-level threading.
     5Not listed in that work were basic concurrency features needed as building blocks, such as locks, futures, and condition variables.
    66
    77\section{Threading Model}\label{s:threading}
    8 \CFA provides user-level threading and supports an $M$:$N$ threading model where $M$ user threads are scheduled on $N$ kernel threads, where both $M$ and $N$ can be explicitly set by the user.
    9 Kernel threads are created by declaring a @processor@ structure.
    10 User-thread types are defined by creating a @thread@ aggregate-type, \ie replace @struct@ with @thread@.
    11 For each thread type a corresponding @main@ routine must be defined, which is where the thread starts running once it is created.
    12 Examples of \CFA  user thread and processor creation are shown in \VRef[Listing]{l:cfa_thd_init}.
    138
     9\CFA provides user-level threading and supports an $M$:$N$ threading model where $M$ user threads are scheduled on $N$ kernel threads and both $M$ and $N$ can be explicitly set by the programmer.
     10Kernel threads are created by declaring processor objects;
      11user threads are created by declaring thread objects.
      12\VRef[Listing]{l:cfa_thd_init} shows a typical example of creating a \CFA user-thread type, and then declaring processor ($N$) and thread ($M$) objects.
    1413\begin{cfa}[caption={Example of \CFA user thread and processor creation},label={l:cfa_thd_init}]
    15 @thread@ my_thread {...};                       $\C{// user thread type}$
    16 void @main@( my_thread & this ) {       $\C{// thread start routine}$
      14@thread@ my_thread {                            $\C{// user thread type (like a structure)}$
     15        ... // arbitrary number of field declarations
     16};
     17void @main@( @my_thread@ & this ) {     $\C{// thread start routine}$
    1718        sout | "Hello threading world";
    1819}
    19 
    20 int main() {
     20int main() {                                            $\C{// program starts with a processor (kernel thread)}$
    2121        @processor@ p[2];                               $\C{// add 2 processors = 3 total with starting processor}$
    2222        {
    23                 my_thread t[2], * t3 = new();   $\C{// create 3 user threads, running in main routine}$
     23                @my_thread@ t[2], * t3 = new(); $\C{// create 3 user threads, running in routine main}$
    2424                ... // execute concurrently
    2525                delete( t3 );                           $\C{// wait for thread to end and deallocate}$
    2626        } // wait for threads to end and deallocate
    27 }
     27} // deallocate additional kernel threads
    2828\end{cfa}
    29 
    30 When processors are added, they are added alongside the existing processor given to each program.
    31 Thus, for $N$ processors, allocate $N-1$ processors.
    32 A thread is implicitly joined at deallocation, either implicitly at block exit for stack allocation or explicitly at @delete@ for heap allocation.
    33 The thread performing the deallocation must wait for the thread to terminate before the deallocation can occur.
      29A thread type is defined using the aggregate kind @thread@.
      30For each thread type, a corresponding @main@ routine must be defined, which is where the thread starts running once a thread object is created.
      31The @processor@ declaration adds additional kernel threads alongside the existing processor given to each program.
     32Thus, for $N$ processors, allocate $N-1$ processors.
     33A thread is implicitly joined at deallocation, either implicitly at block exit for stack allocation or explicitly at @delete@ for heap allocation.
     34The thread performing the deallocation must wait for the thread to terminate before the deallocation can occur.
    3435A thread terminates by returning from the main routine where it starts.
    3536
    36 \section{Existing Concurrency Features}
     37\section{Existing and New Concurrency Features}
     38
    3739\CFA currently provides a suite of concurrency features including futures, locks, condition variables, generators, coroutines, and monitors.
    3840Examples of these features are omitted as most of them are the same as their counterparts in other languages.
    3941It is worthwhile to note that all concurrency features added to \CFA are made to be compatible with each other.
    40 The laundry list of features above and the ones introduced in this thesis can be used in the same program without issue.
     42The laundry list of features above and the ones introduced in this thesis can be used in the same program without issue, and the features are designed to interact in meaningful ways.
      43For example, a thread can interact with a monitor, which can interact with a coroutine, which can interact with a generator.
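
To give a flavour of this interoperability, the following minimal sketch shows a user thread calling into a monitor; the names @counter@, @inc@, and @worker@ are illustrative only, and the monitor and @mutex@-parameter syntax follow Delisle's prior \CFA work rather than code introduced in this thesis.
\begin{cfa}
monitor counter { int cnt; };                   $\C{// monitor type: mutex calls are mutually exclusive}$
void inc( counter & @mutex@ c ) { c.cnt += 1; } $\C{// acquires the monitor on entry, releases on return}$
counter shared;                                 $\C{// shared monitor object}$
thread worker {};                               $\C{// user thread type}$
void main( worker & this ) { inc( shared ); }   $\C{// thread interacts with the monitor}$
\end{cfa}
Declaring several @worker@ objects in a block then increments @shared@ safely, with the threads joined at block exit as in \VRef[Listing]{l:cfa_thd_init}.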
    4144
    4245Solving concurrent problems requires a diverse toolkit.
  • doc/theses/colby_parsons_MMAth/text/conclusion.tex

    r92d8cda r79b05224  
    55% ======================================================================
    66
    7 The goal of this thesis was to expand the concurrent support that \CFA offers to fill in gaps and support language users' ability to write safe and efficient concurrent programs.
    8 The presented features achieves this goal, and provides users with the means to write scalable programs in \CFA through multiple avenues.
    9 Additionally, the tools presented include safety and productivity features from deadlock detection, to detection of common programming errors, easy concurrent shutdown, and toggleable performance statistics.
    10 Programmers often have preferences between computing paradigms and concurrency is no exception.
    11 If users prefer the message passing paradigm of concurrency, \CFA now provides message passing utilities in the form of an actor system and channels.
    12 For shared memory concurrency, the mutex statement provides a safe and easy-to-use interface for mutual exclusion.
    13 The @waituntil@ statement aids in writing concurrent programs in both the message passing and shared memory paradigms of concurrency.
    14 Furthermore, no other language provides a synchronous multiplexing tool polymorphic over resources like \CFA's @waituntil@.
    15 This work successfully provides users with familiar concurrent language support, but with additional value added over similar utilities in other popular languages.
     7The goal of this thesis is to expand concurrent support in \CFA to fill in gaps and increase support for writing safe and efficient concurrent programs.
      8The presented features achieve this goal and provide users with the means to write scalable concurrent programs in \CFA through multiple avenues.
     9Additionally, the tools presented provide safety and productivity features that support detection of deadlock and other common concurrency errors, easy concurrent shutdown, and toggleable performance statistics.
    1610
    17 On overview of the contributions in this thesis include the following:
     11For locking, the mutex statement provides a safe and easy-to-use interface for mutual exclusion.
     12If programmers have a preference for a message-passing paradigm, \CFA now supports it in the form of channels and/or actors.
     13The @waituntil@ statement simplifies writing concurrent programs in both the message passing and shared memory paradigms of concurrency.
     14Finally, no other programming language provides a synchronous multiplexing tool that is polymorphic over resources like \CFA's @waituntil@.
     15This work successfully provides users with familiar concurrent-language support, but with additional value added over similar utilities in other popular languages.
     16
      17An overview of the contributions made in this thesis includes the following:
    1818\begin{enumerate}
    19 \item The mutex statement, which provides performant and deadlock-free multiple lock acquisition.
    20 \item Channels with comparable performance to Go, that have safety and productivity features including deadlock detection, and an easy-to-use exception-based channel @close@ routine.
    21 \item An in-memory actor system that achieved the lowest latency message send of systems tested due to the novel copy-queue data structure. The actor system presented has built-in detection of six common actor errors, and it has good performance compared to other systems on all benchmarks.
    22 \item A @waituntil@ statement which tackles the hard problem of allowing a thread to safely synch\-ronously wait for some set of concurrent resources.
     19\item The mutex statement, which provides performant and deadlock-free multi-lock acquisition.
     20\item Channels with comparable performance to Go, which have safety and productivity features including deadlock detection and an easy-to-use exception-based channel @close@ routine.
     21\item An in-memory actor system, which achieves the lowest latency message send of systems tested due to the novel copy-queue data structure.
     22\item As well, the actor system has built-in detection of six common actor errors, with comparable performance to other systems across all benchmarks.
     23\item A @waituntil@ statement, which tackles the hard problem of allowing a thread to safely wait synchronously for an arbitrary set of concurrent resources.
    2324\end{enumerate}
    2425
    25 The features presented are commonly used in conjunction to solve concurrent problems.
    26 The @waituntil@ statement, the @mutex@ statement, and channels will all likely see use in a program where a thread operates as an administrator or server which accepts and distributes work among channels based on some shared state.
    27 The @mutex@ statement sees use across almost all concurrent code in \CFA, since it is used with the stream operator @sout@ to provide thread-safe output.
    28 While not yet implemented, the polymorphic support of the @waituntil@ statement could see use in conjunction with the actor system to enable user threads outside the actor system to wait for work to be done, or for actors to finish.
    29 A user of \CFA does not have to solely subscribe to the message passing or shared memory concurrent paradigm.
    30 As such, channels in \CFA are often used to pass pointers to shared memory that may still need mutual exclusion, requiring the @mutex@ statement to also be used.
      26The added features are now commonly used to solve concurrent problems in \CFA.
     27The @mutex@ statement sees use across almost all concurrent code in \CFA, as it is the simplest mechanism for providing thread-safe input and output.
     28The channels and the @waituntil@ statement see use in programs where a thread operates as a server or administrator, which accepts and distributes work among channels based on some shared state.
     29When implemented, the polymorphic support of the @waituntil@ statement will see use with the actor system to enable user threads outside the actor system to wait for work to be done or for actors to finish.
     30Finally, the new features are often combined, \eg channels pass pointers to shared memory that may still need mutual exclusion, requiring the @mutex@ statement to be used.
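
As a minimal sketch of this last combination, the following code guards shared state reached through a pointer (\eg received over a channel) with the @mutex@ statement, and uses @mutex( sout )@ for thread-safe output; the @work@ structure, the @process@ routine, and the choice of @single_acquisition_lock@ are illustrative assumptions rather than code from the thesis, and includes are omitted as in the other listings.
\begin{cfa}
struct work { int * data; single_acquisition_lock * lock; }; $\C{// shared state and its protecting lock}$
void process( work w ) {
	int result;
	mutex( *w.lock ) {                          $\C{// deadlock-free acquisition; released at block exit}$
		*w.data += 1;
		result = *w.data;
	}
	mutex( sout ) sout | "processed" | result;  $\C{// sout lock provides thread-safe output}$
}
\end{cfa}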
    3131
    3232From the novel copy-queue data structure in the actor system to the plethora of user-supporting safety features, all these utilities build upon existing concurrent tooling with value added.
    3333Performance results verify that each new feature is comparable or better than similar features in other programming languages.
    34 All in all, this suite of concurrent tools expands users' ability to easily write safe and performant multi-threaded programs in \CFA.
      34All in all, this suite of concurrent tools expands a \CFA programmer's ability to easily write safe and performant multi-threaded programs.
    3535
    3636\section{Future Work}
     
    3838\subsection{Further Implicit Concurrency}
    3939
    40 This thesis only scratches the surface of implicit concurrency by providing an actor system.
      40This thesis only scratches the surface of implicit concurrency by providing channels and actors.
    4141There is room for more implicit concurrency tools in \CFA.
    42 User-defined implicit concurrency in the form of annotated loops or recursive concurrent functions exists in many other languages and libraries~\cite{uC++,OpenMP}.
     42User-defined implicit concurrency in the form of annotated loops or recursive concurrent functions exists in other languages and libraries~\cite{uC++,OpenMP}.
    4343Similar implicit concurrency mechanisms could be implemented and expanded on in \CFA.
    4444Additionally, the problem of automatic parallelization of sequential programs via the compiler is an interesting research space that other languages have approached~\cite{wilson94,haskell:parallel} and could be explored in \CFA.
     
    4646\subsection{Advanced Actor Stealing Heuristics}
    4747
    48 In this thesis, two basic victim-selection heuristics are chosen when implementing the work stealing actor system.
    49 Good victim selection is an active area of work stealing research, especially when taking into account NUMA effects and cache locality~\cite{barghi18,wolke17}.
      48In this thesis, two basic victim-selection heuristics are chosen when implementing the work-stealing actor system.
     49Good victim selection is an active area of work-stealing research, especially when taking into account NUMA effects and cache locality~\cite{barghi18,wolke17}.
    5050The actor system in \CFA is modular, so exploration of other victim-selection heuristics for queue stealing could be provided by pluggable modules.
    5151Another question in work stealing is: when should a worker thread steal?
    52 Work stealing systems can often be too aggressive when stealing, causing the cost of the steal to be have a negative rather positive effect on performance.
      52Work-stealing systems can often be too aggressive when stealing, causing the cost of the steal to have a negative rather than a positive effect on performance.
    5353Given that thief threads often have cycles to spare, there is room for more nuanced approaches when stealing.
    5454Finally, there is the very difficult problem of blocking and unblocking idle threads for workloads with extreme oscillations in CPU needs.
     
    5656\subsection{Synchronously Multiplexing System Calls}
    5757
    58 There are many tools that try to synchronously wait for or asynchronously check I/O, since improvements in this area pay dividends in many areas of computer science~\cite{linux:select,linux:poll,linux:epoll,linux:iouring}.
     58There are many tools that try to synchronously wait for or asynchronously check I/O.
      59Improvements in this area pay dividends in many areas of I/O-based programming~\cite{linux:select,linux:poll,linux:epoll,linux:iouring}.
    5960Research on improving user-space tools to synchronize over I/O and other system calls is an interesting area to explore in the world of concurrent tooling.
    6061Specifically, incorporating I/O into the @waituntil@ statement would allow a network server to work with multiple kinds of asynchronous I/O interconnects without using traditional event loops.
     
    6970The semantics and safety of these builtins require careful navigation, since the user must have a deep understanding of concurrent memory-ordering models.
    7071Furthermore, these atomics also often require a user to understand how to fence appropriately to ensure correctness.
    71 All these problems and more could benefit from language support in \CFA.
     72All these problems and more would benefit from language support in \CFA.
    7273Adding good language support for atomics is a difficult problem, which if solved well, would allow for easier and safer writing of low-level concurrent code.
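
As an illustration of the burden these builtins place on users, the following sketch uses plain C11 @<stdatomic.h>@ operations, which are assumed to be usable from \CFA through its C compatibility; weakening the release/acquire pairing to relaxed ordering silently breaks the stated guarantee.
\begin{cfa}
#include <stdatomic.h>
int data;                                       $\C{// payload written before the flag is raised}$
atomic_int flag = 0;
void produce() {
	data = 42;                                  $\C{// plain store}$
	atomic_store_explicit( &flag, 1, memory_order_release ); $\C{// publish with release ordering}$
}
void consume() {
	while ( atomic_load_explicit( &flag, memory_order_acquire ) == 0 ) {} $\C{// acquire pairs with release}$
	// here data == 42 is guaranteed; with memory_order_relaxed it is not
}
\end{cfa}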
    7374
  • doc/theses/colby_parsons_MMAth/text/intro.tex

    r92d8cda r79b05224  
    55% ======================================================================
    66
    7 Concurrent programs are the wild west of programming because determinism and simple ordering of program operations go out the window. 
    8 To seize the reins and write performant and safe concurrent code, high-level concurrent-language features are needed. 
    9 Like any other craftsmen, programmers are only as good as their tools, and concurrent tooling and features are no exception. 
     7Concurrent programs are the wild west of programming because determinism and simple ordering of program operations go out the window.
     8To seize the reins and write performant and safe concurrent code, high-level concurrent-language features are needed.
      9Like other craftsmen, programmers are only as good as their tools, and concurrent tooling and features are no exception.
    1010
    11 This thesis presents a suite of high-level concurrent-language features implemented in the new programming-language \CFA.
    12 These features aim to improve the performance of concurrent programs, aid in writing safe programs, and assist user productivity by improving the ease of concurrent programming.
    13 The groundwork for concurrent features in \CFA was implemented by Thierry Delisle~\cite{Delisle18}, who contributed the threading system, coroutines, monitors and other tools.
    14 This thesis builds on top of that foundation by providing a suite of high-level concurrent features.
    15 The features include a @mutex@ statement, channels, a @waituntil@ statement, and an actor system.
    16 All of these features exist in other programming languages in some shape or form, however this thesis extends the original ideas by improving performance, productivity, and safety.
     11This thesis presents a suite of high-level concurrent-language features implemented in the new programming-language \CFA.
     12These features aim to improve the performance of concurrent programs, aid in writing safe programs, and assist user productivity by improving the ease of concurrent programming.
     13The groundwork for concurrent features in \CFA was designed and implemented by Thierry Delisle~\cite{Delisle18}, who contributed the threading system, coroutines, monitors and other basic concurrency tools.
     14This thesis builds on top of that foundation by providing a suite of high-level concurrent features.
     15The features include a @mutex@ statement, channels, a @waituntil@ statement, and an actor system.
     16All of these features exist in other programming languages in some shape or form;
     17however, this thesis extends the original ideas by improving performance, productivity, and safety.
    1718
    1819\section{The Need For Concurrent Features}
    19 Asking a programmer to write a complex concurrent program without any concurrent language features is asking them to undertake a very difficult task.
    20 They would only be able to rely on the atomicity that their hardware provides and would have to build up from there.
    21 This would be like asking a programmer to write a complex sequential program only in assembly.
    22 Both are doable, but would often be easier and less error prone with higher level tooling.
    2320
    24 Concurrent programming has many pitfalls that are unique and do not show up in sequential code:
     21% Asking a programmer to write a complex concurrent program without any concurrent language features is asking them to undertake a very difficult task.
     22% They would only be able to rely on the atomicity that their hardware provides and would have to build up from there.
     23% This would be like asking a programmer to write a complex sequential program only in assembly.
     24% Both are doable, but would often be easier and less error prone with higher level tooling.
     25
     26Concurrent programming has many unique pitfalls that do not appear in sequential programming:
    2527\begin{enumerate}
    26 \item Deadlock, where threads cyclically wait on resources, blocking them indefinitely.
      28\item Race conditions, where thread orderings can produce arbitrary behaviours, resulting in correctness problems (a minimal sketch of this pitfall follows the list).
    2729\item Livelock, where threads constantly attempt a concurrent operation unsuccessfully, resulting in no progress being made.
    28 \item Race conditions, where thread orderings can result in differing behaviours and correctness of a program execution.
    29 \item Starvation, where threads may be deprived of access to some shared resource due to unfairness and never make progress.
     30\item Starvation, where \emph{some} threads constantly attempt a concurrent operation unsuccessfully, resulting in partial progress being made.
     31% where some threads may be deprived access to some shared resource due to unfairness and never make progress.
     32\item Deadlock, where some threads wait for an event that cannot occur, blocking them indefinitely, resulting in no progress being made.
    3033\end{enumerate}
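
As a minimal sketch of the first pitfall, the following race condition lets two \CFA user threads (introduced in a later chapter) increment a shared counter without mutual exclusion; the names @racer@ and @counter@ are illustrative and includes are omitted for brevity.
\begin{cfa}
int counter = 0;                                $\C{// shared, unprotected}$
thread racer {};
void main( racer & this ) {
	for ( int i = 0; i < 10000; i += 1 ) counter += 1; $\C{// unprotected read-modify-write}$
}
int main() {
	{ racer r[2]; }                             $\C{// run two racers; joined at block exit}$
	sout | counter;                             $\C{// often prints less than 20000}$
}
\end{cfa}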
    31 Even with the guiding hand of concurrent tools these pitfalls can still catch unwary programmers, but good language support can prevent, detect, and mitigate these problems.
      34Even with the guiding hand of concurrent tools, these pitfalls still catch unwary programmers, but good language support helps significantly to prevent, detect, and mitigate these problems.
    3235
    33 \section{A Brief Overview}
     36\section{Thesis Overview}
    3437
    35 The first chapter of this thesis aims to familiarize the reader with the language \CFA.
    36 In this chapter, syntax and features of the \CFA language that appear in this work are discussed The next chapter briefly discusses prior concurrency work in \CFA and how this work builds on top of existing features.
    37 The remaining chapters each introduce a concurrent language feature, discuss prior related work, and present contributions which are then benchmarked against other languages and systems.
    38 The first of these chapters discusses the @mutex@ statement, a language feature that improves ease of use and safety of lock usage.
    39 The @mutex@ statement is compared both in terms of safety and performance with similar tools in \CC and Java.
    40 The following chapter discusses channels, a message passing concurrency primitive that provides an avenue for safe synchronous and asynchronous communication across threads.
    41 Channels in \CFA are compared to Go, which popularized the use of channels in modern concurrent programs.
    42 The following chapter discusses the \CFA actor system.
    43 The \CFA actor system is a close cousin of channels, as it also belongs to the message passing paradigm of concurrency.
    44 However, the actor system provides a great degree of abstraction and ease of scalability, making it useful for a different range of problems than channels.
    45 The actor system in \CFA is compared with a variety of other systems on a suite of benchmarks, where it achieves significant performance gains over other systems due to its design.
    46 The final chapter discusses the \CFA @waituntil@ statement which provides the ability to synchronize while waiting for a resource, such as acquiring a lock, accessing a future, or writing to a channel.
    47 The @waituntil@ statement presented provides greater flexibility and expressibility than similar features in other languages.
    48 All in all, the features presented aim to fill in gaps in the current \CFA concurrent language support, and enable users to write a wider range of complex concurrent programs with ease.
     38Chapter~\ref{s:cfa} of this thesis aims to familiarize the reader with the language \CFA.
     39In this chapter, syntax and features of the \CFA language that appear in this work are discussed.
     40Chapter~\ref{s:cfa_concurrency} briefly discusses prior concurrency work in \CFA, and how the work in this thesis builds on top of this existing framework.
      41Each remaining chapter introduces an additional \CFA concurrent-language feature, discussing prior related work, presenting extensions over prior features, and using benchmarks to compare the performance of this feature with corresponding or similar features in other languages and systems.
     42
     43Chapter~\ref{s:mutexstmt} discusses the @mutex@ statement, a language feature that provides safe and simple lock usage.
     44The @mutex@ statement is compared both in terms of safety and performance with similar mechanisms in \CC and Java.
     45Chapter~\ref{s:channels} discusses channels, a message passing concurrency primitive that provides for safe synchronous and asynchronous communication among threads.
     46Channels in \CFA are compared to Go's channels, which popularized the use of channels in modern concurrent programs.
     47Chapter~\ref{s:actors} discusses the \CFA actor system.
     48An actor system is a close cousin of channels, as it also belongs to the message passing paradigm of concurrency.
     49However, an actor system provides a greater degree of abstraction and ease of scalability, making it useful for a different range of problems than channels.
     50The actor system in \CFA is compared with a variety of other systems on a suite of benchmarks. %, where it achieves significant performance gains over other systems due to its design.
     51Chapter~\ref{s:waituntil} discusses the \CFA @waituntil@ statement, which provides the ability to synchronize while waiting for a resource, such as acquiring a lock, accessing a future, or writing to a channel.
     52The \CFA @waituntil@ statement provides greater flexibility and expressibility than similar features in other languages.
     53All in all, the features presented aim to fill in gaps in the current \CFA concurrent-language support, enabling users to write a wider range of complex concurrent programs with ease.
    4954
    5055\section{Contributions}
    51 This work presents the following contributions:
    52 \begin{enumerate}
    53 \item The @mutex@ statement which:
    54 \begin{itemize}[itemsep=0pt]
    55 \item
    56 provides deadlock-free multiple lock acquisition,
    57 \item
    58 clearly denotes lock acquisition and release,
    59 \item
    60 and has good performance irrespective of lock ordering.
    61 \end{itemize}
    62 \item Channels which:
      56This work presents the following contributions for each of the added language features:
     57\begin{enumerate}[leftmargin=*]
     58\item The @mutex@ statement that:
     59    \begin{itemize}[itemsep=0pt]
     60    \item
     61    provides deadlock-free multiple lock acquisition,
     62    \item
     63    clearly denotes lock acquisition and release,
     64    \item
     65    and has good performance irrespective of lock ordering.
     66    \end{itemize}
     67\item The channel that:
    6368\begin{itemize}[itemsep=0pt]
    6469    \item
     
    7176    and provides toggle-able statistics for performance tuning.
    7277\end{itemize}
    73 \item An in-memory actor system that:
     78\item The in-memory actor system that:
    7479\begin{itemize}[itemsep=0pt]
    7580    \item
     
    8287    gains performance through static-typed message sends, eliminating the need for dynamic dispatch,
    8388    \item
    84     introduces the copy queue, an array based queue specialized for the actor use case to minimize calls to the memory allocator,
     89    introduces the copy queue, an array-based queue specialized for the actor use-case to minimize calls to the memory allocator,
    8590    \item
    8691    has robust detection of six tricky, but common actor programming errors,
     
    9095    and provides toggle-able statistics for performance tuning.
    9196\end{itemize}
    92 
    93 \item A @waituntil@ statement which:
     97\item The @waituntil@ statement that:
    9498\begin{itemize}[itemsep=0pt]
    9599    \item
    96100    is the only known polymorphic synchronous multiplexing language feature,
    97101    \item
    98     provides greater expressibility of waiting conditions than other languages,
     102    provides greater expressibility for waiting conditions than other languages,
    99103    \item
    100     and achieves comparable performance to similar features in two other languages,
     104    and achieves comparable performance to similar features in two other languages.
    101105\end{itemize}
    102106\end{enumerate}