Changeset 512d937c
- Timestamp: Mar 24, 2023, 4:52:50 PM (20 months ago)
- Branches: ADT, ast-experimental, master
- Children: 5fd5de2
- Parents: 75d874a
- Location: doc/theses/colby_parsons_MMAth
- Files: 8 edited
doc/theses/colby_parsons_MMAth/Makefile
r75d874a → r512d937c

    ## Define the text source files.

    SOURCES = ${addsuffix .tex, \
        text/CFA_intro \
        text/actors \
        text/frontpgs \
        text/CFA_concurrency \
        thesis \
        text/mutex_stmt \
+       text/channels \
    }
    …
        figures/pykeCFABalance-Multi \
        figures/nasusCFABalance-Multi \
+       figures/pyke_Aggregate_Lock_2 \
+       figures/pyke_Aggregate_Lock_4 \
+       figures/pyke_Aggregate_Lock_8 \
+       figures/nasus_Aggregate_Lock_2 \
+       figures/nasus_Aggregate_Lock_4 \
+       figures/nasus_Aggregate_Lock_8 \
    }
doc/theses/colby_parsons_MMAth/glossary.tex
r75d874a → r512d937c

    \newacronym{numa}{NUMA}{Non-Uniform Memory Access}
    \newacronym{rtti}{RTTI}{Run-Time Type Information}
+   \newacronym{fcfs}{FCFS}{First Come First Served}
doc/theses/colby_parsons_MMAth/local.bib
r75d874a → r512d937c

    @string{pldi="Programming Language Design and Implementation"}

-   @inproceedings{barghi18,
-     title={Work-stealing, locality-aware actor scheduling},
-     author={Barghi, Saman and Karsten, Martin},
-     booktitle={2018 IEEE International Parallel and Distributed Processing Symposium (IPDPS)},
-     pages={484--494},
-     year={2018},
-     organization={IEEE}
-   }
-
    @inproceedings{wolke17,
      title={Locality-guided scheduling in caf},
      …
      pages={11--20},
      year={2017}
    }
-
-   @mastersthesis{Delisle18,
-     author={{Delisle, Thierry}},
-     title={Concurrency in C∀},
-     year={2018},
-     publisher="UWSpace",
-     url={http://hdl.handle.net/10012/12888}
-   }
    …
      url={http://hdl.handle.net/10012/18941}
    }
+
+   @article{Hoare78,
+     title={Communicating sequential processes},
+     author={Hoare, Charles Antony Richard},
+     journal={Communications of the ACM},
+     volume={21},
+     number={8},
+     pages={666--677},
+     year={1978},
+     publisher={ACM New York, NY, USA}
+   }
doc/theses/colby_parsons_MMAth/text/CFA_intro.tex
r75d874a → r512d937c

    \section{References}
-   References in \CFA are similar to references in \CC, however in \CFA references are rebindable, and support multi-level referencing. References in \CFA are a layer of syntactic sugar over pointers to reduce the number of ref/deref operations needed with pointer usage. Some examples of references in \CFA are shown in Listing~\ref{l:cfa_ref}.
+   References in \CFA are similar to references in \CC, however in \CFA references are rebindable, and support multi-level referencing. References in \CFA are a layer of syntactic sugar over pointers to reduce the number of ref/deref operations needed with pointer usage. Some examples of references in \CFA are shown in Listing~\ref{l:cfa_ref}. Another related item to note is that the \CFA equivalent of \CC's \code{nullptr} is \code{0p}.

    \begin{cfacode}[tabsize=3,caption={Example of \CFA references},label={l:cfa_ref}]
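The paragraph above describes references as syntactic sugar over pointers. As a point of comparison only, and not the thesis's \CFA listing (which follows in Listing~\ref{l:cfa_ref}), here is a minimal C++ sketch of the trade-off being described: a reference removes the explicit dereferences, but a plain \CC reference, unlike a \CFA reference, cannot be rebound.

    #include <cassert>

    int main() {
        int x = 1, y = 10;

        int *p = &x;     // pointer: rebindable, but every use needs an explicit deref
        *p += 1;         // x == 2
        p = &y;          // rebinding is allowed
        *p += 1;         // y == 11

        int &r = x;      // C++ reference: implicit deref on use ...
        r += 1;          // x == 3
        // r = y;        // ... but this would assign y's value to x; the reference cannot be rebound

        assert( x == 3 && y == 11 );
    }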
doc/theses/colby_parsons_MMAth/text/actors.tex
r75d874a → r512d937c

    % C_TODO: add citations throughout chapter
-   Actors are a concurrent feature that abstracts threading away from a user, and instead provides \newterm{actors} and \newterm{messages} as building blocks for concurrency. This is a form of what is called \newterm{implicit concurrency}, where programmers write concurrent code without having to worry about explicit thread synchronization and mutual exclusion. The study of actors can be broken into two concepts, the \newterm{actor model}, which describes the model of computation and the \newterm{actor system}, which refers to the implementation of the model in practice. Before discussing \CFA's actor system in detail, it is important to first describe the actor model, and the classic approach to implementing an actor system.
+   Actors are a concurrent feature that abstracts threading away from a user, and instead provides \newterm{actors} and \newterm{messages} as building blocks for concurrency. Actors are another message-passing concurrency feature, similar to channels, but with more abstraction. Actors enter the realm of what is called \newterm{implicit concurrency}, where programmers can write concurrent code without having to worry about explicit thread synchronization and mutual exclusion. The study of actors can be broken into two concepts: the \newterm{actor model}, which describes the model of computation, and the \newterm{actor system}, which refers to the implementation of the model in practice. Before discussing \CFA's actor system in detail, it is important to first describe the actor model and the classic approach to implementing an actor system.

    \section{The Actor Model}
    …
    The benchmark uses input matrices $X$ and $Y$ that are both $3072$ by $3072$ in size. An actor is made for each row of $X$ and is passed via message the information needed to calculate a row of the result matrix $Z$.

-   Given that the bottleneck of the benchmark is the computation of the result matrix, it follows that the results in Figures~\ref{f:MatrixAMD} and \ref{f:MatrixIntel} are clustered closer than other experiments. In Figure~\ref{f:MatrixAMD} \uC and \CFA have identical performance and in Figure~\ref{f:MatrixIntel} \uC pulls ahead og \CFA after 24 cores likely due to costs associated with work stealing while hyperthreading. As mentioned in \label{s:executorPerf}, it is hypothesized that CAF performs better in this benchmark compared to others due to its eager work stealing implementation. In Figures~\ref{f:cfaMatrixAMD} and \ref{f:cfaMatrixIntel} there is little negligible performance difference across \CFA stealing heuristics.
+   Given that the bottleneck of the benchmark is the computation of the result matrix, it follows that the results in Figures~\ref{f:MatrixAMD} and \ref{f:MatrixIntel} are clustered closer than other experiments. In Figure~\ref{f:MatrixAMD} \uC and \CFA have identical performance, and in Figure~\ref{f:MatrixIntel} \uC pulls ahead of \CFA after 24 cores, likely due to costs associated with work stealing while hyperthreading. As mentioned in Section~\ref{s:executorPerf}, it is hypothesized that CAF performs better in this benchmark compared to others due to its eager work-stealing implementation. In Figures~\ref{f:cfaMatrixAMD} and \ref{f:cfaMatrixIntel} there is negligible performance difference across \CFA stealing heuristics.

    \begin{figure}
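For context on the benchmark's decomposition, the sketch below is a rough C++ std::thread rendering of the row-per-actor split described above (one worker computes one row of Z). It is illustrative only, not the \CFA actor implementation or the benchmark code, and the matrix size is a small placeholder rather than the 3072 by 3072 inputs.

    #include <cstddef>
    #include <thread>
    #include <vector>

    using Matrix = std::vector<std::vector<int>>;

    // One "actor" worth of work: compute row `row` of Z = X * Y.
    void multiply_row( const Matrix & X, const Matrix & Y, Matrix & Z, std::size_t row ) {
        const std::size_t inner = Y.size(), cols = Y[0].size();
        for ( std::size_t j = 0; j < cols; j += 1 ) {
            int sum = 0;
            for ( std::size_t k = 0; k < inner; k += 1 ) sum += X[row][k] * Y[k][j];
            Z[row][j] = sum;
        }
    }

    int main() {
        const std::size_t N = 64;     // placeholder size; the benchmark uses 3072
        Matrix X( N, std::vector<int>( N, 1 ) ), Y = X, Z( N, std::vector<int>( N, 0 ) );
        std::vector<std::thread> workers;
        for ( std::size_t i = 0; i < N; i += 1 )   // one worker per row, mirroring one actor per row
            workers.emplace_back( multiply_row, std::cref( X ), std::cref( Y ), std::ref( Z ), i );
        for ( auto & w : workers ) w.join();
    }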
doc/theses/colby_parsons_MMAth/text/frontpgs.tex
r75d874a → r512d937c

    \begin{center}\textbf{Abstract}\end{center}

-   Concurrent programs are notoriously hard to program and even harder to debug. Furthermore concurrent programs must be performant, as the introduction of concurrency into a program is often done to achieve some form of speedup. This thesis presents a suite of high level concurrent language features in \CFA, all of which are implemented with the aim of improving the performance, productivity, and safety of concurrent programs. \CFA is a non object-oriented programming language that extends C. The foundation for concurrency in \CFA was laid by Thierry Delisle, who implemented coroutines, user-level threads, and monitors\cite{Delisle18}. This thesis builds upon that groundwork and introduces a suite of concurrent features as its main contribution. The features include a diverse set of polymorphic locks, Go-like channels, mutex statements (similar to \CC scoped locks or Java synchronized statement), an actor system, and a Go-like select statement. The root idea behind these features are not new, but the \CFA implementations improve upon the original ideas in performance, productivity, and safety.
+   Concurrent programs are notoriously hard to program and even harder to debug. Furthermore, concurrent programs must be performant, as the introduction of concurrency into a program is often done to achieve some form of speedup. This thesis presents a suite of high-level concurrent language features in \CFA, all of which are implemented with the aim of improving the performance, productivity, and safety of concurrent programs. \CFA is a non-object-oriented programming language that extends C. The foundation for concurrency in \CFA was laid by Thierry Delisle, who implemented coroutines, user-level threads, and monitors\cite{Delisle18}. This thesis builds upon that groundwork and introduces a suite of concurrent features as its main contribution. The features include Go-like channels, mutex statements (similar to \CC scoped locks or the Java synchronized statement), an actor system, and a Go-like select statement. The root ideas behind these features are not new, but the \CFA implementations improve upon the original ideas in performance, productivity, and safety.

    \cleardoublepage
doc/theses/colby_parsons_MMAth/text/mutex_stmt.tex
r75d874a → r512d937c

    % ======================================================================
    % ======================================================================

+   (the entire chapter body below is newly added)

The mutex statement is a concurrent language feature that aims to support easy lock usage. The mutex statement takes the form of a clause and a following statement, similar to a loop or conditional statement. In the clause, the mutex statement accepts a number of \newterm{lockable} objects, and then locks them for the duration of the following statement. The locks are acquired in a deadlock-free manner and released using RAII. The mutex statement provides an avenue for easy lock usage in the common case where locks are used to wrap a critical section. Additionally, it provides the safety guarantee of deadlock-freedom, both by acquiring the locks in a deadlock-free manner and by ensuring that the locks are released on error or on normal program execution via RAII.

\begin{cfacode}[tabsize=3,caption={\CFA mutex statement usage},label={l:cfa_mutex_ex}]
owner_lock lock1, lock2, lock3;
int count = 0;
mutex( lock1, lock2, lock3 ) {
	// can use block statement
	// ...
}
mutex( lock2, lock3 ) count++;	// or inline statement
\end{cfacode}

\section{Other Languages}
There are concepts similar to the mutex statement in other languages. Java has a feature called a synchronized statement, which looks identical to \CFA's mutex statement, but it has some differences. The synchronized statement only accepts one item in its clause. Any object can be passed to the synchronized statement in Java, since all objects in Java are monitors, and the synchronized statement acquires that object's monitor. In \CC there is a feature in the \code{<mutex>} header called scoped\_lock, which is also similar to the mutex statement. The scoped\_lock is a class that takes in any number of locks in its constructor and acquires them in a deadlock-free manner. It then releases them when the scoped\_lock object is deallocated, thus using RAII. An example of \CC scoped\_lock usage is shown in Listing~\ref{l:cc_scoped_lock}.

\begin{cppcode}[tabsize=3,caption={\CC scoped\_lock usage},label={l:cc_scoped_lock}]
std::mutex lock1, lock2, lock3;
{
	std::scoped_lock s( lock1, lock2, lock3 );
	// locks are released via RAII at end of scope
}
\end{cppcode}

\section{\CFA implementation}
The \CFA mutex statement can be seen as a combination of the similar features in Java and \CC. It can acquire more than one lock in a deadlock-free manner and releases them via RAII like \CC; however, the syntax is identical to the Java synchronized statement. This syntactic choice was made so that the body of the mutex statement is its own scope. Compared to the scoped\_lock, which relies on its enclosing scope, the mutex statement's introduced scope can provide visual clarity as to what code is being protected by the mutex statement and where the mutual exclusion ends. \CFA's mutex statement and \CC's scoped\_lock both use parametric polymorphism to allow user-defined types to work with the feature. \CFA's implementation requires types to support the routines \code{lock()} and \code{unlock()}, whereas \CC requires those routines plus \code{try_lock()}. The scoped\_lock requires an additional routine since it differs from the mutex statement in how it implements deadlock avoidance.

The parametric polymorphism allows locking to be defined for types that may want convenient mutual exclusion. An example is \CFA's \code{sout}. \code{sout} is \CFA's output stream, similar to \CC's \code{cout}. \code{sout} has routines that match the mutex statement trait, so the mutex statement can be used to lock the output stream while producing output. In this case, the mutex statement allows the programmer to acquire mutual exclusion over an object without having to know the internals of the object or what locks it needs to acquire. This ability both improves safety and programmer productivity, since it abstracts away the concurrent details and provides an interface for optional thread safety. This is a commonly used feature when producing output from a concurrent context, since producing output is not thread safe by default. This use case is shown in Listing~\ref{l:sout}.

\begin{cfacode}[tabsize=3,caption={\CFA sout with mutex statement},label={l:sout}]
mutex( sout )
	sout | "This output is protected by mutual exclusion!";
\end{cfacode}

\section{Deadlock Avoidance}
The mutex statement uses the deadlock prevention technique of lock ordering, where the circular-wait condition of a deadlock cannot occur if all locks are acquired in the same order. The scoped\_lock uses a deadlock avoidance algorithm where all locks after the first are acquired using \code{try_lock}, and if any of the attempts to lock fails, all locks acquired so far are released. This repeats until all locks are acquired successfully. The deadlock avoidance algorithm used by scoped\_lock is shown in Listing~\ref{l:cc_deadlock_avoid}. The algorithm presented is taken straight from the \code{<mutex>} header source, with some renaming and comments for clarity.

\begin{cppcode}[tabsize=3,caption={\CC scoped\_lock deadlock avoidance algorithm},label={l:cc_deadlock_avoid}]
int first = 0;	// first lock to attempt to lock
do {
	// locks is the array of locks to acquire
	locks[first].lock();	// lock first lock
	for (int i = 1; i < Num_Locks; ++i) {	// iterate over rest of locks
		const int idx = (first + i) % Num_Locks;
		if (!locks[idx].try_lock()) {	// try lock each one
			for (int j = i; j != 0; --j)	// release all locks
				locks[(first + j - 1) % Num_Locks].unlock();
			first = idx;	// rotate which lock to acquire first
			break;
		}
	}
	// if first lock is still held then all have been acquired
} while (!locks[first].owns_lock());	// is first lock held?
\end{cppcode}

The algorithm in Listing~\ref{l:cc_deadlock_avoid} successfully avoids deadlock; however, there is a potential livelock scenario. Given two threads $A$ and $B$, which each create a scoped\_lock over two locks $L1$ and $L2$, a livelock can form as follows. Thread $A$ creates a scoped\_lock with the order $L1, L2$; thread $B$ creates a scoped\_lock with the order $L2, L1$. Both threads acquire the first lock in their order and then fail the try\_lock since the other lock is held. They then reset their start lock to be their second lock and try again. This time $A$ has the order $L2, L1$ and $B$ has the order $L1, L2$. This is identical to the starting setup, but with the ordering swapped between the threads. As such, if each thread acquires its first lock before the other acquires its second, they can livelock indefinitely.

The lock ordering algorithm used in the mutex statement in \CFA is both deadlock and livelock free. It sorts the locks based on memory address and then acquires them. For fewer than 7 locks, it sorts using hard-coded sorting methods that perform the minimum number of swaps for a given number of locks; for 7 or more locks, insertion sort is used. These sorting algorithms were chosen since it is rare to have to hold more than a handful of locks at a time. The downside of the sorting approach is that it is not fully compatible with usages of the same locks outside the mutex statement: if more than one of the locks held by a mutex statement is also to be held elsewhere, it must be acquired via the mutex statement, or else the required ordering is not guaranteed. Comparatively, if the scoped\_lock is used and the same locks are acquired elsewhere, there is no concern of the scoped\_lock deadlocking, due to its avoidance scheme, but it may livelock.

\begin{figure}
\centering
\begin{subfigure}{0.5\textwidth}
	\centering
	\scalebox{0.5}{\input{figures/nasus_Aggregate_Lock_2.pgf}}
	\subcaption{AMD}
\end{subfigure}\hfill
\begin{subfigure}{0.5\textwidth}
	\centering
	\scalebox{0.5}{\input{figures/pyke_Aggregate_Lock_2.pgf}}
	\subcaption{Intel}
\end{subfigure}

\begin{subfigure}{0.5\textwidth}
	\centering
	\scalebox{0.5}{\input{figures/nasus_Aggregate_Lock_4.pgf}}
	\subcaption{AMD}
\end{subfigure}\hfill
\begin{subfigure}{0.5\textwidth}
	\centering
	\scalebox{0.5}{\input{figures/pyke_Aggregate_Lock_4.pgf}}
	\subcaption{Intel}
\end{subfigure}

\begin{subfigure}{0.5\textwidth}
	\centering
	\scalebox{0.5}{\input{figures/nasus_Aggregate_Lock_8.pgf}}
	\subcaption{AMD}\label{f:mutex_bench8_AMD}
\end{subfigure}\hfill
\begin{subfigure}{0.5\textwidth}
	\centering
	\scalebox{0.5}{\input{figures/pyke_Aggregate_Lock_8.pgf}}
	\subcaption{Intel}\label{f:mutex_bench8_Intel}
\end{subfigure}
\caption{The aggregate lock benchmark comparing \CC scoped\_lock and \CFA mutex statement throughput (higher is better).}
\label{f:mutex_bench}
\end{figure}

\section{Performance}
Performance is compared between \CC's scoped\_lock and \CFA's mutex statement. Comparison with Java is omitted, since it only takes a single lock. To ensure that the comparison between \CC and \CFA exercises the implementation of each feature, an identical spinlock is implemented in each language using a set of builtin atomics available in both \CFA and \CC. Each feature is evaluated on a benchmark that acquires a fixed number of locks in a random order and then releases them. A baseline is included that acquires the locks directly, without a mutex statement or scoped\_lock, in a fixed ordering and then releases them. The baseline helps highlight the cost of the deadlock avoidance/prevention algorithms for each implementation. The benchmarks are run for a fixed duration of 10 seconds and then terminate and return the total number of times the group of locks was acquired. Each variation is run 11 times, across a variety of core counts up to 32 cores and with 2, 4, and 8 locks being acquired. The median is calculated and plotted alongside the 95\% confidence intervals for each point.

Figure~\ref{f:mutex_bench} shows the results of the benchmark. The baseline runs for both languages are mostly comparable, except for the 8-lock results in Figures~\ref{f:mutex_bench8_AMD} and \ref{f:mutex_bench8_Intel}, where the \CFA baseline is slower. \CFA's mutex statement achieves throughput that is orders of magnitude higher than \CC's scoped\_lock. This is likely due to the scoped\_lock deadlock avoidance implementation: since it uses a retry-based mechanism, it can take a long time for threads to progress. Additionally, the potential for livelock in the algorithm can result in very little throughput under high contention. It was observed on the AMD machine that with 32 threads and 8 locks the benchmarks would occasionally livelock indefinitely, with no threads making any progress for 3 hours before the experiment was terminated manually. It is likely that shorter bouts of livelock occurred in many of the experiments, which would explain the large confidence intervals for some of the data points in the \CC data. In Figures~\ref{f:mutex_bench8_AMD} and \ref{f:mutex_bench8_Intel} the mutex statement performs better than the baseline. At 7 locks and above, the mutex statement switches from a hard-coded sort to insertion sort. It is likely that the improvement in throughput compared to the baseline is due to the time spent in the insertion sort, which decreases contention on the locks.
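To make the lock-ordering idea above concrete, here is a small C++ sketch that sorts a group of locks by memory address before acquiring them. It illustrates the technique only and is not the \CFA implementation (which uses hard-coded minimal-swap sorts below 7 locks and insertion sort otherwise); the helper names ordered_lock and ordered_unlock are made up for the example.

    #include <algorithm>
    #include <functional>
    #include <mutex>
    #include <vector>

    // Acquire a group of locks in one global order (memory address) so the
    // circular-wait condition for deadlock cannot arise; release in reverse order.
    void ordered_lock( std::vector<std::mutex *> & locks ) {
        std::sort( locks.begin(), locks.end(), std::less<std::mutex *>{} );  // total order on addresses
        for ( std::mutex * m : locks ) m->lock();
    }

    void ordered_unlock( std::vector<std::mutex *> & locks ) {
        for ( auto it = locks.rbegin(); it != locks.rend(); ++it ) (*it)->unlock();
    }

    int main() {
        std::mutex a, b, c;
        std::vector<std::mutex *> group{ &b, &c, &a };  // any textual order is fine
        ordered_lock( group );    // critical section would go here
        ordered_unlock( group );
    }

As the chapter notes, address ordering only prevents deadlock if every multi-lock acquisition of the same locks goes through the same ordered path.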
doc/theses/colby_parsons_MMAth/thesis.tex
r75d874a → r512d937c

    \setlength{\topmargin}{-0.45in}     % move running title into header
    \setlength{\headsep}{0.25in}
+
+   \newsavebox{\myboxA}                % used with subfigure
+   \newsavebox{\myboxB}
    …
    % \input{intro}

-   \input{CFA_intro}
+   % \input{CFA_intro}

-   \input{CFA_concurrency}
+   % \input{CFA_concurrency}

-   \input{mutex_stmt}
+   % \input{mutex_stmt}

-   \input{actors}
+   \input{channels}
+
+   % \input{actors}

    \clearpage