Changes in / [c1e66d9:deda7e6]


Files:
13 added
4 deleted
54 edited

  • doc/theses/colby_parsons_MMAth/local.bib

    rc1e66d9 rdeda7e6  
    212212    year        = 2023,
    213213}
     214
     215@inproceedings{Harris02,
     216  title={A practical multi-word compare-and-swap operation},
     217  author={Harris, Timothy L and Fraser, Keir and Pratt, Ian A},
     218  booktitle={Distributed Computing: 16th International Conference, DISC 2002 Toulouse, France, October 28--30, 2002 Proceedings 16},
     219  pages={265--279},
     220  year={2002},
     221  organization={Springer}
     222}
     223
     224@misc{kotlin:channel,
     225  author = "Kotlin Documentation",
     226  title = "Channel",
     227  howpublished = {\url{https://kotlinlang.org/api/kotlinx.coroutines/kotlinx-coroutines-core/kotlinx.coroutines.channels/-channel/}},
     228  note = "[Online; accessed 11-September-2023]"
     229}
  • doc/theses/colby_parsons_MMAth/text/CFA_concurrency.tex

    rc1e66d9 rdeda7e6  
    11\chapter{Concurrency in \CFA}\label{s:cfa_concurrency}
    22
    3 The groundwork for concurrency in \CFA was laid by Thierry Delisle in his Master's Thesis~\cite{Delisle18}. 
    4 In that work, he introduced generators, coroutines, monitors, and user-level threading. 
    5 Not listed in that work were basic concurrency features needed as building blocks, such as locks, futures, and condition variables, which he also added to \CFA.
     3The groundwork for concurrency in \CFA was laid by Thierry Delisle in his Master's Thesis~\cite{Delisle18}.
     4In that work, he introduced generators, coroutines, monitors, and user-level threading.
     5Not listed in that work were basic concurrency features needed as building blocks, such as locks, futures, and condition variables.
    66
    77\section{Threading Model}\label{s:threading}
    8 \CFA provides user-level threading and supports an $M$:$N$ threading model where $M$ user threads are scheduled on $N$ kernel threads, where both $M$ and $N$ can be explicitly set by the user.
    9 Kernel threads are created by declaring a @processor@ structure.
    10 User-thread types are defined by creating a @thread@ aggregate-type, \ie replace @struct@ with @thread@.
    11 For each thread type a corresponding @main@ routine must be defined, which is where the thread starts running once it is created.
    12 Examples of \CFA  user thread and processor creation are shown in \VRef[Listing]{l:cfa_thd_init}.
    138
     9\CFA provides user-level threading and supports an $M$:$N$ threading model where $M$ user threads are scheduled on $N$ kernel threads and both $M$ and $N$ can be explicitly set by the programmer.
     10Kernel threads are created by declaring processor objects;
     11user threads are created by declaring thread objects.
     12\VRef[Listing]{l:cfa_thd_init} shows a typical example of creating a \CFA user-thread type, and then declaring processor ($N$) and thread ($M$) objects.
    1413\begin{cfa}[caption={Example of \CFA user thread and processor creation},label={l:cfa_thd_init}]
    15 @thread@ my_thread {...};                       $\C{// user thread type}$
    16 void @main@( my_thread & this ) {       $\C{// thread start routine}$
     14@thread@ my_thread {                            $\C{// user thread type (like structure)}$
     15        ... // arbitrary number of field declarations
     16};
     17void @main@( @my_thread@ & this ) {     $\C{// thread start routine}$
    1718        sout | "Hello threading world";
    1819}
    19 
    20 int main() {
     20int main() {                                            $\C{// program starts with a processor (kernel thread)}$
    2121        @processor@ p[2];                               $\C{// add 2 processors = 3 total with starting processor}$
    2222        {
    23                 my_thread t[2], * t3 = new();   $\C{// create 3 user threads, running in main routine}$
     23                @my_thread@ t[2], * t3 = new(); $\C{// create 2 stack-allocated and 1 dynamically-allocated user threads}$
    2424                ... // execute concurrently
    25                 delete( t3 );                           $\C{// wait for thread to end and deallocate}$
    26         } // wait for threads to end and deallocate
    27 }
     25                delete( t3 );                           $\C{// wait for t3 to end and deallocate}$
     26    } // wait for threads t[0] and t[1] to end and deallocate
     27} // deallocate additional kernel threads
    2828\end{cfa}
    29 
    30 When processors are added, they are added alongside the existing processor given to each program.
    31 Thus, for $N$ processors, allocate $N-1$ processors.
    32 A thread is implicitly joined at deallocation, either implicitly at block exit for stack allocation or explicitly at @delete@ for heap allocation.
    33 The thread performing the deallocation must wait for the thread to terminate before the deallocation can occur.
     29A thread type is defined using the aggregate kind @thread@.
     30For each thread type, a corresponding @main@ routine must be defined, which is where the thread starts running once a thread object is created.
     31The @processor@ declaration adds additional kernel threads alongside the existing processor given to each program.
     32Thus, for $N$ processors, allocate $N-1$ processors.
     33A thread is implicitly joined at deallocation, either implicitly at block exit for stack allocation or explicitly at @delete@ for heap allocation.
     34The thread performing the deallocation must wait for the thread to terminate before the deallocation can occur.
    3435A thread terminates by returning from the main routine where it starts.
    3536
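The threading model above can be approximated in plain C, though only loosely: the following sketch uses 1:1 POSIX threads, since C has no M:N user-level threading, and the names (`my_thread_main`, `run_demo`) are illustrative rather than \CFA identifiers.

```c
#include <pthread.h>
#include <stdio.h>

/* Rough POSIX analogue of the listing above -- a sketch, not \CFA: 1:1
   pthreads stand in for user threads, and the kernel supplies the
   "processors". */
static void *my_thread_main(void *arg) {   /* counterpart of main( my_thread & ) */
    (void)arg;
    printf("Hello threading world\n");
    return NULL;
}

int run_demo(void) {
    pthread_t t[3];                        /* 2 stack + 1 dynamic threads in the \CFA version */
    for (int i = 0; i < 3; i++)
        if (pthread_create(&t[i], NULL, my_thread_main, NULL) != 0)
            return -1;
    /* ... threads execute concurrently ... */
    for (int i = 0; i < 3; i++)
        pthread_join(t[i], NULL);          /* \CFA performs this join implicitly at deallocation */
    return 3;                              /* number of threads joined */
}
```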
    36 \section{Existing Concurrency Features}
     37\section{Existing and New Concurrency Features}
     38
    3739\CFA currently provides a suite of concurrency features including futures, locks, condition variables, generators, coroutines, and monitors.
    3840Examples of these features are omitted as most of them are the same as their counterparts in other languages.
    3941It is worthwhile to note that all concurrency features added to \CFA are made to be compatible with each other.
    40 The laundry list of features above and the ones introduced in this thesis can be used in the same program without issue.
     42The laundry list of features above and the ones introduced in this thesis can be used in the same program without issue, and the features are designed to interact in meaningful ways.
     43For example, a thread can interact with a monitor, which can interact with a coroutine, which can interact with a generator.
    4144
    4245Solving concurrent problems requires a diverse toolkit.
  • doc/theses/colby_parsons_MMAth/text/CFA_intro.tex

    rc1e66d9 rdeda7e6  
    99\CFA is a layer over C, is transpiled\footnote{Source to source translator.} to C, and is largely considered to be an extension of C.
    1010Beyond C, it adds productivity features, extended libraries, an advanced type-system, and many control-flow/concurrency constructions.
    11 However, \CFA stays true to the C programming style, with most code revolving around @struct@'s and routines, and respects the same rules as C.
     11However, \CFA stays true to the C programming style, with most code revolving around @struct@s and routines, and respects the same rules as C.
    1212\CFA is not object oriented as it has no notion of @this@ (receiver) and no structures with methods, but supports some object oriented ideas including constructors, destructors, and limited nominal inheritance.
    1313While \CFA is rich with interesting features, only the subset pertinent to this work is discussed here.
     
    1717References in \CFA are a layer of syntactic sugar over pointers to reduce the number of syntactic ref/deref operations needed with pointer usage.
    1818Pointers in \CFA differ from C and \CC in their use of @0p@ instead of C's @NULL@ or \CC's @nullptr@.
     19References can contain @0p@ in \CFA, which is the equivalent of a null reference.
    1920Examples of references are shown in \VRef[Listing]{l:cfa_ref}.
    2021
     
    6465This feature is also implemented in Pascal~\cite{Pascal}.
    6566It can exist as a stand-alone statement or wrap a routine body to expose aggregate fields.
     67If exposed fields share a name, the type system will attempt to disambiguate them based on type.
     68If the type system is unable to disambiguate the fields, the user must qualify those names to avoid a compilation error.
    6669Examples of the @with@ statement are shown in \VRef[Listing]{l:cfa_with}.
    6770
  • doc/theses/colby_parsons_MMAth/text/actors.tex

    rc1e66d9 rdeda7e6  
    77Actors are an indirect concurrent feature that abstracts threading away from a programmer, and instead provides \gls{actor}s and messages as building blocks for concurrency.
    88Hence, actors are in the realm of \gls{impl_concurrency}, where programmers write concurrent code without dealing with explicit thread creation or interaction.
    9 Actor message-passing is similar to channels, but with more abstraction, so there is no shared data to protect, making actors amenable in a distributed environment.
     9Actor message-passing is similar to channels, but with more abstraction, so there is no shared data to protect, making actors amenable to a distributed environment.
    1010Actors are often used for high-performance computing and other data-centric problems, where the ease of use and scalability of an actor system provides an advantage over channels.
    1111
     
    1414
    1515\section{Actor Model}
    16 The \Newterm{actor model} is a concurrent paradigm where computation is broken into units of work called actors, and the data for computation is distributed to actors in the form of messages~\cite{Hewitt73}.
     16The \Newterm{actor model} is a concurrent paradigm where an actor is used as the fundamental building-block for computation, and the data for computation is distributed to actors in the form of messages~\cite{Hewitt73}.
    1717An actor is composed of a \Newterm{mailbox} (message queue) and a set of \Newterm{behaviours} that receive from the mailbox to perform work.
    1818Actors execute asynchronously upon receiving a message and can modify their own state, make decisions, spawn more actors, and send messages to other actors.
     
    2222For example, mutual exclusion and locking are rarely relevant concepts in an actor model, as actors typically only operate on local state.
    2323
    24 An actor does not have a thread.
     24\subsection{Classic Actor System}
     25An implementation of the actor model with a theatre (group) of actors is called an \Newterm{actor system}.
     26Actor systems largely follow the actor model, but can differ in some ways.
     27
     28In an actor system, an actor does not have a thread.
    2529An actor is executed by an underlying \Newterm{executor} (kernel thread-pool) that fairly invokes each actor, where an actor invocation processes one or more messages from its mailbox.
    2630The default number of executor threads is often proportional to the number of computer cores to achieve good performance.
    2731An executor is often tunable with respect to the number of kernel threads and its scheduling algorithm, which optimize for specific actor applications and workloads \see{Section~\ref{s:ActorSystem}}.
    2832
    29 \subsection{Classic Actor System}
    30 An implementation of the actor model with a community of actors is called an \Newterm{actor system}.
    31 Actor systems largely follow the actor model, but can differ in some ways.
    3233While the semantics of message \emph{send} is asynchronous, the implementation may be synchronous or a combination.
    33 The default semantics for message \emph{receive} is \gls{fifo}, so an actor receives messages from its mailbox in temporal (arrival) order;
    34 however, messages sent among actors arrive in any order.
     34The default semantics for message \emph{receive} is \gls{fifo}, so an actor receives messages from its mailbox in temporal (arrival) order.
     35% however, messages sent among actors arrive in any order.
    3536Some actor systems provide priority-based mailboxes and/or priority-based message-selection within a mailbox, where custom message dispatchers search among or within a mailbox(es) with a predicate for specific kinds of actors and/or messages.
    36 Some actor systems provide a shared mailbox where multiple actors receive from a common mailbox~\cite{Akka}, which is contrary to the no-sharing design of the basic actor-model (and requires additional locking).
    37 For non-\gls{fifo} service, some notion of fairness (eventual progress) must exist, otherwise messages have a high latency or starve, \ie never received.
    38 Finally, some actor systems provide multiple typed-mailboxes, which then lose the actor-\lstinline{become} mechanism \see{Section~\ref{s:SafetyProductivity}}.
     37Some actor systems provide a shared mailbox where multiple actors receive from a common mailbox~\cite{Akka}, which is contrary to the no-sharing design of the basic actor-model (and may require additional locking).
     38For non-\gls{fifo} service, some notion of fairness (eventual progress) should exist, otherwise messages have a high latency or starve, \ie are never received.
     39% Finally, some actor systems provide multiple typed-mailboxes, which then lose the actor-\lstinline{become} mechanism \see{Section~\ref{s:SafetyProductivity}}.
    3940%While the definition of the actor model provides no restrictions on message ordering, actor systems tend to guarantee that messages sent from a given actor $i$ to actor $j$ arrive at actor $j$ in the order they were sent.
    4041Another way an actor system varies from the model is allowing access to shared global-state.
     
    6061Figure \ref{f:inverted_actor} shows an actor system designed as \Newterm{message-centric}, where a set of messages are scheduled and run on underlying executor threads~\cite{uC++,Nigro21}.
    6162This design is \Newterm{inverted} because actors belong to a message queue, whereas in the classic approach a message queue belongs to each actor.
    62 Now a message send must queries the actor to know which message queue to post the message.
     63Now a message send must query the actor to know which message queue to post the message to.
    6364Again, the simplest design has a single global queue of messages accessed by the executor threads, but this approach has the same contention problem by the executor threads.
    6465Therefore, the messages (mailboxes) are sharded and executor threads schedule each message, which points to its corresponding actor.
     
    176177        @actor | str_msg | int_msg;@                    $\C{// cascade sends}$
    177178        @actor | int_msg;@                                              $\C{// send}$
    178         @actor | finished_msg;@                                 $\C{// send => terminate actor (deallocation deferred)}$
     179        @actor | finished_msg;@                                 $\C{// send => terminate actor (builtin Poison-Pill)}$
    179180        stop_actor_system();                                    $\C{// waits until actors finish}\CRT$
    180181} // deallocate actor, int_msg, str_msg
     
    492493Each executor thread iterates over its own message queues until it finds one with messages.
    493494At this point, the executor thread atomically \gls{gulp}s the queue, meaning it moves the contents of message queue to a local queue of the executor thread.
     495Gulping moves the contents of the message queue as a batch rather than removing individual elements.
    494496An example of the queue gulping operation is shown in the right side of Figure \ref{f:gulp}, where an executor thread gulps queue 0 and begins to process it locally.
    495497This step allows the executor thread to process the local queue without any atomics until the next gulp.
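The gulping operation described above can be sketched as follows; the `mailbox`, `post`, and `gulp` names are assumptions for illustration, and the real implementation uses envelopes on sharded queues with a spinlock rather than a pthread mutex.

```c
#include <pthread.h>
#include <stddef.h>

/* Simplified sketch of gulping: under the queue lock, the executor detaches
   the entire FIFO list in O(1), then processes the batch locally with no
   further locking or atomics. */
typedef struct envelope { struct envelope *next; int payload; } envelope;
typedef struct { envelope *head, *tail; pthread_mutex_t lock; } mailbox;

void post(mailbox *mb, envelope *e) {      /* producer side: append one envelope */
    e->next = NULL;
    pthread_mutex_lock(&mb->lock);
    if (mb->tail) mb->tail->next = e; else mb->head = e;
    mb->tail = e;
    pthread_mutex_unlock(&mb->lock);
}

envelope *gulp(mailbox *mb) {              /* executor side: take the whole batch */
    pthread_mutex_lock(&mb->lock);
    envelope *batch = mb->head;            /* entire list moves in one step */
    mb->head = mb->tail = NULL;            /* mailbox is now empty */
    pthread_mutex_unlock(&mb->lock);
    return batch;
}
```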
     
    523525
    524526Since the copy queue is an array, envelopes are allocated first on the stack and then copied into the copy queue to persist until they are no longer needed.
    525 For many workloads, the copy queues grow in size to facilitate the average number of messages in flight and there are no further dynamic allocations.
     527For many workloads, the copy queues reallocate and grow to accommodate the average number of messages in flight, after which no further dynamic allocations occur.
    526528The downside of this approach is that more storage is allocated than needed, \ie each copy queue is only partially full.
    527529Comparatively, the individual envelope allocations of a list-based queue mean that the actor system always uses the minimum amount of heap space and cleans up eagerly.
     
    562564To ensure sequential actor execution and \gls{fifo} message delivery in a message-centric system, stealing requires finding and removing \emph{all} of an actor's messages, and inserting them consecutively in another message queue.
    563565This operation is $O(N)$ with a non-trivial constant.
    564 The only way for work stealing to become practical is to shard each worker's message queue, which also reduces contention, and to steal queues to eliminate queue searching.
     566The only way for work stealing to become practical is to shard each worker's message queue \see{Section~\ref{s:executor}}, which also reduces contention, and to steal queues to eliminate queue searching.
    565567
    566568Given queue stealing, the goal of the presented stealing implementation is to have an essentially zero-contention-cost stealing mechanism.
     
    576578
    577579The outline for lazy-stealing by a thief is: select a victim, scan its queues once, and return immediately if a queue is stolen.
    578 The thief then assumes it normal operation of scanning over its own queues looking for work, where stolen work is placed at the end of the scan.
     580The thief then assumes its normal operation of scanning over its own queues looking for work, where stolen work is placed at the end of the scan.
    579581Hence, only one victim is affected and there is a reasonable delay between stealing events as the thief scans its ready queue looking for its own work before potentially stealing again.
    580582This lazy examination by the thief has a low perturbation cost for victims, while still finding work in a moderately loaded system.
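One scheduling step of the lazy-stealing outline might look like the following sketch, where `scan_own_queues` and `try_swap_one_queue` are stand-ins (stubbed here) for the real executor operations:

```c
#include <stdbool.h>
#include <stdlib.h>

/* Hypothetical sketch of one step of lazy stealing: process own queues
   first; only when idle, pick one random victim, attempt one steal, and
   return immediately whether or not it succeeded. */
static bool scan_own_queues(int worker) { (void)worker; return false; }  /* stub: no local work */
static bool try_swap_one_queue(int thief, int victim) {                  /* stub: steal succeeds */
    (void)thief; (void)victim; return true;
}

bool worker_step(int id, int nworkers) {
    if (scan_own_queues(id)) return true;      /* normal operation: own work first */
    int victim = rand() % nworkers;            /* one victim per idle pass */
    if (victim == id) return false;            /* no self-steal; try again next pass */
    return try_swap_one_queue(id, victim);     /* at most one steal attempt, no retry */
}
```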
     
    636638% Note that a thief never exceeds its $M$/$N$ worker range because it is always exchanging queues with other workers.
    637639If no appropriate victim mailbox is found, no swap is attempted.
     640Note that since the mailbox checks happen non-atomically, the thieves effectively guess which mailbox is ripe for stealing.
     641The thief may read stale data and can end up stealing an ineligible or empty mailbox.
     642This is not a correctness issue, as discussed in Section~\ref{s:steal_prob}, but the steal will likely not be productive.
     643These unproductive steals do occur with some frequency, but are a tradeoff made to achieve minimal victim contention.
    638644
    639645\item
     
    644650\end{enumerate}
    645651
    646 \subsection{Stealing Problem}
     652\subsection{Stealing Problem}\label{s:steal_prob}
    647653Each queue access (send or gulp) involving any worker (thief or victim) is protected using spinlock @mutex_lock@.
    648654However, to achieve the goal of almost zero contention for the victim, it is necessary that the thief does not acquire any queue spinlocks in the stealing protocol.
     
    703709None of the work-stealing actor-systems examined in this work perform well on the repeat benchmark.
    704710Hence, for all non-pathological cases, the claim is made that this stealing mechanism has a (probabilistically) zero-victim-cost in practice.
     711Future work on the work stealing system could include a backoff mechanism after failed steals to further address the pathological cases.
    705712
    706713\subsection{Queue Pointer Swap}\label{s:swap}
     
    709716The \gls{cas} is a read-modify-write instruction available on most modern architectures.
    710717It atomically compares two memory locations, and if the values are equal, it writes a new value into the first memory location.
    711 A software implementation of \gls{cas} is:
     718A sequential specification of \gls{cas} is:
    712719\begin{cfa}
    713720// assume this routine executes atomically
     
    755762
    756763Either a true memory/memory swap instruction or a \gls{dcas} would provide the ability to atomically swap two memory locations, but unfortunately neither of these instructions are supported on the architectures used in this work.
     764There are lock-free implementations of \gls{dcas}, or more generally $K$-word CAS (also known as MCAS or CASN)~\cite{Harris02}, and LLX/SCX~\cite{Brown14}, that can be used to provide the desired atomic-swap capability.
     765However, these lock-free implementations were not used, as it is advantageous in the work-stealing case to let an attempted atomic swap fail instead of retrying.
    757766Hence, a novel atomic swap specific to the actor use case is simulated, called \gls{qpcas}.
     767Note that this swap is \emph{not} lock-free.
    758768The \gls{qpcas} is effectively a \gls{dcas} special cased in a few ways:
    759769\begin{enumerate}
     
    766776\end{cfa}
    767777\item
    768 The values swapped are never null pointers, so a null pointer can be used as an intermediate value during the swap.
     778The values swapped are never null pointers, so a null pointer can be used as an intermediate value during the swap. As such, null is effectively used as a lock for the swap.
    769779\end{enumerate}
    770780Figure~\ref{f:qpcasImpl} shows the \CFA pseudocode for the \gls{qpcas}.
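Since the figure is not reproduced in this changeset, the following is a simplified C11 sketch of the null-as-lock idea behind the queue-pointer swap; it is not the thesis's exact protocol, and `qpcas_swap` is an illustrative name.

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

/* Simplified sketch of the queue-pointer-swap idea: NULL acts as a
   temporary lock value (queue pointers are never legitimately NULL), and a
   failed CAS aborts the swap instead of retrying. */
typedef _Atomic(void *) qptr;

bool qpcas_swap(qptr *a, qptr *b) {
    void *olda = atomic_load(a);
    void *oldb = atomic_load(b);
    if (olda == NULL || oldb == NULL) return false;    /* another swap in progress */
    if (!atomic_compare_exchange_strong(a, &olda, NULL))
        return false;                                   /* lost the race on a */
    if (!atomic_compare_exchange_strong(b, &oldb, NULL)) {
        atomic_store(a, olda);                          /* roll back a and give up */
        return false;
    }
    /* both pointers are NULL and owned by this thief: complete the swap */
    atomic_store(b, olda);
    atomic_store(a, oldb);
    return true;
}
```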
     
    862872The concurrent proof of correctness is shown through the existence of an invariant.
    863873The invariant states that when a queue pointer is set to @0p@ by a thief, the next write to the pointer can only be performed by the same thief.
     874This is effectively a mutual-exclusion condition for the later write.
    864875To show that this invariant holds, it is shown that it is true at each step of the swap.
    865876\begin{itemize}
     
    10111022The intuition behind this heuristic is that the slowest worker receives help via work stealing until it becomes a thief, which indicates that it has caught up to the pace of the rest of the workers.
    10121023This heuristic should ideally result in lowered latency for message sends to victim workers that are overloaded with work.
     1024It must be acknowledged that this linear search could cause significant cache-coherence traffic.
     1025Future work on this heuristic could include introducing a search that has less impact on caching.
    10131026A negative side-effect of this heuristic is that if multiple thieves steal at the same time, they likely steal from the same victim, which increases the chance of contention.
    10141027However, given that workers have multiple queues, often in the tens or hundreds of queues, it is rare for two thieves to attempt stealing from the same queue.
     
    10281041\CFA's actor system comes with a suite of safety and productivity features.
    10291042Most of these features are only present in \CFA's debug mode, and hence, have zero-cost in no-debug mode.
    1030 The suit of features include the following.
     1043The suite of features includes the following.
    10311044\begin{itemize}
    10321045\item Static-typed message sends:
    1033 If an actor does not support receiving a given message type, the receive call is rejected at compile time, allowing unsupported messages to never be sent to an actor.
     1046If an actor does not support receiving a given message type, the receive call is rejected at compile time, preventing unsupported messages from being sent to an actor.
    10341047
    10351048\item Detection of message sends to Finished/Destroyed/Deleted actors:
     
    10421055
    10431056\item When an executor is configured, $M \geq N$.
    1044 That is, each worker must receive at least one mailbox queue, otherwise the worker spins and never does any work.
     1057That is, each worker must receive at least one mailbox queue, since a worker without a queue to pull messages from cannot receive any work.
    10451058
    10461059\item Detection of unsent messages:
     
    11001113\begin{list}{\arabic{enumi}.}{\usecounter{enumi}\topsep=5pt\parsep=5pt\itemsep=0pt}
    11011114\item
    1102 Supermicro SYS--6029U--TR4 Intel Xeon Gold 5220R 24--core socket, hyper-threading $\times$ 2 sockets (48 process\-ing units) 2.2GHz, running Linux v5.8.0--59--generic
    1103 \item
    1104 Supermicro AS--1123US--TR4 AMD EPYC 7662 64--core socket, hyper-threading $\times$ 2 sockets (256 processing units) 2.0 GHz, running Linux v5.8.0--55--generic
     1115Supermicro SYS--6029U--TR4 Intel Xeon Gold 5220R 24--core socket, hyper-threading $\times$ 2 sockets (96 process\-ing units), running Linux v5.8.0--59--generic
     1116\item
     1117Supermicro AS--1123US--TR4 AMD EPYC 7662 64--core socket, hyper-threading $\times$ 2 sockets (256 processing units), running Linux v5.8.0--55--generic
    11051118\end{list}
    11061119
     
    11121125All benchmarks are run 5 times and the median is taken.
    11131126Error bars showing the 95\% confidence intervals appear on each point in the graphs.
     1127The confidence intervals are calculated using bootstrapping to avoid normality assumptions.
    11141128If the confidence bars are small enough, they may be obscured by the data point.
    11151129In this section, \uC is compared to \CFA frequently, as the actor system in \CFA is heavily based on \uC's actor system.
  • doc/theses/colby_parsons_MMAth/text/channels.tex

    rc1e66d9 rdeda7e6  
    2020Neither Go nor \CFA channels have the restrictions of the early channel-based concurrent systems.
    2121
    22 Other popular languages and libraries that provide channels include C++ Boost~\cite{boost:channel}, Rust~\cite{rust:channel}, Haskell~\cite{haskell:channel}, and OCaml~\cite{ocaml:channel}.
     22Other popular languages and libraries that provide channels include C++ Boost~\cite{boost:channel}, Rust~\cite{rust:channel}, Haskell~\cite{haskell:channel}, OCaml~\cite{ocaml:channel}, and Kotlin~\cite{kotlin:channel}.
    2323Boost channels only support asynchronous (non-blocking) operations, and Rust channels are limited to only having one consumer per channel.
    2424Haskell channels are unbounded in size, and OCaml channels are zero-size.
    2525These restrictions in Haskell and OCaml are likely due to their functional approach, which results in them both using a list as the underlying data structure for their channel.
    2626These languages and libraries are not discussed further, as their channel implementation is not comparable to the bounded-buffer style channels present in Go and \CFA.
     27Kotlin channels are comparable to Go and \CFA, but unfortunately they were not identified as a comparator until after presentation of this thesis and are omitted due to time constraints.
    2728
    2829\section{Producer-Consumer Problem}
     
    3132In the problem, threads interact with a buffer in two ways: producing threads insert values into the buffer and consuming threads remove values from the buffer.
    3233In general, a buffer needs protection to ensure a producer only inserts into a non-full buffer and a consumer only removes from a non-empty buffer (synchronization).
    33 As well, a buffer needs protection from concurrent access by multiple producers or consumers attempting to insert or remove simultaneously (MX).
     34As well, a buffer needs protection from concurrent access by multiple producers or consumers attempting to insert or remove simultaneously; this protection is often provided by MX.
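A minimal bounded-buffer sketch of this synchronization and MX, using POSIX primitives with illustrative names (not the \CFA channel implementation):

```c
#include <pthread.h>

/* Minimal bounded-buffer sketch: insert waits while full, remove waits
   while empty (synchronization); one mutex provides MX among producers
   and consumers. */
#define BUFSIZE 4
typedef struct {
    int data[BUFSIZE];
    int front, back, count;
    pthread_mutex_t lock;
    pthread_cond_t nonfull, nonempty;
} buffer;

void buffer_init(buffer *b) {
    b->front = b->back = b->count = 0;
    pthread_mutex_init(&b->lock, NULL);
    pthread_cond_init(&b->nonfull, NULL);
    pthread_cond_init(&b->nonempty, NULL);
}

void buffer_insert(buffer *b, int v) {      /* producer */
    pthread_mutex_lock(&b->lock);
    while (b->count == BUFSIZE)             /* wait until non-full */
        pthread_cond_wait(&b->nonfull, &b->lock);
    b->data[b->back] = v;
    b->back = (b->back + 1) % BUFSIZE;
    b->count++;
    pthread_cond_signal(&b->nonempty);
    pthread_mutex_unlock(&b->lock);
}

int buffer_remove(buffer *b) {              /* consumer */
    pthread_mutex_lock(&b->lock);
    while (b->count == 0)                   /* wait until non-empty */
        pthread_cond_wait(&b->nonempty, &b->lock);
    int v = b->data[b->front];
    b->front = (b->front + 1) % BUFSIZE;
    b->count--;
    pthread_cond_signal(&b->nonfull);
    pthread_mutex_unlock(&b->lock);
    return v;
}
```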
    3435
    3536\section{Channel Size}\label{s:ChannelSize}
     
    4142Fixed sized (bounded) implies the communication is mostly asynchronous, \ie the producer can proceed up to the buffer size and vice versa for the consumer with respect to removal, at which point the producer/consumer would wait.
    4243\item
    43 Infinite sized (unbounded) implies the communication is asynchronous, \ie the producer never waits but the consumer waits when the buffer is empty.
    44 Since memory is finite, all unbounded buffers are ultimately bounded;
    45 this restriction must be part of its implementation.
     44Infinite sized (unbounded) implies the communication is asymmetrically asynchronous, \ie the producer never waits but the consumer waits when the buffer is empty.
    4645\end{enumerate}
    4746
     
    5049However, like MX, a buffer should ensure every value is eventually removed after some reasonable bounded time (no long-term starvation).
    5150The simplest way to prevent starvation is to implement the buffer as a queue, either with a cyclic array or linked nodes.
     51While \gls{fifo} is not required for correctness of the producer-consumer problem, it is a desired property in channels, as it gives users predictable channel ordering that is often relied upon.
    5252
    5353\section{First-Come First-Served}
    54 As pointed out, a bounded buffer requires MX among multiple producers or consumers.
     54As pointed out, a bounded buffer implementation often provides MX among multiple producers or consumers.
    5555This MX should be fair among threads, independent of the \gls{fifo} buffer being fair among values.
    5656Fairness among threads is called \gls{fcfs} and was defined by Lamport~\cite[p.~454]{Lamport74}.
     
    6666
    6767\section{Channel Implementation}\label{s:chan_impl}
    68 Currently, only the Go and Erlang programming languages provide user-level threading where the primary communication mechanism is channels.
    69 Both Go and Erlang have user-level threading and preemptive scheduling, and both use channels for communication.
    70 Go provides multiple homogeneous channels; each have a single associated type.
     68The programming languages Go, Kotlin, and Erlang provide user-level threading where the primary communication mechanism is channels.
     69These languages have user-level threading and preemptive scheduling, and all use channels for communication.
     70Go and Kotlin provide multiple homogeneous channels; each has a single associated type.
    7171Erlang, which is closely related to actor systems, provides one heterogeneous channel per thread (mailbox) with a typed receive pattern.
    72 Go encourages users to communicate via channels, but provides them as an optional language feature.
     72Go and Kotlin encourage users to communicate via channels, but provide them as an optional language feature.
    7373On the other hand, Erlang's single heterogeneous channel is a fundamental part of the threading system design; using it is unavoidable.
    74 Similar to Go, \CFA's channels are offered as an optional language feature.
     74Similar to Go and Kotlin, \CFA's channels are offered as an optional language feature.
    7575
    7676While iterating on channel implementation, experiments were conducted that varied the producer-consumer algorithm and lock type used inside the channel.
     
    8383The Go channel implementation utilizes cooperation among threads to achieve good performance~\cite{go:chan}.
    8484This cooperation only occurs when producers or consumers need to block due to the buffer being full or empty.
     85After a producer blocks, it must wait for a consumer to signal it, and vice versa.
     86The consumer or producer that signals a blocked thread is called the signalling thread.
    8587In these cases, a blocking thread stores its relevant data in a shared location and the signalling thread completes the blocking thread's operation before waking it;
    8688\ie the blocking thread has no work to perform after it unblocks because the signalling thread has done this work.
     
    8890First, each thread interacting with the channel only acquires and releases the internal channel lock once.
    8991As a result, contention on the internal lock is decreased; only entering threads compete for the lock since unblocking threads do not reacquire the lock.
    90 The other advantage of Go's wait-morphing approach is that it eliminates the bottleneck of waiting for signalled threads to run.
     92The other advantage of Go's wait-morphing approach is that it eliminates the need to wait for signalled threads to run.
    9193Note that the property of acquiring/releasing the lock only once can also be achieved with a different form of cooperation, called \Newterm{baton passing}.
    9294Baton passing occurs when one thread acquires a lock but does not release it, and instead signals a thread inside the critical section, conceptually ``passing'' the mutual exclusion from the signalling thread to the signalled thread.
     
    9496the wait-morphing approach has threads cooperate by completing the signalled thread's operation, thus removing a signalled thread's need for mutual exclusion after unblocking.
    9597While baton passing is useful in some algorithms, it results in worse channel performance than the Go approach.
    96 In the baton-passing approach, all threads need to wait for the signalled thread to reach the front of the ready queue, context switch, and run before other operations on the channel can proceed, since the signalled thread holds mutual exclusion;
     98In the baton-passing approach, all threads need to wait for the signalled thread to unblock and run before other operations on the channel can proceed, since the signalled thread holds mutual exclusion;
    9799in the wait-morphing approach, since the operation is completed before the signal, other threads can continue to operate on the channel without waiting for the signalled thread to run.
    98100
     
    154156Thus, improperly handled \gls{toctou} issues with channels often result in deadlocks as threads performing the termination may end up unexpectedly blocking in their attempt to help other threads exit the system.
    155157
     158\subsubsection{Go Channel Close}
    156159Go channels provide a set of tools to help with concurrent shutdown~\cite{go:chan} using a @close@ operation in conjunction with the \Go{select} statement.
    157160The \Go{select} statement is discussed in \ref{s:waituntil}, where \CFA's @waituntil@ statement is compared with the Go \Go{select} statement.
     
    175178Hence, due to Go's asymmetric approach to channel shutdown, separate synchronization between producers and consumers of a channel has to occur during shutdown.
    176179
    177 \paragraph{\CFA channels} have access to an extensive exception handling mechanism~\cite{Beach21}.
     180\subsubsection{\CFA Channel Close}
     181\CFA channels have access to an extensive exception handling mechanism~\cite{Beach21}.
    178182As such, \CFA uses an exception-based approach to channel shutdown that is symmetric for both producers and consumers, and supports graceful shutdown.
    179183
  • doc/theses/colby_parsons_MMAth/text/conclusion.tex

    rc1e66d9 rdeda7e6  
    55% ======================================================================
    66
    7 The goal of this thesis was to expand the concurrent support that \CFA offers to fill in gaps and support language users' ability to write safe and efficient concurrent programs.
    8 The presented features achieves this goal, and provides users with the means to write scalable programs in \CFA through multiple avenues.
    9 Additionally, the tools presented include safety and productivity features from deadlock detection, to detection of common programming errors, easy concurrent shutdown, and toggleable performance statistics.
    10 Programmers often have preferences between computing paradigms and concurrency is no exception.
    11 If users prefer the message passing paradigm of concurrency, \CFA now provides message passing utilities in the form of an actor system and channels.
    12 For shared memory concurrency, the mutex statement provides a safe and easy-to-use interface for mutual exclusion.
    13 The @waituntil@ statement aids in writing concurrent programs in both the message passing and shared memory paradigms of concurrency.
    14 Furthermore, no other language provides a synchronous multiplexing tool polymorphic over resources like \CFA's @waituntil@.
    15 This work successfully provides users with familiar concurrent language support, but with additional value added over similar utilities in other popular languages.
     7The goal of this thesis is to expand concurrent support in \CFA to fill in gaps and increase support for writing safe and efficient concurrent programs.
     8The presented features achieve this goal and provide users with the means to write scalable concurrent programs in \CFA through multiple avenues.
      9Additionally, the tools presented provide safety and productivity features, including detection of deadlock and other common concurrency errors, easy concurrent shutdown, and toggleable performance statistics.
    1610
    17 On overview of the contributions in this thesis include the following:
     11For locking, the mutex statement provides a safe and easy-to-use interface for mutual exclusion.
     12If programmers prefer the message-passing paradigm, \CFA now supports it in the form of channels and actors.
     13The @waituntil@ statement simplifies writing concurrent programs in both the message-passing and shared-memory paradigms of concurrency.
     14Finally, no other programming language provides a synchronous multiplexing tool that is polymorphic over resources like \CFA's @waituntil@.
     15This work successfully provides users with familiar concurrent-language support, but with additional value added over similar utilities in other popular languages.
     16
      17An overview of the contributions made in this thesis includes the following:
    1818\begin{enumerate}
    19 \item The mutex statement, which provides performant and deadlock-free multiple lock acquisition.
    20 \item Channels with comparable performance to Go, that have safety and productivity features including deadlock detection, and an easy-to-use exception-based channel @close@ routine.
    21 \item An in-memory actor system that achieved the lowest latency message send of systems tested due to the novel copy-queue data structure. The actor system presented has built-in detection of six common actor errors, and it has good performance compared to other systems on all benchmarks.
    22 \item A @waituntil@ statement which tackles the hard problem of allowing a thread to safely synch\-ronously wait for some set of concurrent resources.
     19\item The mutex statement, which provides performant and deadlock-free multi-lock acquisition.
     20\item Channels with comparable performance to Go, which have safety and productivity features including deadlock detection and an easy-to-use exception-based channel @close@ routine.
     21\item An in-memory actor system, which achieves the lowest latency message send of systems tested due to the novel copy-queue data structure.
     22\item As well, the actor system has built-in detection of six common actor errors, with excellent performance compared to other systems across all benchmarks presented in this thesis.
      23\item A @waituntil@ statement, which tackles the hard problem of allowing a thread to wait synchronously for an arbitrary set of concurrent resources.
    2324\end{enumerate}
    2425
    25 The features presented are commonly used in conjunction to solve concurrent problems.
    26 The @waituntil@ statement, the @mutex@ statement, and channels will all likely see use in a program where a thread operates as an administrator or server which accepts and distributes work among channels based on some shared state.
    27 The @mutex@ statement sees use across almost all concurrent code in \CFA, since it is used with the stream operator @sout@ to provide thread-safe output.
    28 While not yet implemented, the polymorphic support of the @waituntil@ statement could see use in conjunction with the actor system to enable user threads outside the actor system to wait for work to be done, or for actors to finish.
    29 A user of \CFA does not have to solely subscribe to the message passing or shared memory concurrent paradigm.
    30 As such, channels in \CFA are often used to pass pointers to shared memory that may still need mutual exclusion, requiring the @mutex@ statement to also be used.
     26The added features are now commonly used to solve concurrent problems in \CFA.
     27The @mutex@ statement sees use across almost all concurrent code in \CFA, as it is the simplest mechanism for providing thread-safe input and output.
     28The channels and the @waituntil@ statement see use in programs where a thread operates as a server or administrator, which accepts and distributes work among channels based on some shared state.
     29When implemented, the polymorphic support of the @waituntil@ statement will see use with the actor system to enable user threads outside the actor system to wait for work to be done or for actors to finish.
     30Finally, the new features are often combined, \eg channels pass pointers to shared memory that may still need mutual exclusion, requiring the @mutex@ statement to be used.
    3131
    3232From the novel copy-queue data structure in the actor system and the plethora of user-supporting safety features, all these utilities build upon existing concurrent tooling with value added.
    3333Performance results verify that each new feature is comparable or better than similar features in other programming languages.
    34 All in all, this suite of concurrent tools expands users' ability to easily write safe and performant multi-threaded programs in \CFA.
     34All in all, this suite of concurrent tools expands a \CFA programmer's ability to easily write safe and performant multi-threaded programs.
    3535
    3636\section{Future Work}
     
    4040This thesis only scratches the surface of implicit concurrency by providing an actor system.
    4141There is room for more implicit concurrency tools in \CFA.
    42 User-defined implicit concurrency in the form of annotated loops or recursive concurrent functions exists in many other languages and libraries~\cite{uC++,OpenMP}.
     42User-defined implicit concurrency in the form of annotated loops or recursive concurrent functions exists in other languages and libraries~\cite{uC++,OpenMP}.
    4343Similar implicit concurrency mechanisms could be implemented and expanded on in \CFA.
    4444Additionally, the problem of automatic parallelism of sequential programs via the compiler is an interesting research space that other languages have approached~\cite{wilson94,haskell:parallel} and could be explored in \CFA.
     
    4646\subsection{Advanced Actor Stealing Heuristics}
    4747
    48 In this thesis, two basic victim-selection heuristics are chosen when implementing the work stealing actor system.
    49 Good victim selection is an active area of work stealing research, especially when taking into account NUMA effects and cache locality~\cite{barghi18,wolke17}.
      48In this thesis, two basic victim-selection heuristics are chosen when implementing the work-stealing actor system.
     49Good victim selection is an active area of work-stealing research, especially when taking into account NUMA effects and cache locality~\cite{barghi18,wolke17}.
    5050The actor system in \CFA is modular and exploration of other victim-selection heuristics for queue stealing in \CFA could be provided by pluggable modules.
    5151Another question in work stealing is: when should a worker thread steal?
    52 Work stealing systems can often be too aggressive when stealing, causing the cost of the steal to be have a negative rather positive effect on performance.
      52Work-stealing systems can often be too aggressive when stealing, causing the cost of the steal to have a negative rather than positive effect on performance.
    5353Given that thief threads often have cycles to spare, there is room for more nuanced approaches when stealing.
    5454Finally, there is the very difficult problem of blocking and unblocking idle threads for workloads with extreme oscillations in CPU needs.
     
    5656\subsection{Synchronously Multiplexing System Calls}
    5757
    58 There are many tools that try to synchronously wait for or asynchronously check I/O, since improvements in this area pay dividends in many areas of computer science~\cite{linux:select,linux:poll,linux:epoll,linux:iouring}.
     58There are many tools that try to synchronously wait for or asynchronously check I/O.
      59Improvements in this area pay dividends in many areas of I/O-based programming~\cite{linux:select,linux:poll,linux:epoll,linux:iouring}.
    5960Research on improving user-space tools to synchronize over I/O and other system calls is an interesting area to explore in the world of concurrent tooling.
    6061Specifically, incorporating I/O into the @waituntil@ statement would allow a network server to work with multiple kinds of asynchronous I/O interconnects without using traditional event loops.
     
    6970The semantics and safety of these builtins require careful navigation, since they demand that the user have a deep understanding of concurrent memory-ordering models.
    7071Furthermore, these atomics also often require a user to understand how to fence appropriately to ensure correctness.
    71 All these problems and more could benefit from language support in \CFA.
     72All these problems and more would benefit from language support in \CFA.
    7273Adding good language support for atomics is a difficult problem, which if solved well, would allow for easier and safer writing of low-level concurrent code.
    7374
  • doc/theses/colby_parsons_MMAth/text/intro.tex

    rc1e66d9 rdeda7e6  
    55% ======================================================================
    66
    7 Concurrent programs are the wild west of programming because determinism and simple ordering of program operations go out the window. 
    8 To seize the reins and write performant and safe concurrent code, high-level concurrent-language features are needed. 
    9 Like any other craftsmen, programmers are only as good as their tools, and concurrent tooling and features are no exception. 
     7Concurrent programs are the wild west of programming because determinism and simple ordering of program operations go out the window.
     8To seize the reins and write performant and safe concurrent code, high-level concurrent-language features are needed.
     9Like any other craftsmen, programmers are only as good as their tools, and concurrent tooling and features are no exception.
    1010
    11 This thesis presents a suite of high-level concurrent-language features implemented in the new programming-language \CFA.
    12 These features aim to improve the performance of concurrent programs, aid in writing safe programs, and assist user productivity by improving the ease of concurrent programming.
    13 The groundwork for concurrent features in \CFA was implemented by Thierry Delisle~\cite{Delisle18}, who contributed the threading system, coroutines, monitors and other tools.
    14 This thesis builds on top of that foundation by providing a suite of high-level concurrent features.
    15 The features include a @mutex@ statement, channels, a @waituntil@ statement, and an actor system.
    16 All of these features exist in other programming languages in some shape or form, however this thesis extends the original ideas by improving performance, productivity, and safety.
     11This thesis presents a suite of high-level concurrent-language features implemented in the new programming-language \CFA.
     12These features aim to improve the performance of concurrent programs, aid in writing safe programs, and assist user productivity by improving the ease of concurrent programming.
     13The groundwork for concurrent features in \CFA was designed and implemented by Thierry Delisle~\cite{Delisle18}, who contributed the threading system, coroutines, monitors and other basic concurrency tools.
     14This thesis builds on top of that foundation by providing a suite of high-level concurrent features.
     15The features include a @mutex@ statement, channels, a @waituntil@ statement, and an actor system.
     16All of these features exist in other programming languages in some shape or form;
     17however, this thesis extends the original ideas by improving performance, productivity, and safety.
    1718
    1819\section{The Need For Concurrent Features}
    19 Asking a programmer to write a complex concurrent program without any concurrent language features is asking them to undertake a very difficult task.
    20 They would only be able to rely on the atomicity that their hardware provides and would have to build up from there.
    21 This would be like asking a programmer to write a complex sequential program only in assembly.
    22 Both are doable, but would often be easier and less error prone with higher level tooling.
    2320
    24 Concurrent programming has many pitfalls that are unique and do not show up in sequential code:
     21% Asking a programmer to write a complex concurrent program without any concurrent language features is asking them to undertake a very difficult task.
     22% They would only be able to rely on the atomicity that their hardware provides and would have to build up from there.
     23% This would be like asking a programmer to write a complex sequential program only in assembly.
     24% Both are doable, but would often be easier and less error prone with higher level tooling.
     25
     26Concurrent programming has many unique pitfalls that do not appear in sequential programming:
    2527\begin{enumerate}
    26 \item Deadlock, where threads cyclically wait on resources, blocking them indefinitely.
     28\item Race conditions, where thread orderings can result in arbitrary behaviours, resulting in correctness problems.
    2729\item Livelock, where threads constantly attempt a concurrent operation unsuccessfully, resulting in no progress being made.
    28 \item Race conditions, where thread orderings can result in differing behaviours and correctness of a program execution.
    29 \item Starvation, where threads may be deprived of access to some shared resource due to unfairness and never make progress.
     30\item Starvation, where \emph{some} threads constantly attempt a concurrent operation unsuccessfully, resulting in partial progress being made.
     31\item Deadlock, where some threads wait for an event that cannot occur, blocking them indefinitely, resulting in no progress being made.
    3032\end{enumerate}
    31 Even with the guiding hand of concurrent tools these pitfalls can still catch unwary programmers, but good language support can prevent, detect, and mitigate these problems.
      33Even with the guiding hand of concurrent tools, these pitfalls still catch unwary programmers, but good language support helps significantly to prevent, detect, and mitigate these problems.
    3234
    33 \section{A Brief Overview}
     35\section{Thesis Overview}
    3436
    35 The first chapter of this thesis aims to familiarize the reader with the language \CFA.
    36 In this chapter, syntax and features of the \CFA language that appear in this work are discussed The next chapter briefly discusses prior concurrency work in \CFA and how this work builds on top of existing features.
    37 The remaining chapters each introduce a concurrent language feature, discuss prior related work, and present contributions which are then benchmarked against other languages and systems.
    38 The first of these chapters discusses the @mutex@ statement, a language feature that improves ease of use and safety of lock usage.
    39 The @mutex@ statement is compared both in terms of safety and performance with similar tools in \CC and Java.
    40 The following chapter discusses channels, a message passing concurrency primitive that provides an avenue for safe synchronous and asynchronous communication across threads.
    41 Channels in \CFA are compared to Go, which popularized the use of channels in modern concurrent programs.
    42 The following chapter discusses the \CFA actor system.
    43 The \CFA actor system is a close cousin of channels, as it also belongs to the message passing paradigm of concurrency.
    44 However, the actor system provides a great degree of abstraction and ease of scalability, making it useful for a different range of problems than channels.
    45 The actor system in \CFA is compared with a variety of other systems on a suite of benchmarks, where it achieves significant performance gains over other systems due to its design.
    46 The final chapter discusses the \CFA @waituntil@ statement which provides the ability to synchronize while waiting for a resource, such as acquiring a lock, accessing a future, or writing to a channel.
    47 The @waituntil@ statement presented provides greater flexibility and expressibility than similar features in other languages.
    48 All in all, the features presented aim to fill in gaps in the current \CFA concurrent language support, and enable users to write a wider range of complex concurrent programs with ease.
     37Chapter~\ref{s:cfa} of this thesis aims to familiarize the reader with the language \CFA.
     38In this chapter, syntax and features of the \CFA language that appear in this work are discussed.
     39Chapter~\ref{s:cfa_concurrency} briefly discusses prior concurrency work in \CFA, and how the work in this thesis builds on top of the existing framework.
      40Each remaining chapter introduces an additional \CFA concurrent-language feature, discusses prior related work for the feature and extensions over it, and uses benchmarks to compare the performance of the feature with corresponding or similar features in other languages and systems.
     41
     42Chapter~\ref{s:mutexstmt} discusses the @mutex@ statement, a language feature that provides safe and simple lock usage.
     43The @mutex@ statement is compared both in terms of safety and performance with similar mechanisms in \CC and Java.
     44Chapter~\ref{s:channels} discusses channels, a message passing concurrency primitive that provides for safe synchronous and asynchronous communication among threads.
     45Channels in \CFA are compared to Go's channels, which popularized the use of channels in modern concurrent programs.
     46Chapter~\ref{s:actors} discusses the \CFA actor system.
     47An actor system is a close cousin of channels, as it also belongs to the message passing paradigm of concurrency.
     48However, an actor system provides a greater degree of abstraction and ease of scalability, making it useful for a different range of problems than channels.
     49The actor system in \CFA is compared with a variety of other systems on a suite of benchmarks.
     50Chapter~\ref{s:waituntil} discusses the \CFA @waituntil@ statement, which provides the ability to synchronize while waiting for a resource, such as acquiring a lock, accessing a future, or writing to a channel.
     51The \CFA @waituntil@ statement provides greater flexibility and expressibility than similar features in other languages.
     52All in all, the features presented aim to fill in gaps in the current \CFA concurrent-language support, enabling users to write a wider range of complex concurrent programs with ease.
    4953
    5054\section{Contributions}
    51 This work presents the following contributions:
    52 \begin{enumerate}
    53 \item The @mutex@ statement which:
    54 \begin{itemize}[itemsep=0pt]
    55 \item
    56 provides deadlock-free multiple lock acquisition,
    57 \item
    58 clearly denotes lock acquisition and release,
    59 \item
    60 and has good performance irrespective of lock ordering.
    61 \end{itemize}
    62 \item Channels which:
     55This work presents the following contributions within each of the additional language features:
     56\begin{enumerate}[leftmargin=*]
     57\item The @mutex@ statement that:
     58    \begin{itemize}[itemsep=0pt]
     59    \item
     60    provides deadlock-free multiple lock acquisition,
     61    \item
     62    clearly denotes lock acquisition and release,
     63    \item
     64    and has good performance irrespective of lock ordering.
     65    \end{itemize}
     66\item The channel that:
    6367\begin{itemize}[itemsep=0pt]
    6468    \item
     
    7175    and provides toggle-able statistics for performance tuning.
    7276\end{itemize}
    73 \item An in-memory actor system that:
     77\item The in-memory actor system that:
    7478\begin{itemize}[itemsep=0pt]
    7579    \item
     
    8286    gains performance through static-typed message sends, eliminating the need for dynamic dispatch,
    8387    \item
    84     introduces the copy queue, an array based queue specialized for the actor use case to minimize calls to the memory allocator,
     88    introduces the copy queue, an array-based queue specialized for the actor use-case to minimize calls to the memory allocator,
    8589    \item
    8690    has robust detection of six tricky, but common actor programming errors,
     
    9094    and provides toggle-able statistics for performance tuning.
    9195\end{itemize}
    92 
    93 \item A @waituntil@ statement which:
     96\item The @waituntil@ statement that:
    9497\begin{itemize}[itemsep=0pt]
    9598    \item
    9699    is the only known polymorphic synchronous multiplexing language feature,
    97100    \item
    98     provides greater expressibility of waiting conditions than other languages,
     101    provides greater expressibility for waiting conditions than other languages,
    99102    \item
    100     and achieves comparable performance to similar features in two other languages,
     103    and achieves comparable performance to similar features in two other languages.
    101104\end{itemize}
    102105\end{enumerate}
  • doc/theses/colby_parsons_MMAth/text/mutex_stmt.tex

    rc1e66d9 rdeda7e6  
    8383\end{figure}
    8484
    85 Like Java, \CFA monitors have \Newterm{multi-acquire} semantics so the thread in the monitor may acquire it multiple times without deadlock, allowing recursion and calling of other MX functions.
     85Like Java, \CFA monitors have \Newterm{multi-acquire} (reentrant locking) semantics so the thread in the monitor may acquire it multiple times without deadlock, allowing recursion and calling of other MX functions.
    8686For robustness, \CFA monitors ensure the monitor lock is released regardless of how an acquiring function ends, normal or exceptional, and returning a shared variable is safe via copying before the lock is released.
    8787Monitor objects can be passed through multiple helper functions without acquiring mutual exclusion, until a designated function associated with the object is called.
     
    104104}
    105105\end{cfa}
    106 The \CFA monitor implementation ensures multi-lock acquisition is done in a deadlock-free manner regardless of the number of MX parameters and monitor arguments. It it important to note that \CFA monitors do not attempt to solve the nested monitor problem~\cite{Lister77}.
     106The \CFA monitor implementation ensures multi-lock acquisition is done in a deadlock-free manner regardless of the number of MX parameters and monitor arguments via resource ordering.
      107It is important to note that \CFA monitors do not attempt to solve the nested monitor problem~\cite{Lister77}.
    107108
    108109\section{\lstinline{mutex} statement}
     
    165166In detail, the mutex statement has a clause and statement block, similar to a conditional or loop statement.
    166167The clause accepts any number of lockable objects (like a \CFA MX function prototype), and locks them for the duration of the statement.
    167 The locks are acquired in a deadlock free manner and released regardless of how control-flow exits the statement.
     168The locks are acquired in a deadlock-free manner and released regardless of how control-flow exits the statement.
     169Note that this deadlock-freedom has some limitations \see{\VRef{s:DeadlockAvoidance}}.
    168170The mutex statement provides easy lock usage in the common case of lexically wrapping a CS.
    169171Examples of \CFA mutex statement are shown in \VRef[Listing]{l:cfa_mutex_ex}.
     
    210212Like Java, \CFA introduces a new statement rather than building from existing language features, although \CFA has sufficient language features to mimic \CC RAII locking.
    211213This syntactic choice makes MX explicit rather than implicit via object declarations.
    212 Hence, it is easier for programmers and language tools to identify MX points in a program, \eg scan for all @mutex@ parameters and statements in a body of code.
     214Hence, it is easy for programmers and language tools to identify MX points in a program, \eg scan for all @mutex@ parameters and statements in a body of code; similar scanning can be done with Java's @synchronized@.
    213215Furthermore, concurrent safety is provided across an entire program for the complex operation of acquiring multiple locks in a deadlock-free manner.
    214216Unlike Java, \CFA's mutex statement and \CC's @scoped_lock@ both use parametric polymorphism to allow user-defined types to work with this feature.
     
    231233thread$\(_2\)$ : sout | "uvw" | "xyz";
    232234\end{cfa}
    233 any of the outputs can appear, included a segment fault due to I/O buffer corruption:
     235any of the outputs can appear:
    234236\begin{cquote}
    235237\small\tt
     
    260262mutex( sout ) { // acquire stream lock for sout for block duration
    261263        sout | "abc";
    262         mutex( sout ) sout | "uvw" | "xyz"; // OK because sout lock is recursive
     264        sout | "uvw" | "xyz";
    263265        sout | "def";
    264266} // implicitly release sout lock
    265267\end{cfa}
    266 The inner lock acquire is likely to occur through a function call that does a thread-safe print.
    267268
    268269\section{Deadlock Avoidance}\label{s:DeadlockAvoidance}
     
    309310For fewer than 7 locks ($2^3-1$), the sort is unrolled performing the minimum number of compare and swaps for the given number of locks;
    310311for 7 or more locks, insertion sort is used.
    311 Since it is extremely rare to hold more than 6 locks at a time, the algorithm is fast and executes in $O(1)$ time.
    312 Furthermore, lock addresses are unique across program execution, even for dynamically allocated locks, so the algorithm is safe across the entire program execution.
     312It is assumed to be rare to hold more than 6 locks at a time.
     313For 6 or fewer locks the algorithm is fast and executes in $O(1)$ time.
     314Furthermore, lock addresses are unique across program execution, even for dynamically allocated locks, so the algorithm is safe across the entire program execution, as long as lifetimes of objects are appropriately managed.
     315For example, deleting a lock and allocating another one could give the new lock the same address as the deleted one, however deleting a lock in use by another thread is a programming error irrespective of the usage of the @mutex@ statement.
    313316
    314317The downside to the sorting approach is that it is not fully compatible with manual use of the same locks outside the @mutex@ statement, \ie the locks are acquired without using the @mutex@ statement.
     
    338341\end{cquote}
    339342Comparatively, if the @scoped_lock@ is used and the same locks are acquired elsewhere, there is no concern of the @scoped_lock@ deadlocking, due to its avoidance scheme, but it may livelock.
    340 The convenience and safety of the @mutex@ statement, \ie guaranteed lock release with exceptions, should encourage programmers to always use it for locking, mitigating any deadlock scenario versus combining manual locking with the mutex statement.
     343The convenience and safety of the @mutex@ statement, \ie guaranteed lock release with exceptions, should encourage programmers to always use it for locking, mitigating most deadlock scenarios versus combining manual locking with the @mutex@ statement.
    341344Neither \CC nor \CFA provides deadlock guarantees for nested @scoped_lock@s or @mutex@ statements.
    342345To do so would require solving the nested monitor problem~\cite{Lister77}, which currently does not have any practical solutions.
     
    344347\section{Performance}
    345348Given the two multi-acquisition algorithms in \CC and \CFA, each with differing advantages and disadvantages, it is interesting to compare their performance.
    346 Comparison with Java is not possible, since it only takes a single lock.
     349Comparison with Java was not conducted, since the synchronized statement only takes a single object and does not provide deadlock avoidance or prevention.
    347350
    348351The comparison starts with a baseline that acquires the locks directly in a fixed order, without a @mutex@ statement or @scoped_lock@, and then releases them.
     
    356359Each variation is run 11 times on 2, 4, 8, 16, 24, 32 cores and with 2, 4, and 8 locks being acquired.
    357360The median is calculated and is plotted alongside the 95\% confidence intervals for each point.
     361The confidence intervals are calculated using bootstrapping to avoid normality assumptions.
    358362
    359363\begin{figure}
     
    388392}
    389393\end{cfa}
    390 \caption{Deadlock avoidance benchmark pseudocode}
     394\caption{Deadlock avoidance benchmark \CFA pseudocode}
    391395\label{l:deadlock_avoid_pseudo}
    392396\end{figure}
     
    396400% sudo dmidecode -t system
    397401\item
    398 Supermicro AS--1123US--TR4 AMD EPYC 7662 64--core socket, hyper-threading $\times$ 2 sockets (256 processing units) 2.0 GHz, TSO memory model, running Linux v5.8.0--55--generic, gcc--10 compiler
     402Supermicro AS--1123US--TR4 AMD EPYC 7662 64--core socket, hyper-threading $\times$ 2 sockets (256 processing units), TSO memory model, running Linux v5.8.0--55--generic, gcc--10 compiler
    399403\item
    400 Supermicro SYS--6029U--TR4 Intel Xeon Gold 5220R 24--core socket, hyper-threading $\times$ 2 sockets (48 processing units) 2.2GHz, TSO memory model, running Linux v5.8.0--59--generic, gcc--10 compiler
     404Supermicro SYS--6029U--TR4 Intel Xeon Gold 5220R 24--core socket, hyper-threading $\times$ 2 sockets (96 processing units), TSO memory model, running Linux v5.8.0--59--generic, gcc--10 compiler
    401405\end{list}
    402406%The hardware architectures are different in threading (multithreading vs hyper), cache structure (MESI or MESIF), NUMA layout (QPI vs HyperTransport), memory model (TSO vs WO), and energy/thermal mechanisms (turbo-boost).
     
    411415For example, on the AMD machine with 32 threads and 8 locks, the benchmarks would occasionally livelock indefinitely, with no threads making any progress for 3 hours before the experiment was terminated manually.
    412416It is likely that shorter bouts of livelock occurred in many of the experiments, which would explain large confidence intervals for some of the data points in the \CC data.
    413 In Figures~\ref{f:mutex_bench8_AMD} and \ref{f:mutex_bench8_Intel} there is the counter-intuitive result of the mutex statement performing better than the baseline.
     417In Figures~\ref{f:mutex_bench8_AMD} and \ref{f:mutex_bench8_Intel} there is the counter-intuitive result of the @mutex@ statement performing better than the baseline.
    414418At 7 locks and above the mutex statement switches from a hard coded sort to insertion sort, which should decrease performance.
    415419The hard coded sort is branch-free and constant-time and was verified to be faster than insertion sort for 6 or fewer locks.
    416 It is likely the increase in throughput compared to baseline is due to the delay spent in the insertion sort, which decreases contention on the locks.
    417 
     420Part of the difference in throughput compared to baseline is due to the delay spent in the insertion sort, which decreases contention on the locks.
     421This was verified to contribute to the difference in throughput by experimenting with varying NCS delays in the baseline; however, it comprises only a small portion of the difference.
     422It is possible that the baseline is slowed down or the @mutex@ statement is sped up by other factors that are not easily identifiable.
    418423
    419424\begin{figure}
  • doc/theses/colby_parsons_MMAth/text/waituntil.tex

    rc1e66d9 rdeda7e6  
    168168Go's @select@ has the same exclusive-or semantics as the ALT primitive from Occam and associated code blocks for each clause like ALT and Ada.
    169169However, unlike Ada and ALT, Go does not provide guards for the \lstinline[language=go]{case} clauses of the \lstinline[language=go]{select}.
    170 As such, the exponential blowup can be seen comparing Go and \uC in Figure~\label{f:AdaMultiplexing}.
     170As such, the exponential blowup can be seen comparing Go and \uC in Figure~\ref{f:AdaMultiplexing}.
    171171Go also provides a timeout via a channel and a @default@ clause like Ada @else@ for asynchronous multiplexing.
    172172
     
    519519In the following example, either channel @C1@ or @C2@ must be satisfied but nothing can be done for at least 1 or 3 seconds after the channel read, respectively.
    520520\begin{cfa}[deletekeywords={timeout}]
    521 waituntil( i << C1 ); and waituntil( timeout( 1`s ) );
    522 or waituntil( i << C2 ); and waituntil( timeout( 3`s ) );
     521waituntil( i << C1 ){} and waituntil( timeout( 1`s ) ){}
     522or waituntil( i << C2 ){} and waituntil( timeout( 3`s ) ){}
    523523\end{cfa}
    524524If only @C2@ is satisfied, \emph{both} timeout code-blocks trigger because 1 second occurs before 3 seconds.
     
    542542Now the unblocked WUT is guaranteed to have a satisfied resource and its code block can safely be executed.
    543543The insertion circumvents the channel buffer via the wait-morphing in the \CFA channel implementation \see{Section~\ref{s:chan_impl}}, allowing @waituntil@ channel unblocking to not be special-cased.
     544Note that all channel operations are fair and no preference is given between @waituntil@ and direct channel operations when unblocking.
    544545
    545546Furthermore, if both @and@ and @or@ operators are used, the @or@ operations stop behaving like exclusive-or due to the race among channel operations, \eg:
  • libcfa/prelude/extras.c

    rc1e66d9 rdeda7e6  
    33#include <uchar.h>                                      // char16_t, char32_t
    44#include <wchar.h>                                      // wchar_t
    5 #include <stdlib.h>                                     // malloc, free, exit, atexit, abort
     5#include <stdlib.h>                                     // malloc, free, getenv, exit, atexit, abort, printf
    66#include <stdio.h>                                      // printf
     7#include <string.h>                                     // strlen, strcmp, strncmp
  • libcfa/prelude/extras.regx2

    rc1e66d9 rdeda7e6  
    11extern void \*malloc[^;]*;
    22extern void free[^;]*;
     3extern char \*getenv[^;]*;
    34extern void exit[^;]*;
    45extern int atexit[^;]*;
    56extern void abort[^;]*;
    67extern int printf[^;]*;
     8int strcmp[^;]*;
     9int strncmp[^;]*;
     10size_t strlen[^;]*;
  • libcfa/src/Makefile.am

    rc1e66d9 rdeda7e6  
    1111## Created On       : Sun May 31 08:54:01 2015
    1212## Last Modified By : Peter A. Buhr
    13 ## Last Modified On : Wed Aug 30 21:22:45 2023
    14 ## Update Count     : 263
     13## Last Modified On : Mon Sep 18 17:06:56 2023
     14## Update Count     : 264
    1515###############################################################################
    1616
     
    118118        concurrency/mutex_stmt.hfa \
    119119        concurrency/channel.hfa \
    120         concurrency/actor.hfa 
     120        concurrency/actor.hfa
    121121
    122122inst_thread_headers_src = \
     
    130130        concurrency/mutex.hfa \
    131131        concurrency/select.hfa \
    132         concurrency/thread.hfa
     132        concurrency/thread.hfa \
     133        concurrency/cofor.hfa
    133134
    134135thread_libsrc = ${inst_thread_headers_src} ${inst_thread_headers_src:.hfa=.cfa} \
  • libcfa/src/clock.hfa

    rc1e66d9 rdeda7e6  
    1010// Created On       : Thu Apr 12 14:36:06 2018
    1111// Last Modified By : Peter A. Buhr
    12 // Last Modified On : Sun Apr 18 08:12:16 2021
    13 // Update Count     : 28
     12// Last Modified On : Sat Sep  9 14:07:17 2023
     13// Update Count     : 30
    1414//
    1515
     
    9191        // discontinuous jumps when the OS is not running the kernel thread. A duration is returned because the value is
    9292        // relative and cannot be converted to real-time (wall-clock) time.
    93         Duration processor() {                                                          // non-monotonic duration of kernel thread
     93        Duration processor_cpu() {                                                      // non-monotonic duration of kernel thread
    9494                timespec ts;
    9595                clock_gettime( CLOCK_THREAD_CPUTIME_ID, &ts );
    9696                return (Duration){ ts };
    97         } // processor
     97        } // processor_cpu
    9898
    9999        // Program CPU-time watch measures CPU time consumed by all processors (kernel threads) in the UNIX process.  This
    100100        // watch is affected by discontinuous jumps when the OS is not running the kernel threads. A duration is returned
    101101        // because the value is relative and cannot be converted to real-time (wall-clock) time.
    102         Duration program() {                                                            // non-monotonic duration of program CPU
     102        Duration program_cpu() {                                                        // non-monotonic duration of program CPU
    103103                timespec ts;
    104104                clock_gettime( CLOCK_PROCESS_CPUTIME_ID, &ts );
    105105                return (Duration){ ts };
    106         } // program
     106        } // program_cpu
    107107
    108108        // Monotonic duration from machine boot and including system suspension. This watch is unaffected by discontinuous
  • libcfa/src/collections/string.cfa

    rc1e66d9 rdeda7e6  
    157157// Comparison
    158158
    159 bool ?==?(const string & s, const string & other) {
    160     return *s.inner == *other.inner;
    161 }
    162 
    163 bool ?!=?(const string & s, const string & other) {
    164     return *s.inner != *other.inner;
    165 }
    166 
    167 bool ?==?(const string & s, const char * other) {
    168     return *s.inner == other;
    169 }
    170 
    171 bool ?!=?(const string & s, const char * other) {
    172     return *s.inner != other;
    173 }
     159int  cmp (const string &s1, const string &s2) { return cmp(*s1.inner ,  *s2.inner); }
     160bool ?==?(const string &s1, const string &s2) { return     *s1.inner == *s2.inner ; }
     161bool ?!=?(const string &s1, const string &s2) { return     *s1.inner != *s2.inner ; }
     162bool ?>? (const string &s1, const string &s2) { return     *s1.inner >  *s2.inner ; }
     163bool ?>=?(const string &s1, const string &s2) { return     *s1.inner >= *s2.inner ; }
     164bool ?<=?(const string &s1, const string &s2) { return     *s1.inner <= *s2.inner ; }
     165bool ?<? (const string &s1, const string &s2) { return     *s1.inner <  *s2.inner ; }
     166
     167int  cmp (const string &s1, const char*   s2) { return cmp(*s1.inner ,   s2      ); }
     168bool ?==?(const string &s1, const char*   s2) { return     *s1.inner ==  s2       ; }
     169bool ?!=?(const string &s1, const char*   s2) { return     *s1.inner !=  s2       ; }
     170bool ?>? (const string &s1, const char*   s2) { return     *s1.inner >   s2       ; }
     171bool ?>=?(const string &s1, const char*   s2) { return     *s1.inner >=  s2       ; }
     172bool ?<=?(const string &s1, const char*   s2) { return     *s1.inner <=  s2       ; }
     173bool ?<? (const string &s1, const char*   s2) { return     *s1.inner <   s2       ; }
     174
     175int  cmp (const char*   s1, const string &s2) { return cmp( s1       ,  *s2.inner); }
     176bool ?==?(const char*   s1, const string &s2) { return      s1       == *s2.inner ; }
     177bool ?!=?(const char*   s1, const string &s2) { return      s1       != *s2.inner ; }
     178bool ?>? (const char*   s1, const string &s2) { return      s1       >  *s2.inner ; }
     179bool ?>=?(const char*   s1, const string &s2) { return      s1       >= *s2.inner ; }
     180bool ?<=?(const char*   s1, const string &s2) { return      s1       <= *s2.inner ; }
     181bool ?<? (const char*   s1, const string &s2) { return      s1       <  *s2.inner ; }
     182
    174183
    175184////////////////////////////////////////////////////////
  • libcfa/src/collections/string.hfa

    rc1e66d9 rdeda7e6  
    116116
    117117// Comparisons
    118 bool ?==?(const string & s, const string & other);
    119 bool ?!=?(const string & s, const string & other);
    120 bool ?==?(const string & s, const char * other);
    121 bool ?!=?(const string & s, const char * other);
     118int  cmp (const string &, const string &);
     119bool ?==?(const string &, const string &);
     120bool ?!=?(const string &, const string &);
     121bool ?>? (const string &, const string &);
     122bool ?>=?(const string &, const string &);
     123bool ?<=?(const string &, const string &);
     124bool ?<? (const string &, const string &);
     125
     126int  cmp (const string &, const char*);
     127bool ?==?(const string &, const char*);
     128bool ?!=?(const string &, const char*);
     129bool ?>? (const string &, const char*);
     130bool ?>=?(const string &, const char*);
     131bool ?<=?(const string &, const char*);
     132bool ?<? (const string &, const char*);
     133
     134int  cmp (const char*, const string &);
     135bool ?==?(const char*, const string &);
     136bool ?!=?(const char*, const string &);
     137bool ?>? (const char*, const string &);
     138bool ?>=?(const char*, const string &);
     139bool ?<=?(const char*, const string &);
     140bool ?<? (const char*, const string &);
     141
    122142
    123143// Slicing
  • libcfa/src/collections/string_res.cfa

    rc1e66d9 rdeda7e6  
    637637// Comparisons
    638638
    639 
    640 bool ?==?(const string_res &s1, const string_res &s2) {
    641     return ByteCmp( s1.Handle.s, 0, s1.Handle.lnth, s2.Handle.s, 0, s2.Handle.lnth) == 0;
    642 }
    643 
    644 bool ?!=?(const string_res &s1, const string_res &s2) {
    645     return !(s1 == s2);
    646 }
    647 bool ?==?(const string_res &s, const char* other) {
    648     string_res sother = other;
    649     return s == sother;
    650 }
    651 bool ?!=?(const string_res &s, const char* other) {
    652     return !(s == other);
    653 }
     639int cmp(const string_res &s1, const string_res &s2) {
     641    int ans1 = memcmp(s1.Handle.s, s2.Handle.s, min(s1.Handle.lnth, s2.Handle.lnth));
     642    if (ans1 != 0) return ans1;
     643    return s1.Handle.lnth - s2.Handle.lnth;
     644}
     645
     646bool ?==?(const string_res &s1, const string_res &s2) { return cmp(s1, s2) == 0; }
     647bool ?!=?(const string_res &s1, const string_res &s2) { return cmp(s1, s2) != 0; }
     648bool ?>? (const string_res &s1, const string_res &s2) { return cmp(s1, s2) >  0; }
     649bool ?>=?(const string_res &s1, const string_res &s2) { return cmp(s1, s2) >= 0; }
     650bool ?<=?(const string_res &s1, const string_res &s2) { return cmp(s1, s2) <= 0; }
     651bool ?<? (const string_res &s1, const string_res &s2) { return cmp(s1, s2) <  0; }
     652
     653int cmp (const string_res &s1, const char* s2) {
     654    string_res s2x = s2;
     655    return cmp(s1, s2x);
     656}
     657
     658bool ?==?(const string_res &s1, const char* s2) { return cmp(s1, s2) == 0; }
     659bool ?!=?(const string_res &s1, const char* s2) { return cmp(s1, s2) != 0; }
     660bool ?>? (const string_res &s1, const char* s2) { return cmp(s1, s2) >  0; }
     661bool ?>=?(const string_res &s1, const char* s2) { return cmp(s1, s2) >= 0; }
     662bool ?<=?(const string_res &s1, const char* s2) { return cmp(s1, s2) <= 0; }
     663bool ?<? (const string_res &s1, const char* s2) { return cmp(s1, s2) <  0; }
     664
     665int cmp (const char* s1, const string_res & s2) {
     666    string_res s1x = s1;
     667    return cmp(s1x, s2);
     668}
     669
     670bool ?==?(const char* s1, const string_res &s2) { return cmp(s1, s2) == 0; }
     671bool ?!=?(const char* s1, const string_res &s2) { return cmp(s1, s2) != 0; }
     672bool ?>? (const char* s1, const string_res &s2) { return cmp(s1, s2) >  0; }
     673bool ?>=?(const char* s1, const string_res &s2) { return cmp(s1, s2) >= 0; }
     674bool ?<=?(const char* s1, const string_res &s2) { return cmp(s1, s2) <= 0; }
     675bool ?<? (const char* s1, const string_res &s2) { return cmp(s1, s2) <  0; }
     676
    654677
    655678
  • libcfa/src/collections/string_res.hfa

    rc1e66d9 rdeda7e6  
    142142
    143143// Comparisons
    144 bool ?==?(const string_res &s, const string_res &other);
    145 bool ?!=?(const string_res &s, const string_res &other);
    146 bool ?==?(const string_res &s, const char* other);
    147 bool ?!=?(const string_res &s, const char* other);
     144int  cmp (const string_res &, const string_res &);
     145bool ?==?(const string_res &, const string_res &);
     146bool ?!=?(const string_res &, const string_res &);
     147bool ?>? (const string_res &, const string_res &);
     148bool ?>=?(const string_res &, const string_res &);
     149bool ?<=?(const string_res &, const string_res &);
     150bool ?<? (const string_res &, const string_res &);
     151
     152int  cmp (const string_res &, const char*);
     153bool ?==?(const string_res &, const char*);
     154bool ?!=?(const string_res &, const char*);
     155bool ?>? (const string_res &, const char*);
     156bool ?>=?(const string_res &, const char*);
     157bool ?<=?(const string_res &, const char*);
     158bool ?<? (const string_res &, const char*);
     159
     160int  cmp (const char*, const string_res &);
     161bool ?==?(const char*, const string_res &);
     162bool ?!=?(const char*, const string_res &);
     163bool ?>? (const char*, const string_res &);
     164bool ?>=?(const char*, const string_res &);
     165bool ?<=?(const char*, const string_res &);
     166bool ?<? (const char*, const string_res &);
    148167
    149168// String search
  • libcfa/src/common.hfa

    rc1e66d9 rdeda7e6  
    6969        T min( T v1, T v2 ) { return v1 < v2 ? v1 : v2; }
    7070
    71         forall( T, Ts... | { T min( T, T ); T min( T, Ts ); } )
    72         T min( T v1, T v2, Ts vs ) { return min( min( v1, v2 ), vs ); }
     71        forall( T, Ts... | { T min( T, T ); T min( T, T, Ts ); } )
     72        T min( T v1, T v2, T v3, Ts vs ) { return min( min( v1, v2 ), v3, vs ); }
    7373
    7474        forall( T | { int ?>?( T, T ); } )
    7575        T max( T v1, T v2 ) { return v1 > v2 ? v1 : v2; }
    7676
    77         forall( T, Ts... | { T max( T, T ); T max( T, Ts ); } )
    78         T max( T v1, T v2, Ts vs ) { return max( max( v1, v2 ), vs ); }
     77        forall( T, Ts... | { T max( T, T ); T max( T, T, Ts ); } )
     78        T max( T v1, T v2, T v3, Ts vs ) { return max( max( v1, v2 ), v3, vs ); }
    7979
    8080        forall( T | { T min( T, T ); T max( T, T ); } )
  • libcfa/src/concurrency/coroutine.cfa

    rc1e66d9 rdeda7e6  
    1010// Created On       : Mon Nov 28 12:27:26 2016
    1111// Last Modified By : Peter A. Buhr
    12 // Last Modified On : Thu Feb 16 15:34:46 2023
    13 // Update Count     : 24
     12// Last Modified On : Mon Sep 18 21:47:12 2023
     13// Update Count     : 25
    1414//
    1515
     
    364364// resume non local exception at receiver (i.e. enqueue in ehm buffer)
    365365forall(exceptT *, T & | ehm_resume_at( exceptT, T ))
    366 void resumeAt( T & receiver, exceptT & ex )  libcfa_public {
     366void resumeAt( T & receiver, exceptT & ex ) libcfa_public {
    367367    coroutine$ * cor = get_coroutine( receiver );
    368368    nonlocal_exception * nl_ex = alloc();
  • libcfa/src/concurrency/kernel/cluster.hfa

    rc1e66d9 rdeda7e6  
    3131
    3232// warn normally all ints
    33 #define warn_large_before warnf( !strict || old_avg < 33_000_000_000, "Suspiciously large previous average: %'llu (%llx), %'" PRId64 "ms \n", old_avg, old_avg, program()`ms )
    34 #define warn_large_after warnf( !strict || ret < 33_000_000_000, "Suspiciously large new average after %'" PRId64 "ms cputime: %'llu (%llx) from %'llu-%'llu (%'llu, %'llu) and %'llu\n", program()`ms, ret, ret, currtsc, intsc, new_val, new_val / 1000000, old_avg )
     33#define warn_large_before warnf( !strict || old_avg < 33_000_000_000, "Suspiciously large previous average: %'llu (%llx), %'" PRId64 "ms \n", old_avg, old_avg, program_cpu()`ms )
     34#define warn_large_after warnf( !strict || ret < 33_000_000_000, "Suspiciously large new average after %'" PRId64 "ms cputime: %'llu (%llx) from %'llu-%'llu (%'llu, %'llu) and %'llu\n", program_cpu()`ms, ret, ret, currtsc, intsc, new_val, new_val / 1000000, old_avg )
    3535
    3636// 8X linear factor is just 8 * x
     
    4242static inline __readyQ_avg_t __to_readyQ_avg(unsigned long long intsc) { if(unlikely(0 == intsc)) return 0.0; else return log2((__readyQ_avg_t)intsc); }
    4343
    44 #define warn_large_before warnf( !strict || old_avg < 35.0, "Suspiciously large previous average: %'lf, %'" PRId64 "ms \n", old_avg, program()`ms )
    45 #define warn_large_after warnf( !strict || ret < 35.3, "Suspiciously large new average after %'" PRId64 "ms cputime: %'lf from %'llu-%'llu (%'llu, %'llu) and %'lf\n", program()`ms, ret, currtsc, intsc, new_val, new_val / 1000000, old_avg ); \
     44#define warn_large_before warnf( !strict || old_avg < 35.0, "Suspiciously large previous average: %'lf, %'" PRId64 "ms \n", old_avg, program_cpu()`ms )
     45#define warn_large_after warnf( !strict || ret < 35.3, "Suspiciously large new average after %'" PRId64 "ms cputime: %'lf from %'llu-%'llu (%'llu, %'llu) and %'lf\n", program_cpu()`ms, ret, currtsc, intsc, new_val, new_val / 1000000, old_avg ); \
    4646verify(ret >= 0)
    4747
  • libcfa/src/heap.cfa

    rc1e66d9 rdeda7e6  
    1010// Created On       : Tue Dec 19 21:58:35 2017
    1111// Last Modified By : Peter A. Buhr
    12 // Last Modified On : Wed Aug  2 18:48:30 2023
    13 // Update Count     : 1614
     12// Last Modified On : Mon Sep 11 11:21:10 2023
     13// Update Count     : 1615
    1414//
    1515
     
    691691        return stats;
    692692} // collectStats
     693
     694static inline void clearStats() {
     695        lock( mgrLock );
     696
     697        // Zero the heap master and all active thread heaps.
     698        HeapStatisticsCtor( heapMaster.stats );
     699        for ( Heap * heap = heapMaster.heapManagersList; heap; heap = heap->nextHeapManager ) {
     700                HeapStatisticsCtor( heap->stats );
     701        } // for
     702
     703        unlock( mgrLock );
     704} // clearStats
    693705#endif // __STATISTICS__
    694706
     
    15561568
    15571569
     1570        // Zero the heap master and all active thread heaps.
     1571        void malloc_stats_clear() {
     1572                #ifdef __STATISTICS__
     1573                clearStats();
     1574                #else
     1575                #define MALLOC_STATS_MSG "malloc_stats statistics disabled.\n"
     1576                if ( write( STDERR_FILENO, MALLOC_STATS_MSG, sizeof( MALLOC_STATS_MSG ) - 1 /* size includes '\0' */ ) == -1 ) {
     1577                        abort( "**** Error **** write failed in malloc_stats" );
     1578                } // if
     1579                #endif // __STATISTICS__
     1580        } // malloc_stats_clear
     1581
     1582
    15581583        // Changes the file descriptor where malloc_stats() writes statistics.
    15591584        int malloc_stats_fd( int fd __attribute__(( unused )) ) libcfa_public {
  • libcfa/src/heap.hfa

    rc1e66d9 rdeda7e6  
    1010// Created On       : Tue May 26 11:23:55 2020
    1111// Last Modified By : Peter A. Buhr
    12 // Last Modified On : Tue Oct  4 19:08:55 2022
    13 // Update Count     : 23
     12// Last Modified On : Mon Sep 11 11:18:18 2023
     13// Update Count     : 24
    1414//
    1515
     
    4343        size_t malloc_mmap_start();                                                     // crossover allocation size from sbrk to mmap
    4444        size_t malloc_unfreed();                                                        // heap unfreed size (bytes)
     45        void malloc_stats_clear();                                                      // clear heap statistics
    4546} // extern "C"
    4647
  • libcfa/src/iostream.cfa

    rc1e66d9 rdeda7e6  
     1
    12//
    23// Cforall Version 1.0.0 Copyright (C) 2015 University of Waterloo
     
    976977                if ( f.flags.ignore ) { fmtstr[1] = '*'; start += 1; }
    977978                // no maximum width necessary because text ignored => width is read width
    978                 if ( f.wd != -1 ) { start += sprintf( &fmtstr[start], "%d", f.wd ); }
     979                if ( f.wd != -1 ) {
     980                        // wd is buffer bytes available (for input chars + null terminator)
     981                        // rwd is count of input chars
     982                        int rwd = f.flags.rwd ? f.wd : (f.wd - 1);
     983                        start += sprintf( &fmtstr[start], "%d", rwd );
     984                }
    979985
    980986                if ( ! scanset ) {
     
    993999                } // if
    9941000
    995                 int check = f.wd - 1;
     1001                int check = f.wd - 2;
    9961002                if ( ! f.flags.rwd ) f.s[check] = '\0';                 // insert sentinel
    9971003                len = fmt( is, fmtstr, f.s );
  • src/AST/Util.cpp

    rc1e66d9 rdeda7e6  
    104104        }
    105105        assertf( false, "Member not found." );
     106}
     107
     108template<typename node_t>
     109void oneOfExprOrType( const node_t * node ) {
     110        if ( node->expr ) {
     111                assertf( node->expr && !node->type, "Exactly one of expr or type should be set." );
     112        } else {
     113                assertf( !node->expr && node->type, "Exactly one of expr or type should be set." );
     114        }
    106115}
    107116
     
    152161        }
    153162
     163        void previsit( const SizeofExpr * node ) {
     164                previsit( (const ParseNode *)node );
     165                oneOfExprOrType( node );
     166        }
     167
     168        void previsit( const AlignofExpr * node ) {
     169                previsit( (const ParseNode *)node );
     170                oneOfExprOrType( node );
     171        }
     172
    154173        void previsit( const StructInstType * node ) {
    155174                previsit( (const Node *)node );
     
    181200/// referring to is in scope by the structural rules of code.
    182201// Any escapes marked with a bug should be removed once the bug is fixed.
     202// This is a separate pass because it changes the visit pattern and
     203// must always be run on the entire translation unit.
    183204struct InScopeCore : public ast::WithShortCircuiting {
    184205        ScopedSet<DeclWithType const *> typedDecls;
  • src/ControlStruct/MultiLevelExit.cpp

    rc1e66d9 rdeda7e6  
    1010// Created On       : Mon Nov  1 13:48:00 2021
    1111// Last Modified By : Andrew Beach
    12 // Last Modified On : Wed Sep  6 12:00:00 2023
    13 // Update Count     : 35
     12// Last Modified On : Fri Sep  8 17:04:00 2023
     13// Update Count     : 36
    1414//
    1515
     
    2727
    2828namespace {
     29
     30/// The return context is used to remember if returns are allowed and if
     31/// not, why not. It is the nearest local control flow blocking construct.
     32enum ReturnContext {
     33        MayReturn,
     34        InTryWithHandler,
     35        InResumeHandler,
     36        InTerminateHandler,
     37        InFinally,
     38};
    2939
    3040class Entry {
     
    126136        void previsit( const TryStmt * );
    127137        void postvisit( const TryStmt * );
     138        void previsit( const CatchClause * );
    128139        void previsit( const FinallyClause * );
    129140
     
    134145        vector<Entry> enclosing_control_structures;
    135146        Label break_label;
    136         bool inFinally;
     147        ReturnContext ret_context;
    137148
    138149        template<typename LoopNode>
     
    144155                const list<ptr<Stmt>> & kids, bool caseClause );
    145156
     157        void enterSealedContext( ReturnContext );
     158
    146159        template<typename UnaryPredicate>
    147160        auto findEnclosingControlStructure( UnaryPredicate pred ) {
     
    157170MultiLevelExitCore::MultiLevelExitCore( const LabelToStmt & lt ) :
    158171        target_table( lt ), break_label( CodeLocation(), "" ),
    159         inFinally( false )
     172        ret_context( ReturnContext::MayReturn )
    160173{}
    161174
     
    488501
    489502void MultiLevelExitCore::previsit( const ReturnStmt * stmt ) {
    490         if ( inFinally ) {
    491                 SemanticError( stmt->location, "'return' may not appear in a finally clause" );
    492         }
     503        char const * context;
     504        switch ( ret_context ) {
     505        case ReturnContext::MayReturn:
     506                return;
     507        case ReturnContext::InTryWithHandler:
     508                context = "try statement with a catch clause";
     509                break;
     510        case ReturnContext::InResumeHandler:
     511                context = "catchResume clause";
     512                break;
     513        case ReturnContext::InTerminateHandler:
     514                context = "catch clause";
     515                break;
     516        case ReturnContext::InFinally:
     517                context = "finally clause";
     518                break;
     519        default:
     520                assert(0);
     521        }
     522        SemanticError( stmt->location, toString( "'return' may not appear in a ", context ) );
    493523}
    494524
     
    500530                GuardAction([this](){ enclosing_control_structures.pop_back(); } );
    501531        }
     532
     533        // Try statements/try blocks are only sealed with a termination handler.
     534        for ( auto clause : stmt->handlers ) {
     535                if ( ast::Terminate == clause->kind ) {
     536                        return enterSealedContext( ReturnContext::InTryWithHandler );
     537                }
     538        }
    502539}
    503540
     
    512549}
    513550
     551void MultiLevelExitCore::previsit( const CatchClause * clause ) {
     552        ReturnContext context = ( ast::Terminate == clause->kind )
     553                ? ReturnContext::InTerminateHandler : ReturnContext::InResumeHandler;
     554        enterSealedContext( context );
     555}
     556
    514557void MultiLevelExitCore::previsit( const FinallyClause * ) {
    515         GuardAction([this, old = std::move( enclosing_control_structures)](){ enclosing_control_structures = std::move(old); });
    516         enclosing_control_structures = vector<Entry>();
    517         GuardValue( inFinally ) = true;
     558        enterSealedContext( ReturnContext::InFinally );
    518559}
    519560
     
    617658}
    618659
     660void MultiLevelExitCore::enterSealedContext( ReturnContext enter_context ) {
     661        GuardAction([this, old = std::move(enclosing_control_structures)](){ enclosing_control_structures = std::move(old); });
     662        enclosing_control_structures = vector<Entry>();
     663        GuardValue( ret_context ) = enter_context;
     664}
     665
    619666} // namespace
    620667
  • src/GenPoly/GenPoly.cc

    rc1e66d9 rdeda7e6  
    4848                }
    4949
    50                 bool hasPolyParams( const std::vector<ast::ptr<ast::Expr>> & params, const ast::TypeSubstitution * env) {
    51                         for (auto &param : params) {
    52                                 auto paramType = param.strict_as<ast::TypeExpr>();
    53                                 if (isPolyType(paramType->type, env)) return true;
     50                bool hasPolyParams( const std::vector<ast::ptr<ast::Expr>> & params, const ast::TypeSubstitution * env ) {
     51                        for ( auto &param : params ) {
     52                                auto paramType = param.as<ast::TypeExpr>();
     53                                assertf( paramType, "Aggregate parameters should be type expressions" );
     54                                if ( isPolyType( paramType->type, env ) ) return true;
    5455                        }
    5556                        return false;
     
    6263                                assertf(paramType, "Aggregate parameters should be type expressions");
    6364                                if ( isPolyType( paramType->get_type(), tyVars, env ) ) return true;
     65                        }
     66                        return false;
     67                }
     68
     69                bool hasPolyParams( const std::vector<ast::ptr<ast::Expr>> & params, const TypeVarMap & typeVars, const ast::TypeSubstitution * env ) {
     70                        for ( auto & param : params ) {
     71                                auto paramType = param.as<ast::TypeExpr>();
     72                                assertf( paramType, "Aggregate parameters should be type expressions" );
     73                                if ( isPolyType( paramType->type, typeVars, env ) ) return true;
    6474                        }
    6575                        return false;
     
    185195        }
    186196
    187         const ast::Type * isPolyType(const ast::Type * type, const TyVarMap & tyVars, const ast::TypeSubstitution * env) {
    188                 type = replaceTypeInst( type, env );
    189 
    190                 if ( auto typeInst = dynamic_cast< const ast::TypeInstType * >( type ) ) {
    191                         if ( tyVars.contains( typeInst->typeString() ) ) return type;
    192                 } else if ( auto arrayType = dynamic_cast< const ast::ArrayType * >( type ) ) {
    193                         return isPolyType( arrayType->base, env );
    194                 } else if ( auto structType = dynamic_cast< const ast::StructInstType* >( type ) ) {
    195                         if ( hasPolyParams( structType->params, env ) ) return type;
    196                 } else if ( auto unionType = dynamic_cast< const ast::UnionInstType* >( type ) ) {
    197                         if ( hasPolyParams( unionType->params, env ) ) return type;
    198                 }
    199                 return nullptr;
    200         }
    201 
    202197const ast::Type * isPolyType( const ast::Type * type,
    203198                const TypeVarMap & typeVars, const ast::TypeSubstitution * subst ) {
     
    207202                if ( typeVars.contains( *inst ) ) return type;
    208203        } else if ( auto array = dynamic_cast< const ast::ArrayType * >( type ) ) {
    209                 return isPolyType( array->base, subst );
     204                return isPolyType( array->base, typeVars, subst );
    210205        } else if ( auto sue = dynamic_cast< const ast::StructInstType * >( type ) ) {
    211                 if ( hasPolyParams( sue->params, subst ) ) return type;
     206                if ( hasPolyParams( sue->params, typeVars, subst ) ) return type;
    212207        } else if ( auto sue = dynamic_cast< const ast::UnionInstType * >( type ) ) {
    213                 if ( hasPolyParams( sue->params, subst ) ) return type;
     208                if ( hasPolyParams( sue->params, typeVars, subst ) ) return type;
    214209        }
    215210        return nullptr;
  • tests/.expect/minmax.txt

    rc1e66d9 rdeda7e6  
    2020double                  4. 3.1  max 4.
    2121long double             4. 3.1  max 4.
     22
     233 arguments
     242 3 4   min 2   max 4
     254 2 3   min 2   max 4
     263 4 2   min 2   max 4
     274 arguments
     283 2 5 4 min 2   max 5
     295 3 4 2 min 2   max 5
  • tests/concurrency/actors/dynamic.cfa

    rc1e66d9 rdeda7e6  
    99struct derived_actor { inline actor; };
    1010struct derived_msg {
    11     inline message;
    12     int cnt;
     11        inline message;
     12        int cnt;
    1313};
    1414
    1515void ?{}( derived_msg & this, int cnt ) {
    16     ((message &) this){ Delete };
    17     this.cnt = cnt;
     16        set_allocation( this, Delete );
     17        this.cnt = cnt;
    1818}
    1919void ?{}( derived_msg & this ) { ((derived_msg &)this){ 0 }; }
    2020
    2121allocation receive( derived_actor & receiver, derived_msg & msg ) {
    22     if ( msg.cnt >= Times ) {
    23         sout | "Done";
    24         return Delete;
    25     }
    26     derived_msg * d_msg = alloc();
    27     (*d_msg){ msg.cnt + 1 };
    28     derived_actor * d_actor = alloc();
    29     (*d_actor){};
    30     *d_actor | *d_msg;
    31     return Delete;
     22        if ( msg.cnt >= Times ) {
     23                sout | "Done";
     24                return Delete;
     25        }
     26        derived_msg * d_msg = alloc();
     27        (*d_msg){ msg.cnt + 1 };
     28        derived_actor * d_actor = alloc();
     29        (*d_actor){};
     30        *d_actor | *d_msg;
     31        return Delete;
    3232}
    3333
    3434int main( int argc, char * argv[] ) {
    35     switch ( argc ) {
     35        switch ( argc ) {
    3636          case 2:
    3737                if ( strcmp( argv[1], "d" ) != 0 ) {                    // default ?
    38                         Times = atoi( argv[1] );
    39                         if ( Times < 1 ) goto Usage;
     38                        Times = ato( argv[1] );
     39                        if ( Times < 1 ) fallthru default;
    4040                } // if
    4141          case 1:                                                                                       // use defaults
    4242                break;
    4343          default:
    44           Usage:
    45                 sout | "Usage: " | argv[0] | " [ times (> 0) ]";
    46                 exit( EXIT_FAILURE );
     44                exit | "Usage: " | argv[0] | " [ times (> 0) ]";
    4745        } // switch
    4846
    49     printf("starting\n");
     47        sout | "starting";
    5048
    51     executor e{ 0, 1, 1, false };
    52     start_actor_system( e );
     49        executor e{ 0, 1, 1, false };
     50        start_actor_system( e );
    5351
    54     printf("started\n");
     52        sout | "started";
    5553
    56     derived_msg * d_msg = alloc();
    57     (*d_msg){};
    58     derived_actor * d_actor = alloc();
    59     (*d_actor){};
    60     *d_actor | *d_msg;
     54        derived_msg * d_msg = alloc();
     55        (*d_msg){};
     56        derived_actor * d_actor = alloc();
     57        (*d_actor){};
     58        *d_actor | *d_msg;
    6159
    62     printf("stopping\n");
     60        sout | "stopping";
    6361
    64     stop_actor_system();
     62        stop_actor_system();
    6563
    66     printf("stopped\n");
    67 
    68     return 0;
     64        sout | "stopped";
    6965}
  • tests/concurrency/actors/executor.cfa

    rc1e66d9 rdeda7e6  
    1010static int ids = 0;
    1111struct d_actor {
    12     inline actor;
    13     d_actor * gstart;
    14     int id, rounds, recs, sends;
     12        inline actor;
     13        d_actor * gstart;
     14        int id, rounds, recs, sends;
    1515};
    1616void ?{}( d_actor & this ) with(this) {
    17     id = ids++;
    18     gstart = (&this + (id / Set * Set - id)); // remember group-start array-element
    19     rounds = Set * Rounds;      // send at least one message to each group member
    20     recs = 0;
    21     sends = 0;
     17        id = ids++;
     18        gstart = (&this + (id / Set * Set - id)); // remember group-start array-element
     19        rounds = Set * Rounds;  // send at least one message to each group member
     20        recs = 0;
     21        sends = 0;
    2222}
    2323
     
    2525
    2626allocation receive( d_actor & this, d_msg & msg ) with( this ) {
    27     if ( recs == rounds ) return Finished;
    28     if ( recs % Batch == 0 ) {
    29         for ( i; Batch ) {
    30             gstart[sends % Set] | shared_msg;
    31             sends += 1;
    32         }
    33     }
    34     recs += 1;
    35     return Nodelete;
     27        if ( recs == rounds ) return Finished;
     28        if ( recs % Batch == 0 ) {
     29                for ( i; Batch ) {
     30                        gstart[sends % Set] | shared_msg;
     31                        sends += 1;
     32                }
     33        }
     34        recs += 1;
     35        return Nodelete;
    3636}
    3737
    3838int main( int argc, char * argv[] ) {
    39     switch ( argc ) {
     39        switch ( argc ) {
    4040          case 7:
    4141                if ( strcmp( argv[6], "d" ) != 0 ) {                    // default ?
    42                         BufSize = atoi( argv[6] );
    43                         if ( BufSize < 0 ) goto Usage;
     42                        BufSize = ato( argv[6] );
     43                        if ( BufSize < 0 ) fallthru default;
    4444                } // if
    4545          case 6:
    4646                if ( strcmp( argv[5], "d" ) != 0 ) {                    // default ?
    47                         Batch = atoi( argv[5] );
    48                         if ( Batch < 1 ) goto Usage;
     47                        Batch = ato( argv[5] );
     48                        if ( Batch < 1 ) fallthru default;
    4949                } // if
    5050          case 5:
    5151                if ( strcmp( argv[4], "d" ) != 0 ) {                    // default ?
    52                         Processors = atoi( argv[4] );
    53                         if ( Processors < 1 ) goto Usage;
     52                        Processors = ato( argv[4] );
     53                        if ( Processors < 1 ) fallthru default;
    5454                } // if
    5555          case 4:
    5656                if ( strcmp( argv[3], "d" ) != 0 ) {                    // default ?
    57                         Rounds = atoi( argv[3] );
    58                         if ( Rounds < 1 ) goto Usage;
     57                        Rounds = ato( argv[3] );
     58                        if ( Rounds < 1 ) fallthru default;
    5959                } // if
    6060          case 3:
    6161                if ( strcmp( argv[2], "d" ) != 0 ) {                    // default ?
    62                         Set = atoi( argv[2] );
    63                         if ( Set < 1 ) goto Usage;
     62                        Set = ato( argv[2] );
     63                        if ( Set < 1 ) fallthru default;
    6464                } // if
    6565          case 2:
    6666                if ( strcmp( argv[1], "d" ) != 0 ) {                    // default ?
    67                         Actors = atoi( argv[1] );
    68                         if ( Actors < 1 || Actors <= Set || Actors % Set != 0 ) goto Usage;
     67                        Actors = ato( argv[1] );
     68                        if ( Actors < 1 || Actors <= Set || Actors % Set != 0 ) fallthru default;
    6969                } // if
    7070          case 1:                                                                                       // use defaults
    7171                break;
    7272          default:
    73           Usage:
    74                 sout | "Usage: " | argv[0]
    75              | " [ actors (> 0 && > set && actors % set == 0 ) | 'd' (default " | Actors
     73                exit | "Usage: " | argv[0]
     74                         | " [ actors (> 0 && > set && actors % set == 0 ) | 'd' (default " | Actors
    7675                         | ") ] [ set (> 0) | 'd' (default " | Set
    7776                         | ") ] [ rounds (> 0) | 'd' (default " | Rounds
     
    8079                         | ") ] [ buffer size (>= 0) | 'd' (default " | BufSize
    8180                         | ") ]" ;
    82                 exit( EXIT_FAILURE );
    8381        } // switch
    8482
    85     executor e{ Processors, Processors, Processors == 1 ? 1 : Processors * 512, true };
     83        executor e{ Processors, Processors, Processors == 1 ? 1 : Processors * 512, true };
    8684
    87     printf("starting\n");
     85        sout | "starting";
    8886
    89     start_actor_system( e );
     87        start_actor_system( e );
    9088
    91     printf("started\n");
     89        sout | "started";
    9290
    93     d_actor actors[ Actors ];
     91        d_actor actors[ Actors ];
    9492
    9593        for ( i; Actors ) {
     
    9795        } // for
    9896
    99     printf("stopping\n");
     97        sout | "stopping";
    10098
    101     stop_actor_system();
     99        stop_actor_system();
    102100
    103     printf("stopped\n");
    104 
    105     return 0;
     101        sout | "stopped";
    106102}
  • tests/concurrency/actors/inherit.cfa

    rc1e66d9 rdeda7e6  
    1818
    1919allocation handle() {
    20     return Finished;
     20        return Finished;
    2121}
    2222
     
    2727
    2828int main() {
    29     sout | "Start";
    30     {
    31         start_actor_system();
    32         D_msg * dm = alloc();
    33         (*dm){};
    34         D_msg2 * dm2 = alloc();
    35         (*dm2){};
    36         Server2 * s = alloc();
    37         (*s){};
    38         Server2 * s2 = alloc();
    39         (*s2){};
    40         *s | *dm;
    41         *s2 | *dm2;
    42         stop_actor_system();
    43     }
    44     {
    45         start_actor_system();
    46         Server s[2];
    47         D_msg * dm = alloc();
    48         (*dm){};
    49         D_msg2 * dm2 = alloc();
    50         (*dm2){};
    51         s[0] | *dm;
    52         s[1] | *dm2;
    53         stop_actor_system();
    54     }
    55     sout | "Finished";
     29        sout | "Start";
     30        {
     31                start_actor_system();
     32                D_msg * dm = alloc();
     33                (*dm){};
     34                D_msg2 * dm2 = alloc();
     35                (*dm2){};
     36                Server2 * s = alloc();
     37                (*s){};
     38                Server2 * s2 = alloc();
     39                (*s2){};
     40                *s | *dm;
     41                *s2 | *dm2;
     42                stop_actor_system();
     43        }
     44        {
     45                start_actor_system();
     46                Server s[2];
     47                D_msg * dm = alloc();
     48                (*dm){};
     49                D_msg2 * dm2 = alloc();
     50                (*dm2){};
     51                s[0] | *dm;
     52                s[1] | *dm2;
     53                stop_actor_system();
     54        }
     55        sout | "Finished";
    5656}
  • tests/concurrency/actors/inline.cfa

    rc1e66d9 rdeda7e6  
    33
    44struct d_actor {
    5     inline actor;
     5        inline actor;
    66};
    77struct msg_wrapper {
    8     int b;
    9     inline message;
     8        int b;
     9        inline message;
    1010};
    1111void ^?{}( msg_wrapper & this ) { sout | "msg_wrapper dtor"; }
    1212
    1313struct d_msg {
    14     int m;
    15     inline msg_wrapper;
     14        int m;
     15        inline msg_wrapper;
    1616};
    1717void ?{}( d_msg & this, int m, int b ) { this.m = m; this.b = b; set_allocation( this, Delete ); }
     
    1919
    2020allocation receive( d_actor &, d_msg & msg ) {
    21     sout | msg.m;
    22     sout | msg.b;
    23     return Finished;
     21        sout | msg.m;
     22        sout | msg.b;
     23        return Finished;
    2424}
    2525
    2626struct d_msg2 {
    27     int m;
    28     inline msg_wrapper;
     27        int m;
     28        inline msg_wrapper;
    2929};
    3030void ^?{}( d_msg2 & this ) { sout | "d_msg2 dtor";}
    3131
    3232allocation receive( d_actor &, d_msg2 & msg ) {
    33     sout | msg.m;
    34     return Finished;
     33        sout | msg.m;
     34        return Finished;
    3535}
    3636
    3737int main() {
    38     processor p;
    39     {
    40         start_actor_system();                                // sets up executor
    41         d_actor da;
    42         d_msg * dm = alloc();
    43         (*dm){ 42, 2423 };
    44         da | *dm;
    45         stop_actor_system();                                // waits until actors finish
    46     }
    47     {
    48         start_actor_system();                                // sets up executor
    49         d_actor da;
    50         d_msg2 dm{ 29079 };
    51         set_allocation( dm, Nodelete );
    52         msg_wrapper * mw = &dm;
    53         message * mg = &dm;
    54         virtual_dtor * v = &dm;
    55         da | dm;
    56         stop_actor_system();                                // waits until actors finish
    57     }
     38        processor p;
     39        {
     40                start_actor_system();                                                           // sets up executor
     41                d_actor da;
     42                d_msg * dm = alloc();
     43                (*dm){ 42, 2423 };
     44                da | *dm;
     45                stop_actor_system();                                                            // waits until actors finish
     46        }
     47        {
     48                start_actor_system();                                                           // sets up executor
     49                d_actor da;
     50                d_msg2 dm{ 29079 };
     51                set_allocation( dm, Nodelete );
     52                msg_wrapper * mw = &dm;
     53                message * mg = &dm;
     54                virtual_dtor * v = &dm;
     55                da | dm;
     56                stop_actor_system();                                                            // waits until actors finish
     57        }
    5858}
  • tests/concurrency/actors/pingpong.cfa

    rc1e66d9 rdeda7e6  
    1010
    1111struct p_msg {
    12     inline message;
    13     size_t count;
     12        inline message;
     13        size_t count;
    1414};
    15 static inline void ?{}( p_msg & this ) { ((message &)this){}; this.count = 0; }
     15//static inline void ?{}( p_msg & this ) { ((message &)this){}; this.count = 0; }
     16static inline void ?{}( p_msg & this ) { this.count = 0; }
    1617
    1718ping * pi;
     
    2021
    2122allocation receive( ping & receiver, p_msg & msg ) {
    22     msg.count++;
    23     if ( msg.count > times ) return Finished;
     23        msg.count++;
     24        if ( msg.count > times ) return Finished;
    2425
    25     allocation retval = Nodelete;
    26     if ( msg.count == times ) retval = Finished;
    27     *po | msg;
    28     return retval;
     26        allocation retval = Nodelete;
     27        if ( msg.count == times ) retval = Finished;
     28        *po | msg;
     29        return retval;
    2930}
    3031
    3132allocation receive( pong & receiver, p_msg & msg ) {
    32     msg.count++;
    33     if ( msg.count > times ) return Finished;
    34    
    35     allocation retval = Nodelete;
    36     if ( msg.count == times ) retval = Finished;
    37     *pi | msg;
    38     return retval;
     33        msg.count++;
     34        if ( msg.count > times ) return Finished;
     35       
     36        allocation retval = Nodelete;
     37        if ( msg.count == times ) retval = Finished;
     38        *pi | msg;
     39        return retval;
    3940}
    4041
     
    4243
    4344int main( int argc, char * argv[] ) {
    44     printf("start\n");
     45        sout | "start";
    4546
    46     processor p[Processors - 1];
     47        processor p[Processors - 1];
    4748
    48     start_actor_system( Processors ); // test passing number of processors
     49        start_actor_system( Processors ); // test passing number of processors
     50        ping pi_actor;
     51        pong po_actor;
     52        po = &po_actor;
     53        pi = &pi_actor;
     54        p_msg m;
     55        pi_actor | m;
     56        stop_actor_system();
    4957
    50     ping pi_actor;
    51     pong po_actor;
    52     po = &po_actor;
    53     pi = &pi_actor;
    54     p_msg m;
    55     pi_actor | m;
    56     stop_actor_system();
    57 
    58     printf("end\n");
    59     return 0;
     58        sout | "end";
    6059}
  • tests/concurrency/actors/poison.cfa

    rc1e66d9 rdeda7e6  
    1111
    1212int main() {
    13     sout | "Start";
     13        sout | "Start";
    1414
    15     sout | "Finished";
    16     {
    17         start_actor_system();
    18         Server s[10];
    19         for ( i; 10 ) {
    20             s[i] | finished_msg;
    21         }
    22         stop_actor_system();
    23     }
     15        sout | "Finished";
     16        {
     17                start_actor_system();
     18                Server s[10];
     19                for ( i; 10 ) {
     20                        s[i] | finished_msg;
     21                }
     22                stop_actor_system();
     23        }
    2424
    25     sout | "Delete";
    26     {
    27         start_actor_system();
    28         for ( i; 10 ) {
    29             Server * s = alloc();
    30             (*s){};
    31             (*s) | delete_msg;
    32         }
    33         stop_actor_system();
    34     }
     25        sout | "Delete";
     26        {
     27                start_actor_system();
     28                for ( i; 10 ) {
     29                        Server * s = alloc();
     30                        (*s){};
     31                        (*s) | delete_msg;
     32                }
     33                stop_actor_system();
     34        }
    3535
    36     sout | "Destroy";
    37     {
    38         start_actor_system();
    39         Server s[10];
    40         for ( i; 10 )
    41             s[i] | destroy_msg;
    42         stop_actor_system();
    43         for ( i; 10 )
    44             if (s[i].val != 777)
    45                 sout | "Error: dtor not called correctly.";
    46     }
     36        sout | "Destroy";
     37        {
     38                start_actor_system();
     39                Server s[10];
     40                for ( i; 10 )
     41                        s[i] | destroy_msg;
     42                stop_actor_system();
     43                for ( i; 10 )
     44                        if (s[i].val != 777)
     45                                sout | "Error: dtor not called correctly.";
     46        }
    4747
    48     sout | "Done";
    49     return 0;
     48        sout | "Done";
    5049}
  • tests/concurrency/actors/static.cfa

    rc1e66d9 rdeda7e6  
    99struct derived_actor { inline actor; };
    1010struct derived_msg {
    11     inline message;
    12     int cnt;
     11        inline message;
     12        int cnt;
    1313};
    1414
    1515void ?{}( derived_msg & this, int cnt ) {
    16     ((message &) this){ Nodelete };
    17     this.cnt = cnt;
     16        set_allocation( this, Nodelete );
     17        this.cnt = cnt;
    1818}
    1919void ?{}( derived_msg & this ) { ((derived_msg &)this){ 0 }; }
    2020
    2121allocation receive( derived_actor & receiver, derived_msg & msg ) {
    22     if ( msg.cnt >= Times ) {
    23         sout | "Done";
    24         return Finished;
    25     }
    26     msg.cnt++;
    27     receiver | msg;
    28     return Nodelete;
     22        if ( msg.cnt >= Times ) {
     23                sout | "Done";
     24                return Finished;
     25        }
     26        msg.cnt++;
     27        receiver | msg;
     28        return Nodelete;
    2929}
    3030
    3131int main( int argc, char * argv[] ) {
    32     switch ( argc ) {
     32        switch ( argc ) {
    3333          case 2:
    3434                if ( strcmp( argv[1], "d" ) != 0 ) {                    // default ?
    35                         Times = atoi( argv[1] );
    36                         if ( Times < 1 ) goto Usage;
     35                        Times = ato( argv[1] );
     36                        if ( Times < 1 ) fallthru default;
    3737                } // if
    3838          case 1:                                                                                       // use defaults
    3939                break;
    4040          default:
    41           Usage:
    42                 sout | "Usage: " | argv[0] | " [ times (> 0) ]";
    43                 exit( EXIT_FAILURE );
     41                exit | "Usage: " | argv[0] | " [ times (> 0) ]";
    4442        } // switch
    4543
    46     printf("starting\n");
     44        sout | "starting";
    4745
    48     executor e{ 0, 1, 1, false };
    49     start_actor_system( e );
     46        executor e{ 0, 1, 1, false };
     47        start_actor_system( e );
    5048
    51     printf("started\n");
     49        sout | "started";
    5250
    53     derived_msg msg;
     51        derived_msg msg;
    5452
    55     derived_actor actor;
     53        derived_actor actor;
    5654
    57     actor | msg;
     55        actor | msg;
    5856
    59     printf("stopping\n");
     57        sout | "stopping";
    6058
    61     stop_actor_system();
     59        stop_actor_system();
    6260
    63     printf("stopped\n");
    64 
    65     return 0;
     61        sout | "stopped";
    6662}
  • tests/concurrency/actors/types.cfa

    rc1e66d9 rdeda7e6  
    99
    1010struct derived_actor {
    11     inline actor;
    12     int counter;
     11        inline actor;
     12        int counter;
    1313};
    1414static inline void ?{}( derived_actor & this ) { ((actor &)this){}; this.counter = 0; }
    1515
    1616struct d_msg {
    17     inline message;
    18     int num;
     17        inline message;
     18        int num;
    1919};
    2020
    2121// this isn't a valid receive routine since int is not a message type
    2222allocation receive( derived_actor & receiver, int i ) with( receiver ) {
    23     mutex(sout) sout | i;
    24     counter++;
    25     if ( counter == 2 ) return Finished;
    26     return Nodelete;
     23        mutex(sout) sout | i;
     24        counter++;
     25        if ( counter == 2 ) return Finished;
     26        return Nodelete;
    2727}
    2828
    2929allocation receive( derived_actor & receiver, d_msg & msg ) {
    30     return receive( receiver, msg.num );
     30        return receive( receiver, msg.num );
    3131}
    3232
    3333struct derived_actor2 {
    34     struct nested { int i; }; // testing nested before inline
    35     inline actor;
     34        struct nested { int i; }; // testing nested before inline
     35        inline actor;
    3636};
    3737
    3838allocation receive( derived_actor2 & receiver, d_msg & msg ) {
    39     mutex(sout) sout | msg.num;
    40     return Finished;
     39        mutex(sout) sout | msg.num;
     40        return Finished;
    4141}
    4242
     
    4444struct derived_actor4 { inline derived_actor3; };
    4545struct d_msg2 {
    46     inline message;
    47     int num;
     46        inline message;
     47        int num;
    4848};
    4949
    5050allocation receive( derived_actor3 & receiver, d_msg & msg ) {
    51     mutex(sout) sout | msg.num;
    52     if ( msg.num == -1 ) return Nodelete;
    53     return Finished;
     51        mutex(sout) sout | msg.num;
     52        if ( msg.num == -1 ) return Nodelete;
     53        return Finished;
    5454}
    5555
    5656allocation receive( derived_actor3 & receiver, d_msg2 & msg ) {
    57     mutex(sout) sout | msg.num;
    58     return Finished;
     57        mutex(sout) sout | msg.num;
     58        return Finished;
    5959}
    6060
     
    6262
    6363int main( int argc, char * argv[] ) {
    64     printf("start\n");
     64        sout | "start";
    6565
    66     processor p[Processors - 1];
     66        processor p[Processors - 1];
    6767
    68     printf("basic test\n");
    69     start_actor_system( Processors ); // test passing number of processors
    70     derived_actor a;
    71     d_msg b, c;
    72     b.num = 1;
    73     c.num = 2;
    74     a | b | c;
    75     stop_actor_system();
     68        sout | "basic test";
     69        start_actor_system( Processors ); // test passing number of processors
     70        derived_actor a;
     71        d_msg b, c;
     72        b.num = 1;
     73        c.num = 2;
     74        a | b | c;
     75        stop_actor_system();
    7676
    77     printf("same message and different actors test\n");
    78     start_actor_system(); // let system detect # of processors
    79     derived_actor2 d_ac2_0, d_ac2_1;
    80     d_msg d_ac2_msg;
    81     d_ac2_msg.num = 3;
    82     d_ac2_0 | d_ac2_msg;
    83     d_ac2_1 | d_ac2_msg;
    84     stop_actor_system();
     77        sout | "same message and different actors test";
     78        start_actor_system(); // let system detect # of processors
     79        derived_actor2 d_ac2_0, d_ac2_1;
     80        d_msg d_ac2_msg;
     81        d_ac2_msg.num = 3;
     82        d_ac2_0 | d_ac2_msg;
     83        d_ac2_1 | d_ac2_msg;
     84        stop_actor_system();
    8585
    86    
    87     {
    88         printf("same message and different actor types test\n");
    89         executor e{ 0, Processors, Processors == 1 ? 1 : Processors * 4, false };
    90         start_actor_system( e ); // pass an explicit executor
    91         derived_actor2 d_ac2_2;
    92         derived_actor3 d_ac3_0;
    93         d_msg d_ac23_msg;
    94         d_ac23_msg.num = 4;
    95         d_ac3_0 | d_ac23_msg;
    96         d_ac2_2 | d_ac23_msg;
    97         stop_actor_system();
    98     } // RAII to clean up executor
     86       
     87        {
     88                sout | "same message and different actor types test";
     89                executor e{ 0, Processors, Processors == 1 ? 1 : Processors * 4, false };
     90                start_actor_system( e ); // pass an explicit executor
     91                derived_actor2 d_ac2_2;
     92                derived_actor3 d_ac3_0;
     93                d_msg d_ac23_msg;
     94                d_ac23_msg.num = 4;
     95                d_ac3_0 | d_ac23_msg;
     96                d_ac2_2 | d_ac23_msg;
     97                stop_actor_system();
     98        } // RAII to clean up executor
    9999
    100     {
    101         printf("different message types, one actor test\n");
    102         executor e{ 1, Processors, Processors == 1 ? 1 : Processors * 4, true };
    103         start_actor_system( Processors );
    104         derived_actor3 a3;
    105         d_msg b1;
    106         d_msg2 c2;
    107         b1.num = -1;
    108         c2.num = 5;
    109         a3 | b1 | c2;
    110         stop_actor_system();
    111     } // RAII to clean up executor
     100        {
     101                sout | "different message types, one actor test";
     102                executor e{ 1, Processors, Processors == 1 ? 1 : Processors * 4, true };
     103                start_actor_system( Processors );
     104                derived_actor3 a3;
     105                d_msg b1;
     106                d_msg2 c2;
     107                b1.num = -1;
     108                c2.num = 5;
     109                a3 | b1 | c2;
     110                stop_actor_system();
     111        } // RAII to clean up executor
    112112
    113     {
    114         printf("nested inheritance actor test\n");
    115         executor e{ 1, Processors, Processors == 1 ? 1 : Processors * 4, true };
    116         start_actor_system( Processors );
    117         derived_actor4 a4;
    118         d_msg b1;
    119         d_msg2 c2;
    120         b1.num = -1;
    121         c2.num = 5;
    122         a4 | b1 | c2;
    123         stop_actor_system();
    124     } // RAII to clean up executor
     113        {
     114                sout | "nested inheritance actor test";
     115                executor e{ 1, Processors, Processors == 1 ? 1 : Processors * 4, true };
     116                start_actor_system( Processors );
     117                derived_actor4 a4;
     118                d_msg b1;
     119                d_msg2 c2;
     120                b1.num = -1;
     121                c2.num = 5;
     122                a4 | b1 | c2;
     123                stop_actor_system();
     124        } // RAII to clean up executor
    125125
    126     printf("end\n");
    127     return 0;
     126        sout | "end";
    128127}
  • tests/concurrency/channels/barrier.cfa

    rc1e66d9 rdeda7e6  
    88
    99size_t total_operations = 0;
    10 int Processors = 1, Tasks = 5, BarrierSize = 2;
     10ssize_t Processors = 1, Tasks = 5, BarrierSize = 2;             // must be signed
    1111
    1212typedef channel( int ) Channel;
     
    6565          case 3:
    6666                if ( strcmp( argv[2], "d" ) != 0 ) {                    // default ?
    67                         BarrierSize = atoi( argv[2] );
    68             if ( Processors < 1 ) goto Usage;
     67                        BarrierSize = ato( argv[2] );
     68            if ( Processors < 1 ) fallthru default;
    6969                } // if
    7070          case 2:
    7171                if ( strcmp( argv[1], "d" ) != 0 ) {                    // default ?
    72                         Processors = atoi( argv[1] );
    73                         if ( Processors < 1 ) goto Usage;
     72                        Processors = ato( argv[1] );
     73                        if ( Processors < 1 ) fallthru default;
    7474                } // if
    7575          case 1:                                                                                       // use defaults
    7676                break;
    7777          default:
    78           Usage:
    79                 sout | "Usage: " | argv[0]
     78                exit | "Usage: " | argv[0]
    8079             | " [ processors (> 0) | 'd' (default " | Processors
    8180                         | ") ] [ BarrierSize (> 0) | 'd' (default " | BarrierSize
    8281                         | ") ]" ;
    83                 exit( EXIT_FAILURE );
    8482        } // switch
    8583    if ( Tasks < BarrierSize )
  • tests/concurrency/channels/big_elems.cfa

    rc1e66d9 rdeda7e6  
    22#include "parallel_harness.hfa"
    33
    4 size_t Processors = 10, Channels = 10, Producers = 40, Consumers = 40, ChannelSize = 128;
     4ssize_t Processors = 10, Channels = 10, Producers = 40, Consumers = 40, ChannelSize = 128;
    55
    66int main() {
  • tests/concurrency/channels/churn.cfa

    rc1e66d9 rdeda7e6  
    77#include <time.hfa>
    88
    9 size_t Processors = 1, Channels = 4, Producers = 2, Consumers = 2, ChannelSize = 128;
     9ssize_t Processors = 1, Channels = 4, Producers = 2, Consumers = 2, ChannelSize = 128;
    1010
    1111owner_lock o;
     
    9090      case 4:
    9191                if ( strcmp( argv[3], "d" ) != 0 ) {                    // default ?
    92                         if ( atoi( argv[3] ) < 1 ) goto Usage;
    93                         ChannelSize = atoi( argv[3] );
     92                        ChannelSize = ato( argv[3] );
     93                        if ( ChannelSize < 1 ) fallthru default;
    9494                } // if
    9595      case 3:
    9696                if ( strcmp( argv[2], "d" ) != 0 ) {                    // default ?
    97                         if ( atoi( argv[2] ) < 1 ) goto Usage;
    98                         Channels = atoi( argv[2] );
     97                        Channels = ato( argv[2] );
     98                        if ( Channels < 1 ) fallthru default;
    9999                } // if
    100100      case 2:
    101101                if ( strcmp( argv[1], "d" ) != 0 ) {                    // default ?
    102                         if ( atoi( argv[1] ) < 1 ) goto Usage;
    103                         Processors = atoi( argv[1] );
     102                        Processors = ato( argv[1] );
     103                        if ( Processors < 1 ) fallthru default;
    104104                } // if
    105105          case 1:                                                                                       // use defaults
    106106                break;
    107107          default:
    108           Usage:
    109                 sout | "Usage: " | argv[0]
     108                exit | "Usage: " | argv[0]
    110109             | " [ processors > 0 | d ]"
    111110             | " [ producers > 0 | d ]"
    112111             | " [ consumers > 0 | d ]"
    113112             | " [ channels > 0 | d ]";
    114                 exit( EXIT_FAILURE );
    115113    }
    116114    processor p[Processors - 1];
  • tests/concurrency/channels/contend.cfa

    rc1e66d9 rdeda7e6  
    127127          case 3:
    128128                if ( strcmp( argv[2], "d" ) != 0 ) {                    // default ?
    129                         ChannelSize = atoi( argv[2] );
     129                        ChannelSize = ato( argv[2] );
     130                        if ( ChannelSize < 1 ) fallthru default;
    130131                } // if
    131132          case 2:
    132133                if ( strcmp( argv[1], "d" ) != 0 ) {                    // default ?
    133                         Processors = atoi( argv[1] );
    134                         if ( Processors < 1 ) goto Usage;
     134                        Processors = ato( argv[1] );
     135                        if ( Processors < 1 ) fallthru default;
    135136                } // if
    136137          case 1:                                                                                       // use defaults
    137138                break;
    138139          default:
    139           Usage:
    140                 sout | "Usage: " | argv[0]
     140                exit | "Usage: " | argv[0]
    141141             | " [ processors (> 0) | 'd' (default " | Processors
    142142                         | ") ] [ channel size (>= 0) | 'd' (default " | ChannelSize
    143143                         | ") ]" ;
    144                 exit( EXIT_FAILURE );
    145144        } // switch
     145
    146146    test(Processors, Channels, Producers, Consumers, ChannelSize);
    147147}
  • tests/concurrency/channels/daisy_chain.cfa

    rc1e66d9 rdeda7e6  
    88
    99size_t total_operations = 0;
    10 size_t Processors = 1, Tasks = 4;
     10ssize_t Processors = 1, Tasks = 4;                                              // must be signed
    1111
    1212owner_lock o;
     
    3737          case 3:
    3838                if ( strcmp( argv[2], "d" ) != 0 ) {                    // default ?
    39                         Tasks = atoi( argv[2] );
    40             if ( Tasks < 1 ) goto Usage;
     39                        Tasks = ato( argv[2] );
     40            if ( Tasks < 1 ) fallthru default;
    4141                } // if
    4242          case 2:
    4343                if ( strcmp( argv[1], "d" ) != 0 ) {                    // default ?
    44                         Processors = atoi( argv[1] );
    45                         if ( Processors < 1 ) goto Usage;
     44                        Processors = ato( argv[1] );
     45                        if ( Processors < 1 ) fallthru default;
    4646                } // if
    4747          case 1:                                                                                       // use defaults
    4848                break;
    4949          default:
    50           Usage:
    51                 sout | "Usage: " | argv[0]
     50                exit | "Usage: " | argv[0]
    5251             | " [ processors (> 0) | 'd' (default " | Processors
    5352                         | ") ] [ channel size (>= 0) | 'd' (default " | Tasks
    5453                         | ") ]" ;
    55                 exit( EXIT_FAILURE );
    5654        } // switch
    5755    processor proc[Processors - 1];
     
    7169    // sout | total_operations;
    7270    sout | "done";
    73 
    74     return 0;
    7571}
  • tests/concurrency/channels/hot_potato.cfa

    rc1e66d9 rdeda7e6  
    88
    99size_t total_operations = 0;
    10 size_t Processors = 1, Tasks = 4;
     10ssize_t Processors = 1, Tasks = 4;                                              // must be signed
    1111
    1212owner_lock o;
     
    3838}
    3939
    40 
    4140int main( int argc, char * argv[] ) {
    4241    switch ( argc ) {
    4342          case 3:
    4443                if ( strcmp( argv[2], "d" ) != 0 ) {                    // default ?
    45                         Tasks = atoi( argv[2] );
    46             if ( Tasks < 1 ) goto Usage;
     44                        Tasks = ato( argv[2] );
     45            if ( Tasks < 1 ) fallthru default;
    4746                } // if
    4847          case 2:
    4948                if ( strcmp( argv[1], "d" ) != 0 ) {                    // default ?
    50                         Processors = atoi( argv[1] );
    51                         if ( Processors < 1 ) goto Usage;
     49                        Processors = ato( argv[1] );
     50                        if ( Processors < 1 ) fallthru default;
    5251                } // if
    5352          case 1:                                                                                       // use defaults
    5453                break;
    5554          default:
    56           Usage:
    57                 sout | "Usage: " | argv[0]
     55                exit | "Usage: " | argv[0]
    5856             | " [ processors (> 0) | 'd' (default " | Processors
    5957                         | ") ] [ channel size (>= 0) | 'd' (default " | Tasks
    6058                         | ") ]" ;
    61                 exit( EXIT_FAILURE );
    6259        } // switch
     60
    6361    processor proc[Processors - 1];
    6462
  • tests/concurrency/channels/pub_sub.cfa

    rc1e66d9 rdeda7e6  
    8787          case 3:
    8888                if ( strcmp( argv[2], "d" ) != 0 ) {                    // default ?
    89                         Tasks = atoi( argv[2] );
    90             if ( Tasks < 1 ) goto Usage;
     89                        Tasks = ato( argv[2] );
     90            if ( Tasks < 1 ) fallthru default;
    9191                } // if
    9292          case 2:
    9393                if ( strcmp( argv[1], "d" ) != 0 ) {                    // default ?
    94                         Processors = atoi( argv[1] );
    95                         if ( Processors < 1 ) goto Usage;
     94                        Processors = ato( argv[1] );
     95                        if ( Processors < 1 ) fallthru default;
    9696                } // if
    9797          case 1:                                                                                       // use defaults
    9898                break;
    9999          default:
    100           Usage:
    101                 sout | "Usage: " | argv[0]
     100                exit | "Usage: " | argv[0]
    102101             | " [ processors (> 0) | 'd' (default " | Processors
    103102                         | ") ] [ Tasks (> 0) | 'd' (default " | Tasks
    104103                         | ") ]" ;
    105                 exit( EXIT_FAILURE );
    106104        } // switch
    107105    BarrierSize = Tasks;
  • tests/concurrency/examples/matrixSum.cfa

    rc1e66d9 rdeda7e6  
    1010// Created On       : Mon Oct  9 08:29:28 2017
    1111// Last Modified By : Peter A. Buhr
    12 // Last Modified On : Wed Feb 20 08:37:53 2019
    13 // Update Count     : 16
     12// Last Modified On : Fri Sep  8 19:05:34 2023
     13// Update Count     : 19
    1414//
    1515
    1616#include <fstream.hfa>
    17 #include <kernel.hfa>
    1817#include <thread.hfa>
    1918
     
    3534
    3635int main() {
    37         /* const */ int rows = 10, cols = 1000;
     36        const int rows = 10, cols = 1000;
    3837        int matrix[rows][cols], subtotals[rows], total = 0;
    3938        processor p;                                                                            // add kernel thread
    4039
    41         for ( r; rows ) {
     40        for ( r; rows ) {                                                                       // initialize
    4241                for ( c; cols ) {
    4342                        matrix[r][c] = 1;
    4443                } // for
    4544        } // for
     45
    4646        Adder * adders[rows];
    4747        for ( r; rows ) {                                                                       // start threads to sum rows
    4848                adders[r] = &(*malloc()){ matrix[r], cols, subtotals[r] };
    49 //              adders[r] = new( matrix[r], cols, &subtotals[r] );
     49                // adders[r] = new( matrix[r], cols, subtotals[r] );
    5050        } // for
     51
    5152        for ( r; rows ) {                                                                       // wait for threads to finish
    5253                delete( adders[r] );
     
    5758
    5859// Local Variables: //
    59 // tab-width: 4 //
    6060// compile-command: "cfa matrixSum.cfa" //
    6161// End: //
  • tests/concurrency/unified_locking/locks.cfa

    rc1e66d9 rdeda7e6  
    11#include <stdio.h>
    2 #include "locks.hfa"
     2#include <locks.hfa>
    33#include <stdlib.hfa>
    44#include <thread.hfa>
  • tests/concurrency/unified_locking/pthread_locks.cfa

    rc1e66d9 rdeda7e6  
    11#include <stdio.h>
    2 #include "locks.hfa"
     2#include <locks.hfa>
    33#include <stdlib.hfa>
    44#include <thread.hfa>
  • tests/concurrency/unified_locking/test_debug.cfa

    rc1e66d9 rdeda7e6  
    1 #include "locks.hfa"
     1#include <locks.hfa>
    22
    33fast_block_lock f;
  • tests/concurrency/unified_locking/thread_test.cfa

    rc1e66d9 rdeda7e6  
    11#include <stdio.h>
    2 #include "locks.hfa"
     2#include <locks.hfa>
    33#include <stdlib.hfa>
    44#include <thread.hfa>
  • tests/concurrency/waituntil/locks.cfa

    rc1e66d9 rdeda7e6  
    7373    printf("done\n");
    7474}
     75
  • tests/io/.expect/manipulatorsInput.arm64.txt

    rc1e66d9 rdeda7e6  
     1pre1 "123456", canary ok
     2pre2a "1234567", exception occurred, canary ok
     3pre2b "89", canary ok
    141 yyyyyyyyyyyyyyyyyyyy
    252 abcxxx
  • tests/io/.expect/manipulatorsInput.x64.txt

    rc1e66d9 rdeda7e6  
     1pre1 "123456", canary ok
     2pre2a "1234567", exception occurred, canary ok
     3pre2b "89", canary ok
    141 yyyyyyyyyyyyyyyyyyyy
    252 abcxxx
  • tests/io/.expect/manipulatorsInput.x86.txt

    rc1e66d9 rdeda7e6  
     1pre1 "123456", canary ok
     2pre2a "1234567", exception occurred, canary ok
     3pre2b "89", canary ok
    141 yyyyyyyyyyyyyyyyyyyy
    252 abcxxx
  • tests/io/.in/manipulatorsInput.txt

    rc1e66d9 rdeda7e6  
     1123456
     2123456789
    13abc
    24abc
  • tests/io/manipulatorsInput.cfa

    rc1e66d9 rdeda7e6  
    1515
    1616int main() {
     17        {
     18                // Upfront checks to ensure buffer safety.  Once these pass, the simpler `wdi(sizeof(s),s)`
     19                // usage, as in the scanf alignment cases below, is justified.
     20                struct {
     21                        char buf[8];
     22                        char canary;
     23                } data;
     24                static_assert( sizeof(data.buf) == 8 );
     25                static_assert( &data.buf[8] == &data.canary );  // canary comes right after buf
     26
     27                void rep(const char* casename) {
     28                        data.canary = 42;
     29                        bool caught = false;
     30                        try {
     31                                sin | wdi( sizeof(data.buf), data.buf );
     32                        } catch (cstring_length*) {
     33                                caught = true;
     34                        }
     35                        printf( "%s \"%s\"", casename, data.buf );
     36                        if ( caught ) {
     37                                printf(", exception occurred");
     38                        }
     39                        if ( data.canary == 42 ) {
     40                                printf(", canary ok");
     41                        } else {
     42                                printf(", canary overwritten to %d", data.canary);
     43                        }
     44                        printf("\n");
     45                }
     46
     47                rep("pre1");
     48                rep("pre2a");
     49                rep("pre2b");
     50                scanf("\n");  // next test does not start with %s so does not tolerate leading whitespace
     51        }
    1752        {
    1853                char s[] = "yyyyyyyyyyyyyyyyyyyy";
  • tests/minmax.cfa

    rc1e66d9 rdeda7e6  
    4545        sout | "double\t\t\t"                           | 4.0 | 3.1 | "\tmax" | max( 4.0, 3.1 );
    4646        sout | "long double\t\t"                        | 4.0l | 3.1l | "\tmax" | max( 4.0l, 3.1l );
     47
     48        sout | nl;
     49
     50        sout | "3 arguments";
     51        sout | 2 | 3 | 4 | "\tmin" | min(2, 3, 4) | "\tmax" | max(2, 3, 4);
     52        sout | 4 | 2 | 3 | "\tmin" | min(4, 2, 3) | "\tmax" | max(4, 2, 3);
     53        sout | 3 | 4 | 2 | "\tmin" | min(3, 4, 2) | "\tmax" | max(3, 4, 2);
     54
     55        sout | "4 arguments";
     56        sout | 3 | 2 | 5 | 4 | "\tmin" | min(3, 2, 5, 4) | "\tmax" | max(3, 2, 5, 4);
     57        sout | 5 | 3 | 4 | 2 | "\tmin" | min(5, 3, 4, 2) | "\tmax" | max(5, 3, 4, 2);
    4758} // main
    4859