# Changeset 3fc0f2a

Timestamp:
Jun 19, 2019, 11:50:35 AM
Branches:
arm-eh, enum, forall-pointer-decay, jacob/cs343-translation, jenkins-sandbox, master, new-ast, new-ast-unique-expr, pthread-emulation, qualifiedEnum
Children:
17a0ede2
Parents:
c829320 (diff), 5aa4656 (diff)
Note: this is a merge changeset, the changes displayed below correspond to the merge itself.
Use the (diff) links above to see all the changes relative to each parent.
Message:

Merge branch 'master' of plg.uwaterloo.ca:software/cfa/cfa-cc

Files:
3 edited

For this reason, \CFA requires programmers to identify the kind of parameter with the @mutex@ keyword and uses no keyword to mean \lstinline[morekeywords=nomutex]@nomutex@.
\newpage
The next semantic decision is establishing which parameter \emph{types} may be qualified with @mutex@.
The following has monitor parameter types that are composed of multiple objects.
Users can still force the acquiring order by using @mutex@/\lstinline[morekeywords=nomutex]@nomutex@.
\newpage
\begin{cfa}
void foo( M & mutex m1, M & mutex m2 );		$\C{// acquire m1 and m2}$
\end{cfa}
The bulk-acquire semantics allow @bar@ or @baz@ to acquire a monitor lock and reacquire it in @foo@.
In the calls to @bar@ and @baz@, the monitors are acquired in opposite order, possibly resulting in deadlock.
However, this case is the simplest instance of the \emph{nested-monitor problem}~\cite{Lister77}, where monitors are acquired in sequence versus bulk.
Detecting the nested-monitor problem requires dynamic tracking of monitor calls, and dealing with it requires rollback semantics~\cite{Dice10}.
% It is only in this way that a waiting program has an absolute guarantee that it can acquire the resource just released by the signalling program without any danger that a third program will interpose a monitor entry and seize the resource instead.~\cite[p.~550]{Hoare74}
% \end{cquote}
Furthermore, \CFA concurrency has no spurious wakeup~\cite[\S~9]{Buhr05a}, which eliminates an implicit form of self barging.
Hence, a \CFA @wait@ statement is not enclosed in a @while@ loop retesting a blocking predicate, which can cause thread starvation due to barging.
Figure~\ref{f:MonitorScheduling} shows general internal/external scheduling (for the bounded-buffer example in Figure~\ref{f:InternalExternalScheduling}).
External calling threads block on the calling queue, if the monitor is occupied, otherwise they enter in FIFO order.
Internal threads block on condition queues via @wait@ and they reenter from the condition in FIFO order, or they block on urgent via @signal_block@ or @waitfor@ and reenter implicitly when the monitor becomes empty, \ie, the thread in the monitor exits or waits.
There are three signalling mechanisms to unblock waiting threads to enter the monitor.
Note, signalling cannot have the signaller and signalled thread in the monitor simultaneously because of the mutual exclusion, so only one can proceed.
For internal scheduling, threads are unblocked from condition queues using @signal@, where the signallee is moved to urgent and the signaller continues (solid line).
Multiple signals move multiple signallees to urgent, until the condition is empty.
Executing multiple @waitfor@s from different signalled functions causes the calling threads to move to urgent.
External scheduling requires urgent to be a stack, because the signaller expects to execute immediately after the specified monitor call has exited or waited.
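As a small illustration of the two condition-signalling forms named above, consider the following hedged sketch; the monitor, its @ready@ flag, and the member names are illustrative, not taken from the paper's figures:
\begin{cfa}
monitor M { condition c; int ready; };
void waiter( M & mutex m ) {
	if ( ! m.ready ) wait( m.c );		$\C{// block on condition c, implicitly releasing the monitor}$
	// use the state established by the signaller
}
void signaller1( M & mutex m ) {
	m.ready = 1;
	signal( m.c );						$\C{// signallee moves to urgent, signaller continues}$
}
void signaller2( M & mutex m ) {
	m.ready = 1;
	signal_block( m.c );				$\C{// signaller moves to urgent, signallee runs immediately}$
}
\end{cfa}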
Internal scheduling behaves the same for an urgent stack or queue, except for multiple signalling, where the threads unblock from urgent in reverse order from signalling.
If the restart order is important, multiple signalling by a signal thread can be transformed into daisy-chain signalling among threads, where each thread signals the next thread.
We tried both a stack for @waitfor@ and queue for signalling, but that resulted in complex semantics about which thread enters next.
Hence, \CFA uses a single urgent stack to correctly handle @waitfor@ and adequately support both forms of signalling.
\begin{figure}
\end{figure}
Figure~\ref{f:BBInt} shows a \CFA generic bounded-buffer with internal scheduling, where producers/consumers enter the monitor, detect the buffer is full/empty, and block on an appropriate condition variable, @full@/@empty@.
The @wait@ function atomically blocks the calling thread and implicitly releases the monitor lock(s) for all monitors in the function's parameter list.
The appropriate condition variable is signalled to unblock an opposite kind of thread after an element is inserted/removed from the buffer.
External scheduling is controlled by the @waitfor@ statement, which atomically blocks the calling thread, releases the monitor lock, and restricts the function calls that can next acquire mutual exclusion.
If the buffer is full, only calls to @remove@ can acquire the buffer, and if the buffer is empty, only calls to @insert@ can acquire the buffer.
Threads making calls to functions that are currently excluded block outside of (external to) the monitor on the calling queue, versus blocking on condition queues inside of (internal to) the monitor.
Figure~\ref{f:RWExt} shows a readers/writer lock written using external scheduling, where a waiting reader detects a writer using the resource and restricts further calls until the writer exits by calling @EndWrite@.
The writer does a similar action for each reader or writer using the resource.
Note, no new calls to @StartRead@/@StartWrite@ may occur when waiting for the call to @EndRead@/@EndWrite@.
External scheduling allows waiting for events from other threads while restricting unrelated events that would otherwise have to wait on conditions in the monitor.
The mechanism can be expressed in terms of control flow, \eg Ada @accept@ or \uC @_Accept@, or in terms of data, \eg Go @select@ on channels.
While both mechanisms have strengths and weaknesses, this project uses the control-flow mechanism to be consistent with other language features.
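A condensed sketch of the external-scheduling buffer described above; this is not the paper's Figure~\ref{f:BBExt}, and the buffer layout and @with@ clauses are illustrative assumptions:
\begin{cfa}
monitor Buffer { int elements[10], front, back, count; };
void insert( Buffer & mutex buf, int elem );
int remove( Buffer & mutex buf );
void insert( Buffer & mutex buf, int elem ) with( buf ) {
	if ( count == 10 ) waitfor( remove, buf );		$\C{// full => only a call to remove may enter next}$
	elements[back] = elem;  back = (back + 1) % 10;  count += 1;
}
int remove( Buffer & mutex buf ) with( buf ) {
	if ( count == 0 ) waitfor( insert, buf );		$\C{// empty => only a call to insert may enter next}$
	int elem = elements[front];  front = (front + 1) % 10;  count -= 1;
	return elem;
}
\end{cfa}
Excluded callers block on the calling queue outside the monitor, rather than on condition queues inside it, matching the description above.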
Furthermore, barging corrupts the dating service during an exchange because a barger may also match and change the phone numbers, invalidating the previous exchange phone number.
Putting loops around the @wait@s does not correct the problem; the simple solution must be restructured to account for barging.
\begin{figure}
the signaller enters the monitor and changes state, detects a waiting thread that can use the state, performs a non-blocking signal on the condition queue for the waiting thread, and exits the monitor to run concurrently.
The waiter unblocks next from the urgent queue, uses/takes the state, and exits the monitor.
Blocking signal is the reverse, where the waiter is providing the cooperation for the signalling thread;
the signaller enters the monitor, detects a waiting thread providing the necessary state, and performs a blocking signal to place itself on the urgent queue and unblock the waiter.
The waiter changes state and exits the monitor, and the signaller unblocks next from the urgent queue to use/take the state.
While \CC supports bulk locking, @wait@ only accepts a single lock for a condition variable, so bulk locking with condition variables is asymmetric.
Finally, a signaller,
\newpage
\begin{cfa}
void baz( M & mutex m1, M & mutex m2 ) { }
\end{cfa}
must have acquired at least the same locks as the waiting thread signalled from a condition queue to allow the locks to be passed, and hence, prevent barging.
Similarly, for @waitfor( rtn )@, the default semantics is to atomically block the acceptor and release all acquired mutex parameters, \ie @waitfor( rtn, m1, m2 )@.
The \emph{conditional-expression} of a @when@ may call a function, but the function must not block or context switch.
If there are multiple acceptable mutex calls, selection occurs top-to-bottom (prioritized) among the @waitfor@ clauses, whereas some programming languages with similar mechanisms accept nondeterministically for this case, \eg Go \lstinline[morekeywords=select]@select@.
If some accept guards are true and there are no outstanding calls to these members, the acceptor is blocked until a call to one of these members is made.
If there is a @timeout@ clause, it provides an upper bound on waiting.
If all the accept guards are false, the statement does nothing, unless there is a terminating @else@ clause with a true guard, which is executed instead.
However, the basic @waitfor@ semantics do not support this functionality, since using an object after its destructor is called is undefined.
Therefore, to make this useful capability work, the semantics for accepting the destructor is the same as @signal@, \ie the call to the destructor is placed on the urgent queue and the acceptor continues execution, which throws an exception to the acceptor, and then the caller is unblocked from the urgent queue to deallocate the object.
Accepting the destructor is the idiomatic way to terminate a thread in \CFA.
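To make the guard and @else@ semantics above concrete, a minimal sketch under the assumption of a simple bounded-buffer monitor; the member names, guard expressions, and the exact clause syntax are reconstructed for illustration and omit the @timeout@ clause:
\begin{cfa}
monitor Buffer { int count; };
void insert( Buffer & mutex buf, int elem );
int remove( Buffer & mutex buf );
void poll( Buffer & mutex buf ) {
	when ( buf.count < 10 ) waitfor( insert, buf )		$\C{// accept insert only if not full}$
	or when ( buf.count > 0 ) waitfor( remove, buf )	$\C{// accept remove only if not empty}$
	or else {};											$\C{// executed if all guards are false, so never blocks}$
}
\end{cfa}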
In the object-oriented scenario, the type and all its operators are always present at compilation (even separate compilation), so it is possible to number the operations in a bit mask and use an $O(1)$ compare with a similar bit mask created for the operations specified in a @waitfor@.
However, in \CFA, monitor functions can be statically added/removed in translation units, making a fast subset check difficult.
\begin{cfa}
monitor M { ... };		// common type, included in .h file
void g( M & mutex m ) { waitfor( f, m ); }
translation unit 2
void f( M & mutex m );		$\C{// replacing f and g for type M in this translation unit}$
void g( M & mutex m );
void h( M & mutex m ) { waitfor( f, m ) or waitfor( g, m ); }		$\C{// extending type M in this translation unit}$
\end{cfa}
The @waitfor@ statements in each translation unit cannot form a unique bit-mask because the monitor type does not carry that information.
Hence, function pointers are used to identify the functions listed in the @waitfor@ statement, stored in a variable-sized array.
Then, the same implementation approach used for the urgent stack is used for the calling queue.
Each caller has a list of monitors acquired, and the @waitfor@ statement performs a (usually short) linear search matching functions in the @waitfor@ list with called functions, and then verifying the associated mutex locks can be transferred.
External scheduling, like internal scheduling, becomes significantly more complex for multi-monitor semantics.
Even in the simplest case, new semantics need to be established.
\begin{cfa}
monitor M { ... };
For completeness and efficiency, \CFA provides a standard set of low-level locks: recursive mutex, condition, semaphore, barrier, \etc, and atomic instructions: @fetchAssign@, @fetchAdd@, @testSet@, @compareSet@, \etc.
Some of these low-level mechanisms are used in the \CFA runtime, but we strongly advocate using high-level mechanisms whenever possible.
\label{s:RuntimeStructureCluster}
A \newterm{cluster} is a collection of threads and virtual processors (abstract kernel-thread) that execute the (user) threads from its own ready queue (like an OS executing kernel threads).
The purpose of a cluster is to control the amount of parallelism that is possible among threads, plus scheduling and other execution defaults.
The default cluster-scheduler is single-queue multi-server, which provides automatic load-balancing of threads on processors.
However, the scheduler is pluggable, supporting alternative schedulers, such as multi-queue multi-server, with work-stealing/sharing across the virtual processors.
If several clusters exist, both threads and virtual processors can be explicitly migrated from one cluster to another.
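A hedged sketch of declaring this structure; the exact constructor forms for naming a cluster and attaching a processor to it are assumptions for illustration, not taken from this changeset:
\begin{cfa}
cluster clus;						$\C{// assumed: new cluster with its own ready queue and default scheduler}$
processor p;						$\C{// kernel thread executing user threads on the current cluster}$
processor q( "worker", clus );		$\C{// assumed form: kernel thread bound to cluster clus}$
\end{cfa}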
No automatic load balancing among clusters is performed by \CFA.
\label{s:RuntimeStructureProcessor}
A virtual processor is implemented by a kernel thread (\eg UNIX process), which is scheduled for execution on a hardware processor by the underlying operating system.
Programs may use more virtual processors than hardware processors.
On a multiprocessor, kernel threads are distributed across the hardware processors resulting in virtual processors executing in parallel.
(It is possible to use affinity to lock a virtual processor onto a particular hardware processor~\cite{affinityLinux, affinityWindows, affinityFreebsd, affinityNetbsd, affinityMacosx}, which is used when caching issues occur or for heterogeneous hardware processors.)
The \CFA runtime attempts to block unused processors and unblock processors as the system load increases;
balancing the workload with processors is difficult because it requires future knowledge, \ie what the application workload will do next.
Preemption occurs on virtual processors rather than user threads, via operating-system interrupts.
Thus virtual processors execute user threads, where preemption frequency applies to a virtual processor, so preemption occurs randomly across the executed user threads.
\subsection{Preemption}
Nondeterministic preemption provides fairness in the presence of long-running threads, and forces concurrent programmers to write more robust programs, rather than relying on sections of code between cooperative-scheduling points being atomic.
A separate reason for not supporting preemption is that it significantly complicates the runtime system.
Preemption is normally handled by setting a count-down timer on each virtual processor.
There are two versions of the \CFA runtime kernel: debug and non-debug.
The debugging version has many runtime checks and internal assertions, \eg stack (non-writable) guard page, and checks for stack overflow whenever context switches occur among coroutines and threads, which catches most stack overflows.
After a program is debugged, the non-debugging version can be used to significantly decrease space and increase performance.
The only note here is that the call stacks of \CFA coroutines are lazily created, therefore without priming the coroutine to force stack creation, the creation cost is artificially low.
\newpage
\begin{multicols}{2}
\lstset{language=CFA,moredelim=**[is][\color{red}]{@}{@},deletedelim=**[is][]{}{}}
One solution is to offer various tuning options, allowing the scheduler to be adjusted to the requirements of the workload.
However, to be truly flexible, a pluggable scheduler is necessary.
Currently, the \CFA pluggable scheduler is too simple to handle complex scheduling, \eg quality of service and real-time, where the scheduler must interact with mutex objects to deal with issues like priority inversion~\cite{Buhr00b}.
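Returning to the count-down preemption timer described earlier in this section: it is ordinarily an interval timer delivering a signal to a virtual processor. A minimal UNIX sketch of that general mechanism follows (illustrative only, not the \CFA kernel's actual implementation, whose signal handling is more involved):
\begin{cfa}
#include <signal.h>
#include <sys/time.h>

static void alarm_handler( int sig ) {
	(void)sig;
	// a real runtime marks the running user thread for a context switch at
	// the next safe point, rather than switching inside the signal handler
}
static void start_preemption( long usec ) {
	struct sigaction sa = { 0 };
	sa.sa_handler = alarm_handler;
	sa.sa_flags = SA_RESTART;
	sigaction( SIGALRM, &sa, 0 );
	struct itimerval t = { { 0, usec }, { 0, usec } };	// interval and initial count-down
	setitimer( ITIMER_REAL, &t, 0 );					// periodic timer => SIGALRM per period
}
\end{cfa}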
\paragraph{Non-Blocking I/O}
\section{Acknowledgements}
The authors would like to recognize the design assistance of Aaron Moss, Rob Schluntz, Andrew Beach and Michael Brooks on the features described in this paper.
Funding for this project has been provided by Huawei Ltd.\ (\url{http://www.huawei.com}).
%, and Peter Buhr is partially funded by the Natural Sciences and Engineering Research Council of Canada.