# Changeset 6d43cc57

Timestamp:
Jun 22, 2018, 1:51:56 PM
Branches:
aaron-thesis, arm-eh, cleanup-dtors, deferred_resn, demangler, jacob/cs343-translation, jenkins-sandbox, master, new-ast, new-ast-unique-expr, new-env, no_list, persistent-indexer, with_gc
Children:
3d56d15b
Parents:
999c700
Message:

more updates

Location:
doc/papers/concurrency
Files:
4 edited

• ## doc/papers/concurrency/Makefile

 r999c700
	 FIGURES = ${addsuffix .tex, \
		monitor \
		ext_monitor \
		int_monitor \
		dependency \

	 PICTURES = ${addsuffix .pstex, \
		monitor \
		ext_monitor \
		system \
		monitor_structs \

		dvips ${Build}/$< -o $@

	-${BASE}.dvi : Makefile ${Build} ${BASE}.out.ps WileyNJD-AMA.bst ${GRAPHS} ${PROGRAMS} ${PICTURES} ${FIGURES} ${SOURCES} \
	-	annex/local.bib ../../bibliography/pl.bib
	+${BASE}.dvi : Makefile ${BASE}.out.ps WileyNJD-AMA.bst ${GRAPHS} ${PROGRAMS} ${PICTURES} ${FIGURES} ${SOURCES} \
	+	annex/local.bib ../../bibliography/pl.bib | ${Build}
		# Must have *.aux file containing citations for bibtex
		if [ ! -r ${basename $@}.aux ] ; then ${LaTeX} ${basename $@}.tex ; fi
	-	${BibTeX} ${Build}/${basename $@}
	+	-${BibTeX} ${Build}/${basename $@}
		# Some citations reference others so run again to resolve these citations
		${LaTeX} ${basename $@}.tex
	-	${BibTeX} ${Build}/${basename $@}
	+	-${BibTeX} ${Build}/${basename $@}
		# Run again to finish citations
		${LaTeX} ${basename $@}.tex

	 ## Define the default recipes.

	-${Build}:
	+${Build} :
		mkdir -p ${Build}

	-${BASE}.out.ps: ${Build}
	+${BASE}.out.ps : | ${Build}
		ln -fs ${Build}/Paper.out.ps .

	-WileyNJD-AMA.bst:
	+WileyNJD-AMA.bst :
		ln -fs ../AMA/AMA-stix/ama/WileyNJD-AMA.bst .

	-%.tex : %.fig ${Build}
	+%.tex : %.fig | ${Build}
		fig2dev -L eepic $< > ${Build}/$@

	-%.ps : %.fig ${Build}
	+%.ps : %.fig | ${Build}
		fig2dev -L ps $< > ${Build}/$@

	-%.pstex : %.fig ${Build}
	+%.pstex : %.fig | ${Build}
		fig2dev -L pstex $< > ${Build}/$@
		fig2dev -L pstex_t -p ${Build}/$@ $< > ${Build}/$@_t
• ## doc/papers/concurrency/Paper.tex

 r999c700 External scheduling allows users to wait for events from other threads without concern for unrelated events occurring. The mechanism can be expressed in terms of control flow, \eg Ada @accept@ or \uC @_Accept@, or in terms of data, \eg Go channels. The previous example shows a simple use of @_Accept@ versus @wait@/@signal@ and its advantages. Note that while other languages often use @accept@/@select@ as the core external-scheduling keyword, \CFA uses @waitfor@ to prevent name collisions with existing socket \textbf{api}s. While both mechanisms have strengths and weaknesses, this project uses a control-flow mechanism to stay consistent with other language semantics. Two challenges specific to \CFA for external scheduling are loose object-definitions (see Section~\ref{s:LooseObjectDefinitions}) and multiple-monitor routines (see Section~\ref{s:Multi-MonitorScheduling}).

For internal scheduling, non-blocking signalling (as in the producer/consumer example) is used when the signaller is providing the cooperation for a waiting thread; the signaller must have acquired monitor locks that are greater than or equal to the number of locks for the waiting thread signalled from the condition queue. {\color{red}In general, the signaller does not know the order of waiting threads, so it must acquire the maximum number of mutex locks for the worst-case waiting thread.} Similarly, for @waitfor( rtn )@, the default semantics is to atomically block the acceptor and release all acquired mutex types in the parameter list, \ie @waitfor( rtn, m1, m2 )@.
The extra challenge is that this dependency graph is effectively post-mortem, but the runtime system needs to build and solve these graphs as the dependencies unfold. Since resolving dependency graphs is a complex and expensive endeavour, this solution is not the preferred one.

\subsubsection{Partial Signalling} \label{partial-sig}

\end{comment}

\subsection{Loose Object Definitions}
\label{s:LooseObjectDefinitions}

In an object-oriented programming language, a class includes an exhaustive list of operations. However, new members can be added via static inheritance or dynamic members, \eg JavaScript~\cite{JavaScript}. Similarly, monitor routines can be added at any time in \CFA, making it less clear for programmers and more difficult to implement.
\begin{cfa}
monitor M {};
void f( M & mutex m );
void g( M & mutex m ) { waitfor( f ); }	$\C{// clear which f}$
void f( M & mutex m, int );	$\C{// different f}$
void h( M & mutex m ) { waitfor( f ); }	$\C{// unclear which f}$
\end{cfa}
Hence, the cfa-code for entering a monitor looks like:
\begin{cfa}
if ( $\textrm{\textit{monitor is free}}$ ) $\LstCommentStyle{// \color{red}enter}$
else if ( $\textrm{\textit{already own monitor}}$ ) $\LstCommentStyle{// \color{red}continue}$
else if ( $\textrm{\textit{monitor accepts me}}$ ) $\LstCommentStyle{// \color{red}enter}$
else $\LstCommentStyle{// \color{red}block}$
\end{cfa}
For the first two conditions, it is easy to implement a check that evaluates the condition in a few instructions. However, a fast check for \emph{monitor accepts me} is much harder to implement depending on the constraints put on the monitors. Figure~\ref{fig:ClassicalMonitor} shows monitors are often expressed as an entry (calling) queue, some acceptor queues, and an urgent stack/queue.
\begin{figure}
\centering
\subfloat[Classical monitor] {
\label{fig:ClassicalMonitor}
{\resizebox{0.45\textwidth}{!}{\input{monitor.pstex_t}}}
}% subfloat
\quad
\subfloat[Bulk acquire monitor] {
\label{fig:BulkMonitor}
{\resizebox{0.45\textwidth}{!}{\input{ext_monitor.pstex_t}}}
}% subfloat
\caption{Monitor Implementation}
\label{f:MonitorImplementation}
\end{figure}
For a fixed (small) number of mutex routines (\eg 128), the accept check reduces to a bitmask of allowed callers, which can be checked with a single instruction. This approach requires a unique dense ordering of routines with a small upper-bound, and the ordering must be consistent across translation units. For object-oriented languages these constraints are common, but \CFA mutex routines can be added in any scope and are only visible in certain translation units, precluding a program-wide dense ordering among mutex routines.

Figure~\ref{fig:BulkMonitor} shows the \CFA monitor implementation. The mutex routine called is associated with each thread on the entry queue, while a list of acceptable routines is kept separately. The accepted list is a variable-sized array of accepted routine pointers, so the single-instruction bitmask comparison is replaced by dereferencing a pointer followed by a linear search.

\begin{comment}
\begin{figure}
\begin{cfa}[caption={Example of nested external scheduling},label={f:nest-ext}]
In the end, the most flexible approach has been chosen since it allows users to write programs that would otherwise be hard to write. This decision is based on the assumption that writing fast but inflexible locks is closer to a solved problem than writing locks that are as flexible as external scheduling in \CFA.
% ======================================================================
% ======================================================================
\end{comment}

\subsection{Multi-Monitor Scheduling}
\label{s:Multi-MonitorScheduling}
% ======================================================================
% ======================================================================

External scheduling, like internal scheduling, becomes significantly more complex when introducing multi-monitor syntax. Even in the simplest possible case, new semantics needs to be established:
\begin{cfa}
monitor M {};
void f( M & mutex m1 );
void g( M & mutex m1, M & mutex m2 ) {
	waitfor( f );	$\C{// pass m1 or m2 to f?}$
}
\end{cfa}
The solution is for the programmer to disambiguate:
\begin{cfa}
waitfor( f, m2 );	$\C{// wait for call to f with argument m2}$
\end{cfa}
Routine @g@ has acquired both locks, so when routine @f@ is called, the lock for monitor @m2@ is passed from @g@ to @f@ (while @g@ still holds lock @m1@). This behaviour can be extended to the multi-monitor @waitfor@ statement as follows.
\begin{cfa}
monitor M {};
void f( M & mutex m1, M & mutex m2 );
void g( M & mutex m1, M & mutex m2 ) {
	waitfor( f, m1, m2 );	$\C{// wait for call to f with arguments m1 and m2}$
}
\end{cfa}
Again, the set of monitors passed to the @waitfor@ statement must be entirely contained in the set of monitors already acquired by the accepting routine; @waitfor@ used in any other context is undefined behaviour.

An important behaviour to note is when a set of monitors only match partially:
\begin{cfa}
mutex struct A {};
mutex struct B {};
void g( A & mutex m1, B & mutex m2 ) { waitfor( f, m1, m2 ); }
A a1, a2;
B b;
void foo() {
	g( a1, b );	// block on accept
}
void bar() {
	f( a2, b );	// fulfill cooperation
}
\end{cfa}
It is also important to note that, for external scheduling, the order of parameters is irrelevant; @waitfor( f, a, b )@ and @waitfor( f, b, a )@ are indistinguishable waiting conditions.

% ======================================================================
% ======================================================================
\subsection{\protect\lstinline|waitfor| Semantics}
% ======================================================================
% ======================================================================

Syntactically, the @waitfor@ statement takes a routine identifier and a set of monitors.
\end{figure}

% ======================================================================
% ======================================================================
\subsection{Waiting For The Destructor}
% ======================================================================
% ======================================================================

An interesting use for the @waitfor@ statement is destructor semantics. Indeed, the @waitfor@ statement can accept any @mutex@ routine, which includes the destructor (see Section~\ref{data}).

% ######     #    ######     #    #       #       ####### #       ###  #####  #     #
% #     #   # #   #     #   # #   #       #       #       #        #  #     # ##   ##
% #     #  #   #  #     #  #   #  #       #       #       #        #  #       # # # #
% ######  #     # ######  #     # #       #       #####   #        #   #####  #  #  #
% #       ####### #   #   ####### #       #       #       #        #        # #     #
% #       #     # #    #  #     # #       #       #       #        #  #     # #     #
% #       #     # #     # #     # ####### ####### ####### ####### ###  #####  #     #
\section{Parallelism}

Historically, computer performance was about processor speeds and instruction counts. However, with heat dissipation being a direct consequence of speed increases, parallelism has become the new source of increased performance~\cite{Sutter05, Sutter05b}. While there are many variations of the presented paradigms, most of these variations do not actually change the guarantees or the semantics; they simply move costs in order to achieve better performance for certain workloads.

\section{Paradigms}

\subsection{User-Level Threads}

A direct improvement on the \textbf{kthread} approach is to use \textbf{uthread}. These threads offer most of the same features that the operating system already provides but can be used on a much larger scale.
Examples of languages that support \textbf{uthread} are Erlang~\cite{Erlang} and \uC~\cite{uC++book}.

\subsection{Fibers: User-Level Threads Without Preemption} \label{fibers}

A popular variant of \textbf{uthread} is what is often referred to as a \textbf{fiber}. However, \textbf{fiber}s do not present meaningful semantic differences from \textbf{uthread}s. An example of a language that uses fibers is Go~\cite{Go}.

\subsection{Jobs and Thread Pools}

An approach on the opposite end of the spectrum is to base parallelism on \textbf{pool}s. Indeed, \textbf{pool}s offer limited flexibility but with the benefit of a simpler user interface. The gold standard of this implementation is Intel's TBB library~\cite{TBB}.

\subsection{Paradigm Performance}

While the choice between the three paradigms listed above may have significant performance implications, it is difficult to pin down the performance implications of choosing a model at the language level. Indeed, in many situations one of these paradigms may show better performance, but it all strongly depends on the workload. Finally, if the units of uninterrupted work are large enough, the paradigm choice is largely amortized by the actual work done.

\section{The \protect\CFA\ Kernel: Processors, Clusters and Threads}\label{kernel}

A \textbf{cfacluster} is a group of \textbf{kthread}s executed in isolation. \textbf{uthread}s are scheduled on the \textbf{kthread}s of a given \textbf{cfacluster}, allowing organization between \textbf{uthread}s and \textbf{kthread}s. It is important that \textbf{kthread}s belonging to the same \textbf{cfacluster} have homogeneous settings, otherwise migrating a \textbf{uthread} from one \textbf{kthread} to another can cause issues. Currently, \CFA only supports one \textbf{cfacluster}, the initial one.

\subsection{Future Work: Machine Setup}\label{machine}

While this was not done in the context of this paper, another important aspect of clusters is affinity.
While many common desktop and laptop PCs have homogeneous CPUs, other devices often have more heterogeneous setups. OS support for CPU affinity is now common~\cite{affinityLinux, affinityWindows, affinityFreebsd, affinityNetbsd, affinityMacosx}, which means it is both possible and desirable for \CFA to offer an abstraction mechanism for portable CPU affinity.

\subsection{Paradigms}\label{cfaparadigms}

Given these building blocks, it is possible to reproduce all three of the popular paradigms. Indeed, \textbf{uthread} is the default paradigm in \CFA.

\section{Behind the Scenes}

There are several challenges specific to \CFA when implementing concurrency. These challenges are a direct result of bulk acquire and loose object definitions. Note that since the major contributions of this paper are extending monitor semantics to bulk acquire and loose object definitions, any challenges not resulting from these characteristics of \CFA are considered solved problems and therefore not discussed.

% ======================================================================
% ======================================================================
\section{Mutex Routines}
% ======================================================================
% ======================================================================

The first step towards the monitor implementation is simple @mutex@ routines.

\end{figure}

\subsection{Details: Interaction with polymorphism}

Depending on the choice of semantics for when monitor locks are acquired, the interaction between monitors and \CFA's concept of polymorphism can be more complex to support. However, entry-point locking solves most of the issues. Furthermore, entry-point locking requires less code generation, since any useful routine is called multiple times but there is only one entry point for many call sites.
% ======================================================================
% ======================================================================
\section{Threading} \label{impl:thread}
% ======================================================================
% ======================================================================

Figure~\ref{fig:system1} shows a high-level picture of the \CFA runtime system with regard to concurrency.

\end{figure}

\subsection{Processors}

Parallelism in \CFA is built around using processors to specify how much parallelism is desired. \CFA processors are object wrappers around kernel threads, specifically @pthread@s in the current implementation of \CFA. Indeed, any parallelism must go through operating-system libraries. Processors internally use coroutines to take advantage of the existing context-switching semantics.

\subsection{Stack Management}

One of the challenges of this system is to reduce the footprint as much as possible. Specifically, all @pthread@s created also have a stack created with them, which should be used as much as possible. To respect C user expectations, the stack of the initial kernel thread, the main stack of the program, is used by the main user thread rather than the main processor, since the main stack can grow very large.

\subsection{Context Switching}

As mentioned in Section~\ref{coroutine}, coroutines are a stepping stone for implementing threading, because they share the same mechanism for context-switching between different stacks. To improve performance and simplicity, context-switching is implemented using the following assumption: all context-switches happen inside a specific routine call. This option is not currently present in \CFA, but the changes required to add it are strictly additive.

\subsection{Preemption} \label{preemption}

Finally, an important aspect of any complete threading system is preemption.
As mentioned in Section~\ref{basics}, preemption introduces an extra degree of uncertainty, which enables users to have multiple threads interleave transparently, rather than having to cooperate among threads for proper scheduling and CPU distribution. Indeed, @sigwait@ can differentiate signals sent from @pthread_sigqueue@ from signals sent from alarms or the kernel.

\subsection{Scheduler}

Finally, an aspect that has not yet been mentioned is the scheduling algorithm. Further discussion on scheduling is presented in Section~\ref{futur:sched}.

% ======================================================================
% ======================================================================
\section{Internal Scheduling} \label{impl:intsched}
% ======================================================================
% ======================================================================

The following figure is the traditional illustration of a monitor (repeated from page~\pageref{fig:ClassicalMonitor} for convenience):

\begin{figure}
\begin{center}
{\resizebox{0.4\textwidth}{!}{\input{monitor.pstex_t}}}
\end{center}
\caption{Traditional illustration of a monitor}
• ## doc/papers/concurrency/figures/ext_monitor.fig

 r999c700 (raw xfig drawing data; diff of the external-scheduling monitor figure: entry queue, arrival order, acceptor/signalled stack, urgent queue, condition queues A and B, shared variables, routine mask, and accepted-routine list)
• ## doc/papers/concurrency/figures/monitor.fig

 r999c700 (raw xfig drawing data; diff of the classical monitor figure: entry queue, condition queues X and Y, acceptor/signalled stack, and urgent queue)