Changeset 6d43cc57 for doc/papers
- Timestamp:
- Jun 22, 2018, 1:51:56 PM
- Branches:
- ADT, aaron-thesis, arm-eh, ast-experimental, cleanup-dtors, deferred_resn, demangler, enum, forall-pointer-decay, jacob/cs343-translation, jenkins-sandbox, master, new-ast, new-ast-unique-expr, new-env, no_list, persistent-indexer, pthread-emulation, qualifiedEnum, with_gc
- Children:
- 3d56d15b
- Parents:
- 999c700
- Location:
- doc/papers/concurrency
- Files:
- 4 edited
doc/papers/concurrency/Makefile
r999c700 r6d43cc57 20 20 21 21 FIGURES = ${addsuffix .tex, \ 22 monitor \23 ext_monitor \24 22 int_monitor \ 25 23 dependency \ … … 27 25 28 26 PICTURES = ${addsuffix .pstex, \ 27 monitor \ 28 ext_monitor \ 29 29 system \ 30 30 monitor_structs \ … … 59 59 dvips ${Build}/$< -o $@ 60 60 61 ${BASE}.dvi : Makefile ${B uild} ${BASE}.out.ps WileyNJD-AMA.bst ${GRAPHS} ${PROGRAMS} ${PICTURES} ${FIGURES} ${SOURCES} \62 annex/local.bib ../../bibliography/pl.bib 61 ${BASE}.dvi : Makefile ${BASE}.out.ps WileyNJD-AMA.bst ${GRAPHS} ${PROGRAMS} ${PICTURES} ${FIGURES} ${SOURCES} \ 62 annex/local.bib ../../bibliography/pl.bib | ${Build} 63 63 # Must have *.aux file containing citations for bibtex 64 64 if [ ! -r ${basename $@}.aux ] ; then ${LaTeX} ${basename $@}.tex ; fi 65 ${BibTeX} ${Build}/${basename $@}65 -${BibTeX} ${Build}/${basename $@} 66 66 # Some citations reference others so run again to resolve these citations 67 67 ${LaTeX} ${basename $@}.tex 68 ${BibTeX} ${Build}/${basename $@}68 -${BibTeX} ${Build}/${basename $@} 69 69 # Run again to finish citations 70 70 ${LaTeX} ${basename $@}.tex … … 72 72 ## Define the default recipes. 73 73 74 ${Build} :74 ${Build} : 75 75 mkdir -p ${Build} 76 76 77 ${BASE}.out.ps :${Build}77 ${BASE}.out.ps : | ${Build} 78 78 ln -fs ${Build}/Paper.out.ps . 79 79 80 WileyNJD-AMA.bst :80 WileyNJD-AMA.bst : 81 81 ln -fs ../AMA/AMA-stix/ama/WileyNJD-AMA.bst . 82 82 83 %.tex : %.fig ${Build}83 %.tex : %.fig | ${Build} 84 84 fig2dev -L eepic $< > ${Build}/$@ 85 85 86 %.ps : %.fig ${Build}86 %.ps : %.fig | ${Build} 87 87 fig2dev -L ps $< > ${Build}/$@ 88 88 89 %.pstex : %.fig ${Build}89 %.pstex : %.fig | ${Build} 90 90 fig2dev -L pstex $< > ${Build}/$@ 91 91 fig2dev -L pstex_t -p ${Build}/$@ $< > ${Build}/$@_t -
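The Makefile hunk above makes two systematic changes: the ${Build} directory moves to the right of a `|` in each prerequisite list, turning it into an order-only prerequisite so the directory's ever-changing timestamp no longer forces rebuilds, and the BibTeX recipe lines gain a `-` prefix so a non-zero exit status does not abort the build. A minimal sketch of the two idioms, with hypothetical target names rather than the full rule set:

```make
# Minimal sketch (hypothetical names) of the two idioms in the hunk above.

Build = build
BASE  = Paper

# Right of '|' is order-only: ${Build} must exist before the recipe runs,
# but its directory timestamp never makes ${BASE}.dvi out of date.
${BASE}.dvi : ${BASE}.tex | ${Build}
	latex -output-directory=${Build} ${BASE}.tex
	-bibtex ${Build}/${BASE}   # '-' ignores bibtex's exit status
	latex -output-directory=${Build} ${BASE}.tex

${Build} :
	mkdir -p ${Build}
```

Without the order-only prerequisite, every file added to or removed from ${Build} updates the directory's modification time and spuriously re-triggers the LaTeX rules.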
doc/papers/concurrency/Paper.tex
r999c700 r6d43cc57 1537 1537 External scheduling allows users to wait for events from other threads without concern of unrelated events occurring. 1538 1538 The mechanism can be done in terms of control flow, \eg Ada @accept@ or \uC @_Accept@, or in terms of data, \eg Go channels. 1539 Of course, both of these paradigms have their own strengths and weaknesses, but for this project, control-flow semantics was chosen to stay consistent with the rest of the languages semantics. 1540 Two challenges specific to \CFA arise when trying to add external scheduling with loose object definitions and multiple-monitor routines. 1541 The previous example shows a simple use @_Accept@ versus @wait@/@signal@ and its advantages. 1542 Note that while other languages often use @accept@/@select@ as the core external scheduling keyword, \CFA uses @waitfor@ to prevent name collisions with existing socket \textbf{api}s. 1539 While both mechanisms have strengths and weaknesses, this project uses a control-flow mechanism to stay consistent with other language semantics. 1540 Two challenges specific to \CFA for external scheduling are loose object-definitions (see Section~\ref{s:LooseObjectDefinitions}) and multiple-monitor routines (see Section~\ref{s:Multi-MonitorScheduling}). 1543 1541 1544 1542 For internal scheduling, non-blocking signalling (as in the producer/consumer example) is used when the signaller is providing the cooperation for a waiting thread; … … 1654 1652 \end{cfa} 1655 1653 must have acquired monitor locks that are greater than or equal to the number of locks for the waiting thread signalled from the condition queue. 
1656 {\color{red}In general, the signaller does not know the order of waiting threads, so in general, it must acquire the maximum number of mutex locks for the worst-case waiting thread.}1657 1654 1658 1655 Similarly, for @waitfor( rtn )@, the default semantics is to atomically block the acceptor and release all acquired mutex types in the parameter list, \ie @waitfor( rtn, m1, m2 )@. … … 1854 1851 The extra challenge is that this dependency graph is effectively post-mortem, but the runtime system needs to be able to build and solve these graphs as the dependencies unfold. 1855 1852 Resolving dependency graphs being a complex and expensive endeavour, this solution is not the preferred one. 1856 1857 \subsubsection{Partial Signalling} \label{partial-sig}1858 1853 \end{comment} 1859 1854 … … 1932 1927 1933 1928 \subsection{Loose Object Definitions} 1934 1935 In \uC, a monitor class declaration includes an exhaustive list of monitor operations. 1936 Since \CFA is not object oriented, monitors become both more difficult to implement and less clear for a user: 1937 1938 \begin{cfa} 1939 monitor A {}; 1940 1941 void f(A & mutex a); 1942 void g(A & mutex a) { 1943 waitfor(f); // Obvious which f() to wait for 1944 } 1945 1946 void f(A & mutex a, int); // New different F added in scope 1947 void h(A & mutex a) { 1948 waitfor(f); // Less obvious which f() to wait for 1949 } 1950 \end{cfa} 1951 1952 Furthermore, external scheduling is an example where implementation constraints become visible from the interface. 1953 Here is the cfa-code for the entering phase of a monitor: 1954 \begin{center} 1955 \begin{tabular}{l} 1956 \begin{cfa} 1957 if monitor is free 1958 enter 1959 elif already own the monitor 1960 continue 1961 elif monitor accepts me 1962 enter 1963 else 1964 block 1965 \end{cfa} 1966 \end{tabular} 1967 \end{center} 1929 \label{s:LooseObjectDefinitions} 1930 1931 In an object-oriented programming-language, a class includes an exhaustive list of operations. 
1932 However, new members can be added via static inheritance or dynamic members, \eg JavaScript~\cite{JavaScript}. 1933 Similarly, monitor routines can be added at any time in \CFA, making it less clear for programmers and more difficult to implement. 1934 \begin{cfa} 1935 monitor M {}; 1936 void `f`( M & mutex m ); 1937 void g( M & mutex m ) { waitfor( `f` ); } $\C{// clear which f}$ 1938 void `f`( M & mutex m, int ); $\C{// different f}$ 1939 void h( M & mutex m ) { waitfor( `f` ); } $\C{// unclear which f}$ 1940 \end{cfa} 1941 Hence, the cfa-code for entering a monitor looks like: 1942 \begin{cfa} 1943 if ( $\textrm{\textit{monitor is free}}$ ) $\LstCommentStyle{// \color{red}enter}$ 1944 else if ( $\textrm{\textit{already own monitor}}$ ) $\LstCommentStyle{// \color{red}continue}$ 1945 else if ( $\textrm{\textit{monitor accepts me}}$ ) $\LstCommentStyle{// \color{red}enter}$ 1946 else $\LstCommentStyle{// \color{red}block}$ 1947 \end{cfa} 
1971 1951 1972 1952 \begin{figure} 1973 1953 \centering 1974 \subfloat[Classical Monitor] {1954 \subfloat[Classical monitor] { 1975 1955 \label{fig:ClassicalMonitor} 1976 {\resizebox{0.45\textwidth}{!}{\input{monitor }}}1956 {\resizebox{0.45\textwidth}{!}{\input{monitor.pstex_t}}} 1977 1957 }% subfloat 1978 \q quad1979 \subfloat[ bulk acquire Monitor] {1958 \quad 1959 \subfloat[Bulk acquire monitor] { 1980 1960 \label{fig:BulkMonitor} 1981 {\resizebox{0.45\textwidth}{!}{\input{ext_monitor }}}1961 {\resizebox{0.45\textwidth}{!}{\input{ext_monitor.pstex_t}}} 1982 1962 }% subfloat 1983 \caption{External Scheduling Monitor} 1963 \caption{Monitor Implementation} 1964 \label{f:MonitorImplementation} 1984 1965 \end{figure} 1985 1966 1986 There are other alternatives to these pictures, but in the case of the left picture, implementing a fast accept check is relatively easy. 1987 Restricted to a fixed number of mutex members, N, the accept check reduces to updating a bitmask when the acceptor queue changes, a check that executes in a single instruction even with a fairly large number (\eg 128) of mutex members. 1988 This approach requires a unique dense ordering of routines with an upper-bound and that ordering must be consistent across translation units. 1989 For OO languages these constraints are common, since objects only offer adding member routines consistently across translation units via inheritance. 1990 However, in \CFA users can extend objects with mutex routines that are only visible in certain translation unit. 1991 This means that establishing a program-wide dense-ordering among mutex routines can only be done in the program linking phase, and still could have issues when using dynamically shared objects. 1992 1993 The alternative is to alter the implementation as in Figure~\ref{fig:BulkMonitor}. 1994 Here, the mutex routine called is associated with a thread on the entry queue while a list of acceptable routines is kept separate. 
1995 Generating a mask dynamically means that the storage for the mask information can vary between calls to @waitfor@, allowing for more flexibility and extensions. 1996 Storing an array of accepted routine pointers replaces the single instruction bitmask comparison with dereferencing a pointer followed by a linear search. 1997 Furthermore, supporting nested external scheduling (\eg listing \ref{f:nest-ext}) may now require additional searches for the @waitfor@ statement to check if a routine is already queued. 1998 1967 For a fixed (small) number of mutex routines (\eg 128), the accept check reduces to a bitmask of allowed callers, which can be checked with a single instruction. 1968 This approach requires a unique dense ordering of routines with a small upper-bound and the ordering must be consistent across translation units. 1969 For object-oriented languages these constraints are common, but \CFA mutex routines can be added in any scope and are only visible in certain translation units, precluding program-wide dense-ordering among mutex routines. 1970 1971 Figure~\ref{fig:BulkMonitor} shows the \CFA monitor implementation. 1972 The mutex routine called is associated with each thread on the entry queue, while a list of acceptable routines is kept separately. 1973 The accepted list is a variable-sized array of accepted routine pointers, so the single instruction bitmask comparison is replaced by dereferencing a pointer followed by a linear search. 1974 1975 \begin{comment} 1999 1976 \begin{figure} 2000 1977 \begin{cfa}[caption={Example of nested external scheduling},label={f:nest-ext}] … … 2020 1997 In the end, the most flexible approach has been chosen since it allows users to write programs that would otherwise be hard to write. 2021 1998 This decision is based on the assumption that writing fast but inflexible locks is closer to a solved problem than writing locks that are as flexible as external scheduling in \CFA. 
2022 2023 % ====================================================================== 2024 % ====================================================================== 1999 \end{comment} 2000 2001 2025 2002 \subsection{Multi-Monitor Scheduling} 2026 % ====================================================================== 2027 % ====================================================================== 2003 \label{s:Multi-MonitorScheduling} 2028 2004 2029 2005 External scheduling, like internal scheduling, becomes significantly more complex when introducing multi-monitor syntax. 2030 Even in the simplest possible case, somenew semantics needs to be established:2006 Even in the simplest possible case, new semantics needs to be established: 2031 2007 \begin{cfa} 2032 2008 monitor M {}; 2033 2034 void f(M & mutex a); 2035 2036 void g(M & mutex b, M & mutex c) { 2037 waitfor(f); // two monitors M => unknown which to pass to f(M & mutex) 2038 } 2039 \end{cfa} 2040 The obvious solution is to specify the correct monitor as follows: 2041 2009 void f( M & mutex m1 ); 2010 void g( M & mutex m1, M & mutex m2 ) { 2011 waitfor( f ); $\C{// pass m1 or m2 to f?}$ 2012 } 2013 \end{cfa} 2014 The solution is for the programmer to disambiguate: 2015 \begin{cfa} 2016 waitfor( f, m2 ); $\C{// wait for call to f with argument m2}$ 2017 \end{cfa} 2018 Routine @g@ has acquired both locks, so when routine @f@ is called, the lock for monitor @m2@ is passed from @g@ to @f@ (while @g@ still holds lock @m1@). 2019 This behaviour can be extended to the multi-monitor @waitfor@ statement. 2042 2020 \begin{cfa} 2043 2021 monitor M {}; 2044 2045 void f(M & mutex a); 2046 2047 void g(M & mutex a, M & mutex b) { 2048 // wait for call to f with argument b 2049 waitfor(f, b); 2050 } 2051 \end{cfa} 2052 This syntax is unambiguous. 2053 Both locks are acquired and kept by @g@. 2054 When routine @f@ is called, the lock for monitor @b@ is temporarily transferred from @g@ to @f@ (while @g@ still holds lock @a@). 
2055 This behaviour can be extended to the multi-monitor @waitfor@ statement as follows. 2056 2057 \begin{cfa} 2058 monitor M {}; 2059 2060 void f(M & mutex a, M & mutex b); 2061 2062 void g(M & mutex a, M & mutex b) { 2063 // wait for call to f with arguments a and b 2064 waitfor(f, a, b); 2065 } 2066 \end{cfa} 2067 2068 Note that the set of monitors passed to the @waitfor@ statement must be entirely contained in the set of monitors already acquired in the routine. @waitfor@ used in any other context is undefined behaviour. 2022 void f( M & mutex m1, M & mutex m2 ); 2023 void g( M & mutex m1, M & mutex m2 ) { 2024 waitfor( f, m1, m2 ); $\C{// wait for call to f with arguments m1 and m2}$ 2025 } 2026 \end{cfa} 2027 Again, the set of monitors passed to the @waitfor@ statement must be entirely contained in the set of monitors already acquired by the accepting routine. 2070 2029 2071 2030 An important behaviour to note is when a set of monitors only matches partially: 2072 2031 \begin{cfa} 2073 2032 mutex struct A {}; 2074 2075 2033 mutex struct B {}; 2076 2077 void g(A & mutex a, B & mutex b) { 2078 waitfor(f, a, b); 2079 } 2080 2033 void g( A & mutex m1, B & mutex m2 ) { 2034 waitfor( f, m1, m2 ); 2035 } 2081 2036 A a1, a2; 2082 2037 B b; 2083 2084 2038 void foo() { 2085 g(a1, b); // block on accept 2086 } 2087 2039 g( a1, b ); // block on accept 2040 } 2088 2041 void bar() { 2089 f( a2, b); // fulfill cooperation 2042 f( a2, b ); // fulfill cooperation 2090 2043 } 2091 2044 \end{cfa} … … 2094 2047 It is also important to note that in the case of external scheduling the order of parameters is irrelevant; @waitfor(f,a,b)@ and @waitfor(f,b,a)@ are indistinguishable waiting conditions. 
2095 2048 2096 % ====================================================================== 2097 % ====================================================================== 2049 2098 2050 \subsection{\protect\lstinline|waitfor| Semantics} 2099 % ======================================================================2100 % ======================================================================2101 2051 2102 2052 Syntactically, the @waitfor@ statement takes a routine identifier and a set of monitors. … … 2197 2147 \end{figure} 2198 2148 2199 % ====================================================================== 2200 % ====================================================================== 2149 2201 2150 \subsection{Waiting For The Destructor} 2202 % ====================================================================== 2203 % ====================================================================== 2151 2204 2152 An interesting use for the @waitfor@ statement is destructor semantics. 2205 2153 Indeed, the @waitfor@ statement can accept any @mutex@ routine, which includes the destructor (see section \ref{data}). … … 2228 2176 2229 2177 2230 % ###### # ###### # # # ####### # ### ##### # #2231 % # # # # # # # # # # # # # # # ## ##2232 % # # # # # # # # # # # # # # # # # #2233 % ###### # # ###### # # # # ##### # # ##### # # #2234 % # ####### # # ####### # # # # # # # #2235 % # # # # # # # # # # # # # # # #2236 % # # # # # # # ####### ####### ####### ####### ### ##### # #2237 2178 \section{Parallelism} 2179 2238 2180 Historically, computer performance was about processor speeds and instruction counts. 2239 2181 However, with heat dissipation being a direct consequence of speed increase, parallelism has become the new source for increased performance~\cite{Sutter05, Sutter05b}. 
… … 2245 2187 While there are many variations of the presented paradigms, most of these variations do not actually change the guarantees or the semantics; they simply move costs in order to achieve better performance for certain workloads. 2246 2188 2189 2247 2190 \section{Paradigms} 2191 2192 2248 2193 \subsection{User-Level Threads} 2194 2249 2195 A direct improvement on the \textbf{kthread} approach is to use \textbf{uthread}. 2250 2196 These threads offer most of the same features that the operating system already provides but can be used on a much larger scale. … … 2255 2201 Examples of languages that support \textbf{uthread} are Erlang~\cite{Erlang} and \uC~\cite{uC++book}. 2256 2202 2203 2257 2204 \subsection{Fibers : User-Level Threads Without Preemption} \label{fibers} 2205 2258 2206 A popular variant of \textbf{uthread} is what is often referred to as \textbf{fiber}. 2259 2207 However, \textbf{fiber} do not present meaningful semantic differences with \textbf{uthread}. … … 2264 2212 An example of a language that uses fibers is Go~\cite{Go}. 2265 2213 2214 2266 2215 \subsection{Jobs and Thread Pools} 2216 2267 2217 An approach on the opposite end of the spectrum is to base parallelism on \textbf{pool}. 2268 2218 Indeed, \textbf{pool} offer limited flexibility but at the benefit of a simpler user interface. … … 2275 2225 The gold standard of this implementation is Intel's TBB library~\cite{TBB}. 2276 2226 2227 2277 2228 \subsection{Paradigm Performance} 2229 2278 2230 While the choice between the three paradigms listed above may have significant performance implications, it is difficult to pin down the performance implications of choosing a model at the language level. 2279 2231 Indeed, in many situations one of these paradigms may show better performance but it all strongly depends on the workload. … … 2283 2235 Finally, if the units of uninterrupted work are large enough, the paradigm choice is largely amortized by the actual work done. 
2284 2236 2237 2285 2238 \section{The \protect\CFA\ Kernel : Processors, Clusters and Threads}\label{kernel} 2239 2286 2240 A \textbf{cfacluster} is a group of \textbf{kthread} executed in isolation. \textbf{uthread} are scheduled on the \textbf{kthread} of a given \textbf{cfacluster}, allowing organization between \textbf{uthread} and \textbf{kthread}. 2287 2241 It is important that \textbf{kthread} belonging to a same \textbf{cfacluster} have homogeneous settings, otherwise migrating a \textbf{uthread} from one \textbf{kthread} to the other can cause issues. … … 2291 2245 Currently \CFA only supports one \textbf{cfacluster}, the initial one. 2292 2246 2247 2293 2248 \subsection{Future Work: Machine Setup}\label{machine} 2249 2294 2250 While this was not done in the context of this paper, another important aspect of clusters is affinity. 2295 2251 While many common desktop and laptop PCs have homogeneous CPUs, other devices often have more heterogeneous setups. … … 2297 2253 OS support for CPU affinity is now common~\cite{affinityLinux, affinityWindows, affinityFreebsd, affinityNetbsd, affinityMacosx}, which means it is both possible and desirable for \CFA to offer an abstraction mechanism for portable CPU affinity. 2298 2254 2255 2299 2256 \subsection{Paradigms}\label{cfaparadigms} 2257 2300 2258 Given these building blocks, it is possible to reproduce all three of the popular paradigms. 2301 2259 Indeed, \textbf{uthread} is the default paradigm in \CFA. … … 2305 2263 2306 2264 2307 2308 2265 \section{Behind the Scenes} 2266 2309 2267 There are several challenges specific to \CFA when implementing concurrency. 2310 2268 These challenges are a direct result of bulk acquire and loose object definitions. 
… … 2323 2281 Note that since the major contributions of this paper are extending monitor semantics to bulk acquire and loose object definitions, any challenges that are not resulting of these characteristics of \CFA are considered as solved problems and therefore not discussed. 2324 2282 2325 % ====================================================================== 2326 % ====================================================================== 2283 2327 2284 \section{Mutex Routines} 2328 % ======================================================================2329 % ======================================================================2330 2285 2331 2286 The first step towards the monitor implementation is simple @mutex@ routines. … … 2362 2317 \end{figure} 2363 2318 2319 2364 2320 \subsection{Details: Interaction with polymorphism} 2321 2365 2322 Depending on the choice of semantics for when monitor locks are acquired, interaction between monitors and \CFA's concept of polymorphism can be more complex to support. 2366 2323 However, it is shown that entry-point locking solves most of the issues. … … 2442 2399 Furthermore, entry-point locking requires less code generation since any useful routine is called multiple times but there is only one entry point for many call sites. 2443 2400 2444 % ====================================================================== 2445 % ====================================================================== 2401 2446 2402 \section{Threading} \label{impl:thread} 2447 % ======================================================================2448 % ======================================================================2449 2403 2450 2404 Figure \ref{fig:system1} shows a high-level picture if the \CFA runtime system in regards to concurrency. … … 2459 2413 \end{figure} 2460 2414 2415 2461 2416 \subsection{Processors} 2417 2462 2418 Parallelism in \CFA is built around using processors to specify how much parallelism is desired. 
\CFA processors are object wrappers around kernel threads, specifically @pthread@s in the current implementation of \CFA. 2463 2419 Indeed, any parallelism must go through operating-system libraries. … … 2467 2423 Processors internally use coroutines to take advantage of the existing context-switching semantics. 2468 2424 2425 2469 2426 \subsection{Stack Management} 2427 2470 2428 One of the challenges of this system is to reduce the footprint as much as possible. 2471 2429 Specifically, all @pthread@s created also have a stack created with them, which should be used as much as possible. … … 2474 2432 In order to respect C user expectations, the stack of the initial kernel thread, the main stack of the program, is used by the main user thread rather than the main processor, which can grow very large. 2475 2433 2434 2476 2435 \subsection{Context Switching} 2436 2477 2437 As mentioned in section \ref{coroutine}, coroutines are a stepping stone for implementing threading, because they share the same mechanism for context-switching between different stacks. 2478 2438 To improve performance and simplicity, context-switching is implemented using the following assumption: all context-switches happen inside a specific routine call. … … 2488 2448 This option is not currently present in \CFA, but the changes required to add it are strictly additive. 2489 2449 2450 2490 2451 \subsection{Preemption} \label{preemption} 2452 2491 2453 Finally, an important aspect for any complete threading system is preemption. 2492 2454 As mentioned in section \ref{basics}, preemption introduces an extra degree of uncertainty, which enables users to have multiple threads interleave transparently, rather than having to cooperate among threads for proper scheduling and CPU distribution. … … 2522 2484 Indeed, @sigwait@ can differentiate signals sent from @pthread_sigqueue@ from signals sent from alarms or the kernel. 
2523 2485 2486 2524 2487 \subsection{Scheduler} 2525 2488 Finally, an aspect that was not mentioned yet is the scheduling algorithm. … … 2527 2490 Further discussion on scheduling is present in section \ref{futur:sched}. 2528 2491 2529 % ====================================================================== 2530 % ====================================================================== 2492 2531 2493 \section{Internal Scheduling} \label{impl:intsched} 2532 % ====================================================================== 2533 % ====================================================================== 2494 2534 2495 The following figure is the traditional illustration of a monitor (repeated from page~\pageref{fig:ClassicalMonitor} for convenience): 2535 2496 2536 2497 \begin{figure} 2537 2498 \begin{center} 2538 {\resizebox{0.4\textwidth}{!}{\input{monitor }}}2499 {\resizebox{0.4\textwidth}{!}{\input{monitor.pstex_t}}} 2539 2500 \end{center} 2540 2501 \caption{Traditional illustration of a monitor} -
doc/papers/concurrency/figures/ext_monitor.fig
r999c700 r6d43cc57 8 8 -2 9 9 1200 2 10 5 1 0 1 -1 -1 0 0 -1 0.000 0 1 0 0 3150.000 3450.000 3150 3150 2850 3450 3150375011 5 1 0 1 -1 -1 0 0 -1 0.000 0 1 0 0 3150.000 4350.000 3150 4050 2850 4350 3150465012 6 5850 1950 6150225013 1 3 0 1 -1 -1 0 0 -1 0.000 1 0.0000 6000 2100 105 105 6000 2100 6105220514 4 1 -1 0 0 0 10 0.0000 2 105 90 60002160 d\00110 5 1 0 1 -1 -1 0 0 -1 0.000 0 1 0 0 1575.000 3450.000 1575 3150 1275 3450 1575 3750 11 5 1 0 1 -1 -1 0 0 -1 0.000 0 1 0 0 1575.000 4350.000 1575 4050 1275 4350 1575 4650 12 6 4275 1950 4575 2250 13 1 3 0 1 -1 -1 0 0 -1 0.000 1 0.0000 4425 2100 105 105 4425 2100 4530 2205 14 4 1 -1 0 0 0 10 0.0000 2 105 90 4425 2160 d\001 15 15 -6 16 6 5100 2100 5400 240017 1 3 0 1 -1 -1 1 0 4 0.000 1 0.0000 5250 2250 105 105 5250 2250 5355 225018 4 1 -1 0 0 0 10 0.0000 2 105 120 5250 2295 X\00116 6 4275 1650 4575 1950 17 1 3 0 1 -1 -1 0 0 -1 0.000 1 0.0000 4425 1800 105 105 4425 1800 4530 1905 18 4 1 -1 0 0 0 10 0.0000 2 105 90 4425 1860 b\001 19 19 -6 20 6 5100 1800 5400 2100 21 1 3 0 1 -1 -1 1 0 4 0.000 1 0.0000 5250 1950 105 105 5250 1950 5355 1950 22 4 1 -1 0 0 0 10 0.0000 2 105 120 5250 2010 Y\001 20 6 1495 5445 5700 5655 21 1 3 0 1 -1 -1 0 0 20 0.000 1 0.0000 1575 5550 80 80 1575 5550 1655 5630 22 1 3 0 1 -1 -1 0 0 -1 0.000 1 0.0000 2925 5550 105 105 2925 5550 3030 5655 23 1 3 0 1 -1 -1 0 0 4 0.000 1 0.0000 4425 5550 105 105 4425 5550 4530 5655 24 4 0 -1 0 0 0 12 0.0000 2 135 1035 3150 5625 blocked task\001 25 4 0 -1 0 0 0 12 0.0000 2 135 870 1725 5625 active task\001 26 4 0 -1 0 0 0 12 0.0000 2 135 1050 4650 5625 routine mask\001 23 27 -6 24 6 5850 1650 6150 1950 25 1 3 0 1 -1 -1 0 0 -1 0.000 1 0.0000 6000 1800 105 105 6000 1800 6105 1905 26 4 1 -1 0 0 0 10 0.0000 2 105 90 6000 1860 b\001 28 6 3525 1800 3825 2400 29 1 3 0 1 -1 -1 1 0 4 0.000 1 0.0000 3675 1950 105 105 3675 1950 3780 1950 30 2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5 31 3525 1800 3825 1800 3825 2400 3525 2400 3525 1800 32 4 1 4 0 0 0 10 0.0000 2 105 120 3675 2010 
Y\001 27 33 -6 28 6 3070 5445 7275 5655 29 1 3 0 1 -1 -1 0 0 20 0.000 1 0.0000 3150 5550 80 80 3150 5550 3230 5630 30 1 3 0 1 -1 -1 0 0 -1 0.000 1 0.0000 4500 5550 105 105 4500 5550 4605 5655 31 1 3 0 1 -1 -1 0 0 4 0.000 1 0.0000 6000 5550 105 105 6000 5550 6105 5655 32 4 0 -1 0 0 0 12 0.0000 2 135 1035 4725 5625 blocked task\001 33 4 0 -1 0 0 0 12 0.0000 2 135 870 3300 5625 active task\001 34 4 0 -1 0 0 0 12 0.0000 2 135 1050 6225 5625 routine mask\001 34 6 3525 2100 3825 2400 35 1 3 0 1 -1 -1 1 0 4 0.000 1 0.0000 3675 2250 105 105 3675 2250 3780 2250 36 4 1 4 0 0 0 10 0.0000 2 105 120 3675 2295 X\001 35 37 -6 36 1 3 0 1 -1 -1 0 0 -1 0.000 1 0.0000 3300 3600 105 105 3300 3600 3405370537 1 3 0 1 -1 -1 0 0 -1 0.000 1 0.0000 3600 3600 105 105 3600 3600 3705370538 1 3 0 1 -1 -1 0 0 -1 0.000 1 0.0000 6600 3900 105 105 6600 3900 6705400539 1 3 0 1 -1 -1 0 0 -1 0.000 1 0.0000 6900 3900 105 105 6900 3900 7005400540 1 3 0 1 -1 -1 0 0 -1 0.000 1 0.0000 6000 2700 105 105 6000 2700 6105280541 1 3 0 1 -1 -1 0 0 -1 0.000 1 0.0000 6000 2400 105 105 6000 2400 6105250542 1 3 0 1 -1 -1 0 0 20 0.000 1 0.0000 5100 4575 80 80 5100 4575 5180465538 1 3 0 1 -1 -1 0 0 -1 0.000 1 0.0000 1725 3600 105 105 1725 3600 1830 3705 39 1 3 0 1 -1 -1 0 0 -1 0.000 1 0.0000 2025 3600 105 105 2025 3600 2130 3705 40 1 3 0 1 -1 -1 0 0 -1 0.000 1 0.0000 5025 3900 105 105 5025 3900 5130 4005 41 1 3 0 1 -1 -1 0 0 -1 0.000 1 0.0000 5325 3900 105 105 5325 3900 5430 4005 42 1 3 0 1 -1 -1 0 0 -1 0.000 1 0.0000 4425 2700 105 105 4425 2700 4530 2805 43 1 3 0 1 -1 -1 0 0 -1 0.000 1 0.0000 4425 2400 105 105 4425 2400 4530 2505 44 1 3 0 1 -1 -1 0 0 20 0.000 1 0.0000 3525 4575 80 80 3525 4575 3605 4655 43 45 2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5 44 4050 2925 5475 2925 5475 3225 4050 3225 4050292546 2475 2925 3900 2925 3900 3225 2475 3225 2475 2925 45 47 2 1 0 1 -1 -1 0 0 -1 0.000 0 0 -1 0 0 4 46 3150 3750 3750 3750 3750 4050 3150405048 1575 3750 2175 3750 2175 4050 1575 4050 47 49 2 1 0 1 -1 -1 0 0 -1 0.000 0 0 -1 
0 0 3 48 3150 3450 3750 3450 3900367550 1575 3450 2175 3450 2325 3675 49 51 2 1 0 1 -1 -1 0 0 -1 0.000 0 0 -1 0 0 2 50 3750 3150 3600337552 2175 3150 2025 3375 51 53 2 1 0 1 -1 -1 0 0 -1 0.000 0 0 -1 0 0 3 52 3150 4350 3750 4350 3900457554 1575 4350 2175 4350 2325 4575 53 55 2 1 0 1 -1 -1 0 0 -1 0.000 0 0 -1 0 0 2 54 3750 4050 3600427556 2175 4050 2025 4275 55 57 2 1 0 1 -1 -1 0 0 -1 0.000 0 0 -1 0 0 4 56 3150 4650 3750 4650 3750 4950 4950495058 1575 4650 2175 4650 2175 4950 3375 4950 57 59 2 1 0 1 -1 -1 0 0 -1 0.000 0 0 -1 0 0 2 58 6450 3750 6300397560 4875 3750 4725 3975 59 61 2 1 0 1 -1 -1 0 0 -1 0.000 0 0 -1 0 0 2 60 4950 4950 5175510062 3375 4950 3600 5100 61 63 2 1 0 1 -1 -1 0 0 -1 0.000 0 0 -1 0 0 9 62 5250 4950 6450 4950 6450 4050 7050 4050 7050 3750 6450375063 6450 2850 6150 2850 6150165064 3675 4950 4875 4950 4875 4050 5475 4050 5475 3750 4875 3750 65 4875 2850 4575 2850 4575 1650 64 66 2 2 1 1 -1 -1 0 0 -1 4.000 0 0 0 0 0 5 65 5850 4200 5850 3300 4350 3300 4350 4200 5850420066 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 1267 4275 4200 4275 3300 2775 3300 2775 4200 4275 4200 68 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2 67 69 1 1 1.00 60.00 120.00 68 7 1 1.00 60.00 120.00 69 5250 3150 5250 2400 70 3675 3075 3675 2400 71 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2 72 4125 2850 4575 3000 70 73 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5 71 3150 3150 3750 3150 3750 2850 5700 2850 5700 1650 72 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2 73 5700 2850 6150 3000 74 2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5 75 5100 1800 5400 1800 5400 2400 5100 2400 5100 1800 76 4 1 -1 0 0 0 10 0.0000 2 75 75 6000 2745 a\001 77 4 1 -1 0 0 0 10 0.0000 2 75 75 6000 2445 c\001 78 4 1 -1 0 0 0 12 0.0000 2 135 315 5100 5325 exit\001 79 4 1 -1 0 0 0 12 0.0000 2 135 135 3300 3075 A\001 80 4 1 -1 0 0 0 12 0.0000 2 135 795 3300 4875 condition\001 81 4 1 -1 0 0 0 12 0.0000 2 135 135 3300 5100 B\001 82 4 0 -1 0 0 0 12 0.0000 2 135 420 6600 3675 stack\001 83 4 0 -1 0 0 0 12 0.0000 2 180 750 6600 3225 acceptor/\001 
84 4 0 -1 0 0 0 12 0.0000 2 180 750 6600 3450 signalled\001 85 4 1 -1 0 0 0 12 0.0000 2 135 795 3300 2850 condition\001 86 4 1 -1 0 0 0 12 0.0000 2 165 420 6000 1350 entry\001 87 4 1 -1 0 0 0 12 0.0000 2 135 495 6000 1575 queue\001 88 4 0 -1 0 0 0 12 0.0000 2 135 525 6300 2400 arrival\001 89 4 0 -1 0 0 0 12 0.0000 2 135 630 6300 2175 order of\001 90 4 1 -1 0 0 0 12 0.0000 2 135 525 5100 3675 shared\001 91 4 1 -1 0 0 0 12 0.0000 2 135 735 5100 3975 variables\001 92 4 0 0 50 -1 0 11 0.0000 2 165 855 4275 3150 Acceptables\001 93 4 0 0 50 -1 0 11 0.0000 2 120 165 5775 2700 W\001 94 4 0 0 50 -1 0 11 0.0000 2 120 135 5775 2400 X\001 95 4 0 0 50 -1 0 11 0.0000 2 120 105 5775 2100 Z\001 96 4 0 0 50 -1 0 11 0.0000 2 120 135 5775 1800 Y\001 74 1575 3150 2175 3150 2175 2850 4125 2850 4125 1650 75 4 1 -1 0 0 0 10 0.0000 2 75 75 4425 2745 a\001 76 4 1 -1 0 0 0 10 0.0000 2 75 75 4425 2445 c\001 77 4 1 -1 0 0 0 12 0.0000 2 135 315 3525 5325 exit\001 78 4 1 -1 0 0 0 12 0.0000 2 135 135 1725 3075 A\001 79 4 1 -1 0 0 0 12 0.0000 2 135 795 1725 4875 condition\001 80 4 1 -1 0 0 0 12 0.0000 2 135 135 1725 5100 B\001 81 4 0 -1 0 0 0 12 0.0000 2 135 420 5025 3675 stack\001 82 4 0 -1 0 0 0 12 0.0000 2 180 750 5025 3225 acceptor/\001 83 4 0 -1 0 0 0 12 0.0000 2 180 750 5025 3450 signalled\001 84 4 1 -1 0 0 0 12 0.0000 2 135 795 1725 2850 condition\001 85 4 1 -1 0 0 0 12 0.0000 2 165 420 4425 1350 entry\001 86 4 1 -1 0 0 0 12 0.0000 2 135 495 4425 1575 queue\001 87 4 0 -1 0 0 0 12 0.0000 2 135 525 4725 2400 arrival\001 88 4 0 -1 0 0 0 12 0.0000 2 135 630 4725 2175 order of\001 89 4 1 -1 0 0 0 12 0.0000 2 135 525 3525 3675 shared\001 90 4 1 -1 0 0 0 12 0.0000 2 135 735 3525 3975 variables\001 91 4 0 4 50 -1 0 11 0.0000 2 120 135 4150 1875 Y\001 92 4 0 4 50 -1 0 11 0.0000 2 120 105 4150 2175 Z\001 93 4 0 4 50 -1 0 11 0.0000 2 120 135 4150 2475 X\001 94 4 0 4 50 -1 0 11 0.0000 2 120 165 4150 2775 W\001 95 4 0 -1 0 0 3 12 0.0000 2 150 540 5025 4275 urgent\001 96 4 1 0 50 -1 0 11 0.0000 2 165 
600 3150 3150 accepted\001 -
doc/papers/concurrency/figures/monitor.fig
r999c700 r6d43cc57 72 72 2 1 0 1 -1 -1 0 0 -1 0.000 0 0 -1 0 0 4 73 73 3600 1500 3600 2100 4200 2100 4200 900 74 2 1 0 1 -1 -1 0 0 -1 0.000 0 0 -1 0 0 475 2700 1500 2700 2100 3300 2100 3300 150076 74 2 1 0 1 -1 -1 0 0 -1 0.000 0 0 -1 0 0 9 77 75 3600 4200 4800 4200 4800 3300 5400 3300 5400 3000 4800 3000 … … 79 77 2 2 1 1 -1 -1 0 0 -1 4.000 0 0 0 0 0 5 80 78 4200 3450 4200 2550 2700 2550 2700 3450 4200 3450 79 2 1 0 1 -1 -1 0 0 -1 0.000 0 0 -1 0 0 4 80 2700 1500 2700 2100 3300 2100 3300 1500 81 81 4 1 -1 0 0 0 10 0.0000 2 75 75 4350 1995 a\001 82 82 4 1 -1 0 0 0 10 0.0000 2 75 75 4350 1695 c\001 … … 89 89 4 0 -1 0 0 0 12 0.0000 2 180 750 4950 2700 signalled\001 90 90 4 1 -1 0 0 0 12 0.0000 2 135 795 1650 2100 condition\001 91 4 1 -10 0 0 12 0.0000 2 135 135 2550 1425 X\00192 4 1 -10 0 0 12 0.0000 2 135 135 3450 1425 Y\00191 4 1 4 0 0 0 12 0.0000 2 135 135 2550 1425 X\001 92 4 1 4 0 0 0 12 0.0000 2 135 135 3450 1425 Y\001 93 93 4 1 -1 0 0 0 12 0.0000 2 165 420 4350 600 entry\001 94 94 4 1 -1 0 0 0 12 0.0000 2 135 495 4350 825 queue\001 … … 100 100 4 1 -1 0 0 0 10 0.0000 2 75 75 3450 1995 c\001 101 101 4 1 -1 0 0 0 12 0.0000 2 135 570 3000 1200 queues\001 102 4 0 -1 0 0 3 12 0.0000 2 150 540 4950 3525 urgent\001