Changeset 9b4343e for doc/proposals/concurrency
- Timestamp: Oct 13, 2016, 12:50:43 PM
- Branches: ADT, aaron-thesis, arm-eh, ast-experimental, cleanup-dtors, deferred_resn, demangler, enum, forall-pointer-decay, jacob/cs343-translation, jenkins-sandbox, master, new-ast, new-ast-unique-expr, new-env, no_list, persistent-indexer, pthread-emulation, qualifiedEnum, resolv-new, with_gc
- Children: a3eaa29
- Parents: a9aab60
- Location: doc/proposals/concurrency
- Files: 2 added, 3 edited
doc/proposals/concurrency/.gitignore
 concurrency.pdf
 concurrency.ps
+version.aux
doc/proposals/concurrency/Makefile
 # Run again to get index title into table of contents
 ${LaTeX} ${basename $@}.tex
+-./bump_ver.sh
 ${LaTeX} ${basename $@}.tex
…
 fig2dev -L pstex_t -p $@ $< > $@_t

+
 # Local Variables: #
 # compile-command: "make" #
doc/proposals/concurrency/concurrency.tex
 \usepackage[usenames]{color}
 \usepackage[pagewise]{lineno}
+\usepackage{fancyhdr}
 \renewcommand{\linenumberfont}{\scriptsize\sffamily}
 \input{common} % bespoke macros used in the document
…
 \setcounter{secnumdepth}{3} % number subsubsections
 \setcounter{tocdepth}{3} % subsubsections in table of contents
+\linenumbers % comment out to turn off line numbering
 \makeindex
+\pagestyle{fancy}
+\fancyhf{}
+\cfoot{\thepage}
+\rfoot{v\input{version}}

 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
…
 \maketitle
 \section{Introduction}
-This proposal provides a minimal core concurrency API that is both simple, efficient and can be reused to build "higher level" features. The simplest possible core is a thread and a lock but this low level approach is hard to master. An easier approach for users is be to support higherlevel construct as the basis of the concurrency in \CFA.
-Indeed, for higly productive parallel programming high-level approaches are much more popular\cite{HPP:Study}. Examples are task based parallelism, message passing, implicit threading.
+This proposal provides a minimal core concurrency API that is both simple and efficient, and that can be reused to build higher-level features. The simplest possible core is a thread and a lock, but this low-level approach is hard to master. An easier approach for users is to support higher-level constructs as the basis of concurrency in \CFA.
+Indeed, for highly productive parallel programming, high-level approaches are much more popular\cite{HPP:Study}. Examples are task-based parallelism, message passing and implicit threading.

-There are actually two problems that need to be solved in the design of the concurrency for a language. Which concurrency tools are available to the users and which parallelism tools are available. While these two concepts are often seen together, they are in fact distinct concepts that require different sorts of tools\cite{Buhr05a}. Concurrency tools need to handle mutual exclusion and synchronization while parallelism tools are more about performance, cost and ressource utilisation.
+There are actually two problems that need to be solved when designing concurrency for a language: which concurrency tools and which parallelism tools are made available to users. While these two concepts are often seen together, they are in fact distinct and require different sorts of tools\cite{Buhr05a}. Concurrency tools need to handle mutual exclusion and synchronization, while parallelism tools are more about performance, cost and resource utilization.

 \section{Concurrency}
-Several tool can be used to solve concurrency challenges. Since these challenges always appear with the use of mutable shared state, some languages and libraries simply disallow mutable shared state completely (Erlang\cite{Erlang}, Haskell\cite{Haskell}, Akka (Scala)\cit). In the paradigms, interaction between concurrent objects rely on message passing or other paradigms that often closely relate to networking concepts. However, in imperative or OO languages these approaches entail a clear distinction between concurrent and non concurrent paradigms. Which in turns mean that programmers need to learn two sets of designs patterns in order to be effective at their jobs. Approaches based on shared memory are more closely related to non-concurrent paradigms since they often rely on non-concurrent constructs like routine calls and objects. At a lower level these can be implemented as locks and atomic operations. However for productivity reasons it is desireable to have a higher-level construct to be the core concurrency paradigm\cite{HPP:Study}. This paper proposes Monitors\cit as the core concurrency construct.
+Several tools can be used to solve concurrency challenges. Since these challenges always appear with the use of mutable shared state, some languages and libraries simply disallow mutable shared state (Erlang\cite{Erlang}, Haskell\cite{Haskell}, Akka (Scala)\cit). In these paradigms, interaction among concurrent objects relies on message passing or on other paradigms that often closely relate to networking concepts. However, in imperative or OO languages, these approaches entail a clear distinction between concurrent and non-concurrent paradigms (i.e. message passing versus routine call), which in turn means that programmers need to learn two sets of design patterns in order to be effective. Approaches based on shared memory are more closely related to non-concurrent paradigms, since they often rely on non-concurrent constructs like routine calls and objects. At a lower level, these can be implemented as locks and atomic operations. However, for productivity reasons it is desirable to have a higher-level construct as the core concurrency paradigm\cite{HPP:Study}. This project proposes monitors\cit as the core concurrency construct.

 Finally, an approach worth mentioning because it is gaining popularity is transactional memory\cite{Dice10}. However, its performance and feature set are currently too restrictive to add such a paradigm to a language like C or \CC\cit, which is why it was rejected as the core paradigm for concurrency in \CFA.

 \section{Monitors}
-A monitor is a set of routines that ensure mutual exclusion when accessing shared state. This concept is generally associated with Object-Oriented Languages like Java\cite{Java} or \uC\cite{uC++book} but does not strictly require OOP semantics. The only requirements is to be able to declare a handle to a shared object and a set of routines that act on it :
+A monitor is a set of routines that ensure mutual exclusion when accessing shared state. This concept is generally associated with object-oriented languages like Java\cite{Java} or \uC\cite{uC++book} but does not strictly require OOP semantics. The only requirement is the ability to declare a handle to a shared object and a set of routines that act on it:
 \begin{lstlisting}
 typedef /*some monitor type*/ monitor;
…
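The elided lines are unchanged by this changeset. For context, a minimal shape for such a handle-plus-routines declaration might be the following (an illustrative sketch only; the routine names are hypothetical):

\begin{lstlisting}
typedef /*some monitor type*/ monitor;

int f(monitor & m);    // a routine that acts on the shared object

int main() {
	monitor m;     // a handle to the shared object
	f(m);          // mutual exclusion is ensured by the monitor
}
\end{lstlisting}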
 \subsection{Call semantics} \label{call}
-The above example of monitors already displays some of their intrinsic caracteristics. Indeed, it is necessary to use pass-by-reference over pass-by-value for monitor routines. This semantics is important because since at their core, monitors are simply implicit mutual exclusion objects (locks) and copying semantics of these is ill defined. Therefore, monitors are implicitly non-copyable.
-
-Another aspect to consider is when a monitor acquires its mutual exclusion. Indeed, a monitor may need to be passed to helper routines that do not acquire the monitor mutual exclusion on entry. Examples of this can be both generic helper routines (\code{swap}, \code{sort}, etc.) or specific helper routines like the following example :
+The above example of monitors already displays some of their intrinsic characteristics. Indeed, it is necessary to use pass-by-reference over pass-by-value for monitor routines. This semantics is important because, at their core, monitors are implicit mutual exclusion objects (locks), and these objects cannot be copied. Therefore, monitors are implicitly non-copyable.
+
+Another aspect to consider is when a monitor acquires its mutual exclusion. Indeed, a monitor may need to be passed through multiple helper routines that do not acquire the monitor's mutual exclusion on entry. Examples of this can be generic helper routines (\code{swap}, \code{sort}, etc.) or specific helper routines like the following example:

 \begin{lstlisting}
 mutex struct counter_t { /*...*/ };

-void ?{}(counter_t & mutex this);
+void ?{}(counter_t & nomutex this);
 int ++?(counter_t & mutex this);
-void ?{}(int * this, counter_t & mutex cnt);
-
-bool is_zero(counter_t & nomutex this) {
-	int val = this;
-	return val == 0;
-}
+void ?{}(Int * this, counter_t & mutex cnt);
 \end{lstlisting}
-*semantics of the declaration of \code{mutex struct counter_t} will be discussed in details in \ref{data}
+*semantics of the declaration of \code{mutex struct counter_t} are discussed in detail in section \ref{data}

-This is an example of a monitor used as safe(ish) counter for concurrency. This API, which offers the prefix increment operator and a conversion operator to \code{int}, guarantees that reading the value (by converting it to \code{int}) and incrementing it are mutually exclusive. Note that the \code{is_zero} routine uses the \code{nomutex} keyword. Indeed, since reading the value is already atomic, there is no point in maintaining the mutual exclusion once the value is copied locally (in the variable \code{val}).
+This example shows a monitor implementing an atomic counter. Here, the constructor uses the \code{nomutex} keyword to signify that it does not acquire the monitor's mutual exclusion when constructing: an object that is not yet constructed should never be shared, and therefore does not require mutual exclusion. The prefix increment operator uses \code{mutex} to protect the increment from race conditions. Finally, there is a conversion operator from \code{counter_t} to \code{Int}. This conversion may or may not require the \code{mutex} keyword, depending on whether or not reading an \code{Int} is an atomic operation.
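For illustration, a minimal use of this counter from two threads might look as follows (a sketch; how the two threads are created is deliberately left out):

\begin{lstlisting}
counter_t cnt;       // shared monitor instance

// executed concurrently by two threads :
void work() {
	for(int i = 0; i < 100000; i++) {
		++cnt;   // each increment acquires cnt's mutual exclusion
	}
}

// once both threads are done, the counter reads exactly 200000,
// because every increment and every read was mutually exclusive :
Int total = cnt;
\end{lstlisting}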
 Having both \code{mutex} and \code{nomutex} keywords could be argued to be redundant, based on the meaning of a routine having neither of these keywords. If there were a meaning to the routine \code{void foo(counter_t & this)}, then one could argue that it should default to the safest option: \code{mutex}. On the other hand, having the routine \code{void foo(counter_t & this)} mean \code{nomutex} is unsafe by default and may easily cause subtle errors. It can be argued that this is the more "normal" behavior, \code{nomutex} effectively stating explicitly that "this routine has nothing special". Another alternative is to make one of these keywords mandatory, which would provide the same semantics but without the ambiguity of supporting the routine \code{void foo(counter_t & this)}. Mandatory keywords would also have the added benefit of being more clearly self-documenting, but at the cost of extra typing. In the end, which solution should be picked is still up for debate. For the remainder of this proposal, the explicit approach is used for the sake of clarity.
…
 Regardless of which keyword is kept, it is important to establish when mutex/nomutex may be used depending on type parameters.
 \begin{lstlisting}
-int f01(monitor & mutex m);
-int f02(const monitor & mutex m);
-int f03(monitor * mutex m);
-int f04(monitor * mutex * m);
-int f05(monitor ** mutex m);
-int f06(monitor[10] mutex m);
-int f07(monitor[] mutex m);
-int f08(vector(monitor) & mutex m);
-int f09(list(monitor) & mutex m);
-int f10([monitor*, int] & mutex m);
-int f11(graph(monitor*) & mutex m);
+int f1(monitor & mutex m);
+int f2(const monitor & mutex m);
+int f3(monitor ** mutex m);
+int f4(monitor *[] mutex m);
+int f5(graph(monitor*) & mutex m);
 \end{lstlisting}

-For the first few routines it seems to make sense to support the mutex keyword for such small variations. The difference between pointers and reference (\code{f01} vs \code{f03}) or const and non-const (\code{f01} vs \code{f02}) has no significance to mutual exclusion. It may not always make sense to acquire the monitor when extra dereferences (\code{f04}, \code{f05}) are added but it is still technically feasible and the present of the explicit mutex keywork does make it very clear of the user's intentions. Passing in a known-sized array (\code{f06}) is also technically feasible but is close to the limits. Indeed, the size of the array is not actually enforced by the compiler and if replaced by a variable-sized array (\code{f07}) or a higher-level container (\code{f08}, \code{f09}) it becomes much more complex to properly acquire all the locks needed for such a complex critical section. This implicit acquisition also poses the question of what qualifies as a container. If the mutex keyword is supported on monitors stored inside of other types it can quickly become complex and unclear which monitor should be acquired and when. The extreme example of this is \code{f11} which takes a possibly cyclic graph of pointers to monitors. With such a routine signature the intuition of which monitors will be acquired on entry is lost\cite{Chicken}. Where to draw the lines is up for debate but it seems reasonnable to consider \code{f03} as accepted and \code{f06} as rejected.
+The problem is to identify which object(s) should be acquired. Furthermore, each object must be acquired only once. For simple routines like \code{f1} and \code{f2} it is easy to identify an exhaustive list of objects to acquire on entry. Adding indirections (\code{f3}) still allows the compiler and programmer to identify which object will be acquired. However, adding arrays (\code{f4}) makes it much harder. Array lengths are not necessarily known in C, and even then, making sure objects are acquired only once becomes non-trivial. This can be extended to absurd limits like \code{f5}, which uses a custom graph of monitors. To keep everyone as sane as possible, this project imposes the requirement that a routine may only acquire one monitor per parameter, and it must be the base type of the parameter, with potential qualifiers and references.
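As an illustration of why arrays are problematic, consider the following sketch (hypothetical code; under the stated rule, \code{f4} would simply be rejected):

\begin{lstlisting}
monitor m;
monitor * ms[2] = { &m, &m };   // the same monitor appears twice

f4(ms);   // a naive implementation would try to acquire m twice and
          // deadlock on itself; detecting duplicates at runtime is
          // possible but costly, hence the one-monitor-per-parameter rule
\end{lstlisting}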
 \subsection{Data semantics} \label{data}
…
 };

-void ?{}(counter_t & mutex this) {
+void ?{}(counter_t & nomutex this) {
 	this.cnt = 0;
 }
…
 Thread 1 & Thread 2 \\
 \begin{lstlisting}
-void main(counter_t & mutex c) {
+void f(counter_t & mutex c) {
 	for(;;) {
-		int count = c;
-		sout | count | endl;
+		sout | (int)c | endl;
 	}
 }
 \end{lstlisting} &\begin{lstlisting}
-void main(counter_t & mutex c) {
+void g(counter_t & mutex c) {
 	for(;;) {
 		++c;
…
 \end{lstlisting}

-This code acquires both locks before entering the critical section. In practice, writing multi-locking routines that can lead to deadlocks can be very tricky. Having language level support for such feature is therefore a significant asset for \CFA. However, as the this proposal shows, this does have significant repercussions relating to scheduling (see \ref{insched} and \ref{extsched}). The ability to acquire multiple monitors at the same time does incur a significant pitfall even without looking into scheduling. For example :
+This code acquires both locks before entering the critical section. In practice, writing multi-locking routines that avoid deadlocks can be very tricky. Having language-level support for such a feature is therefore a significant asset for \CFA. However, this does have significant repercussions relating to scheduling (see \ref{insched} and \ref{extsched}). The ability to acquire multiple monitors at the same time does incur a significant pitfall, even without looking into scheduling. For example:
 \begin{lstlisting}
 void foo(A & mutex a, B & mutex b) {
…
 \end{lstlisting}

+% TODO
 TODO: dig further into monitor order acquiring
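The ordering problem this TODO refers to can be sketched as follows (an illustration only; \code{bar} is a hypothetical second routine, not part of the changeset):

\begin{lstlisting}
void foo(A & mutex a, B & mutex b) { /*...*/ }
void bar(B & mutex b, A & mutex a) { /*...*/ }

// Thread 1 calls foo(a, b) : acquires a, then waits for b.
// Thread 2 calls bar(b, a) : acquires b, then waits for a.
// -> deadlock, unless the implementation acquires the monitors of a
//    given call in a single canonical order (e.g. sorted by address)
\end{lstlisting}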
…
 \end{lstlisting}

-Here routine \code{foo} waits on the \code{signal} from \code{bar} before making further progress, effectively ensuring a basic ordering. This can easily be extended to multi-monitor calls by offering the same guarantee.
+Here routine \code{foo} waits on the \code{signal} from \code{bar} before making further progress, effectively ensuring a basic ordering. This semantic can easily be extended to multi-monitor calls by offering the same guarantee.

 \begin{center}
…
 \end{center}

-A direct extension of the single monitor semantics would be to release all locks when waiting and transferring ownership of all locks when signalling. However, for the purpose of synchronization it may be usefull to only release some of the locks but keep others. On the technical side, partially releasing lock is feasible but from the user perspective a choice must be made for the syntax of this feature. It is possible to do without any extra syntax by relying on order of acquisition :
+A direct extension of the single-monitor semantics would be to release all locks when waiting and to transfer ownership of all locks when signalling. However, for the purpose of synchronization, it may be useful to release only some of the locks and keep others. On the technical side, partially releasing locks is feasible, but from the user perspective a choice must be made for the syntax of this feature. It is possible to do without any extra syntax by relying on the order of acquisition (note that the use of helper routines is irrelevant here; only routines that acquire mutual exclusion have an impact on internal scheduling):

 \begin{center}
 \begin{tabular}{|c|c|c|}
 \hline
 \begin{lstlisting}
+condition e;
+
 void foo(monitor & mutex a,
          monitor & mutex b) {
-	wait(a.e);
+	wait(e);
 }
…
 foo(a,b);
 \end{lstlisting} &\begin{lstlisting}
+condition e;
+
 void bar(monitor & mutex a,
          monitor & nomutex b) {
…
 void foo(monitor & mutex a,
          monitor & mutex b) {
-	wait(a.e);
+	wait(e);
 }

 bar(a, b);
 \end{lstlisting} &\begin{lstlisting}
+condition e;
+
 void bar(monitor & mutex a,
          monitor & nomutex b) {
…
 void baz(monitor & nomutex a,
          monitor & mutex b) {
-	wait(a.e);
+	wait(e);
 }
…

 This can be interpreted in two different ways:
 \begin{enumerate}
-	\item \code{wait} atomically releases the monitors \underline{theoretically} acquired by the inner-most mutex routine.
-	\item \code{wait} atomically releases the monitors \underline{actually} acquired by the inner-most mutex routine.
+	\item \code{wait} atomically releases the monitors acquired by the inner-most mutex routine, \underline{ignoring} nested calls.
+	\item \code{wait} atomically releases the monitors acquired by the inner-most mutex routine, \underline{considering} nested calls.
 \end{enumerate}
 While the difference between these two is subtle, it has a significant impact. In the first case, it means that the calls to \code{foo} would behave the same in Contexts 1 and 2. This semantic would also mean that the call to \code{wait} in routine \code{baz} would only release \code{monitor b}. While this may seem intuitive with these examples, it does have one significant implication: it creates a strong distinction between acquiring multiple monitors in sequence and acquiring the same monitors simultaneously.
…
 \end{center}

-This is not intuitive because even if both methods will display the same monitors state both inside and outside the critical section respectively, the behavior is different. Furthermore, the actual acquiring order will be exaclty the same since acquiring a monitor from inside its mutual exclusion is a no-op. This means that even if the data and the actual control flow are the same using both methods, the behavior of the \code{wait} will be different. The alternative is option 2, that is releasing \underline{actually} acquired monitors. This solves the issue of having the two acquiring method differ at the cost of making routine \code{foo} behave differently depending on from which context it is called (Context 1 or 2). Indeed in Context 2, routine \code{foo} will actually behave like routine \code{baz} rather than having the same behavior than in context 1. The fact that both implicit approaches can be unintuitive depending on the perspective may be a sign that the explicit approach is superior.
+This is not intuitive, because even though both methods display the same monitor state inside and outside the critical section respectively, the behavior is different. Furthermore, the actual acquiring order is exactly the same, since acquiring a monitor from inside its mutual exclusion is a no-op. This means that even if the data and the actual control flow are the same using both methods, the behavior of \code{wait} differs. The alternative is option 2, that is, releasing the acquired monitors while \underline{considering} nesting. This solves the issue of having the two acquiring methods differ, at the cost of making routine \code{foo} behave differently depending on the context from which it is called (Context 1 or 2). Indeed, in Context 2, routine \code{foo} actually behaves like routine \code{baz} rather than behaving as in Context 1. The fact that both implicit approaches can be unintuitive depending on the perspective may be a sign that the explicit approach is superior.
 \\
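To make the two interpretations concrete, the following sketch (hypothetical code, reusing the signatures from Context 3 above) annotates what each option releases:

\begin{lstlisting}
void bar(monitor & mutex a, monitor & nomutex b) {
	baz(a, b);   // nested call; a is already held here
}

void baz(monitor & nomutex a, monitor & mutex b) {
	wait(e);
	// option 1 (ignoring nesting)    : releases only b
	// option 2 (considering nesting) : releases b, and also a
	//                                  when reached through bar(a, b)
}
\end{lstlisting}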
…
 \\

-All these cases have there pros and cons. Case 1 is more distinct because it means programmers need to be carefull about where the condition was initialized as well as where it is used. On the other hand, it is very clear and explicit which monitor will be released and which monitor will stay acquired. This is similar to Case 2, which releases only the monitors explictly listed. However, in Case 2, calling the \code{wait} routine instead of the \code{waitRelease} routine will release all the acquired monitor. The Case 3 is an improvement on that since it releases all the monitors except those specified. The result is that the \code{wait} routine can be written as follows :
+All these cases have their pros and cons. Case 1 is more distinct because it means programmers need to be careful about where the condition is initialized as well as where it is used. On the other hand, it is very clear and explicit which monitor is released and which monitor stays acquired. This is similar to Case 2, which releases only the monitors explicitly listed. However, in Case 2, calling the \code{wait} routine instead of the \code{waitRelease} routine releases all the acquired monitors. Case 3 is an improvement on that, since it releases all the monitors except those specified. The result is that the \code{wait} routine can be written as follows:
 \begin{lstlisting}
 void wait(condition & cond) {
…

 This alternative offers nice and consistent behavior between \code{wait} and \code{waitHold}. However, one large pitfall is that mutual exclusion can now be violated by calls to library code. Indeed, even if the following example seems benign, there is one significant problem:
 \begin{lstlisting}
-extern void doStuff();
+monitor global;
+
+extern void doStuff();	//uses global

 void foo(monitor & mutex m) {
…
 	//...
 }
-\end{lstlisting}
+
+foo(global);
+\end{lstlisting}
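The danger lies in what \code{doStuff} may do under Cases 2 or 3; a sketch of the failure (a hypothetical body for \code{doStuff}, illustration only):

\begin{lstlisting}
void doStuff() {
	// under Case 3, an empty keep-list releases every monitor
	// currently held, including the caller's m, silently breaking
	// foo's critical section :
	waitHold(someCondition, []);
	//...
}
\end{lstlisting}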
-Indeed, if Case 2 or 3 are chosen it any code can violate the mutual exclusion of calling code by issuing calls to \code{wait} or \code{waitHold} in a nested monitor context. Case 2 can be salvaged by removing the \code{wait} routine from the API but Case 3 cannot prevent users from calling \code{waitHold(someCondition, [])}. For this reason the syntax proposed in Case 3 is rejected. Note that syntaxes proposed in case 1 and 2 are not exclusive. Indeed, by supporting two types of condition as follows both cases can be supported :
+Indeed, if Case 2 or 3 is chosen, any code can violate the mutual exclusion of the calling code by issuing calls to \code{wait} or \code{waitHold} in a nested monitor context. Case 2 can be salvaged by removing the \code{wait} routine from the API, but Case 3 cannot prevent users from calling \code{waitHold(someCondition, [])}. For this reason, the syntax proposed in Case 3 is rejected. Note that the syntaxes proposed in Cases 1 and 2 are not exclusive. Indeed, by supporting two types of condition, both cases can be supported:
 \begin{lstlisting}
 struct condition { /*...*/ };
…
 \end{lstlisting}

-Regardless of the option chosen for wait semantics, signal must be symmetrical. In all cases, signal only needs a single parameter, the condition variable that needs to be signalled. But \code{signal} needs to be called from the same monitor(s) than the call to \code{wait}. Otherwise, mutual exclusion cannot be properly transferred back to the waiting monitor.
+Regardless of the option chosen for wait semantics, signal must be symmetrical. In all cases, signal only needs a single parameter: the condition variable that needs to be signalled. But \code{signal} needs to be called from the same monitor(s) as the call to \code{wait}. Otherwise, mutual exclusion cannot be properly transferred back to the waiting monitor.

 Finally, an additional semantic which can be very useful is the \code{signalBlock} routine. This routine behaves like signal for all of the semantics discussed above, but with the subtlety that mutual exclusion is transferred to the waiting task immediately rather than waiting for the end of the critical section.
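For illustration, the difference between \code{signal} and \code{signalBlock} can be sketched as follows (hypothetical routines and condition):

\begin{lstlisting}
void pass(monitor & mutex m, condition & c) {
	signal(c);        // waiter is marked ready, but only runs after
	                  // this critical section ends
	//... still holding m
}

void passNow(monitor & mutex m, condition & c) {
	signalBlock(c);   // mutual exclusion is transferred immediately :
	                  // the waiter runs now, and this task blocks
	                  // until it can reacquire m
	//...
}
\end{lstlisting}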
…
 Examples of languages that support this model are Java\cite{Java}, Haskell\cite{Haskell} and \uC\cite{uC++book}.
-
 \subsection{Jobs and thread pools}
 The opposite approach is to base parallelism on \glspl{job}. Indeed, \glspl{job} offer limited flexibility but at the benefit of a simpler user interface. In \gls{job}-based systems, users express parallelism as units of work and a dependency graph (either explicit or implicit) that ties them together. This means users need not worry about concurrency, but it significantly limits the interactions that can occur between different jobs. Indeed, any \gls{job} that blocks also blocks the underlying \gls{kthread}, which effectively means that CPU utilization, and therefore throughput, suffers noticeably. The gold standard of this implementation is Intel's TBB library\cite{TBB}.
…
 \end{lstlisting}

-Obviously, for this thread implementation to be usefull it must run some user code. Several other threading interfaces use some function pointer representation as the interface of threads (for example : \Csharp \cite{Csharp} and Scala \cite{Scala}). However, we consider that statically tying a \code{main} routine to a thread superseeds this approach. Since the \code{main} routine is definetely a special function in \CFA, we can reuse the existing syntax for declaring routines with unordinary name, i.e. operator overloading. As such the \code{main} routine of a thread can be defined as such :
+Obviously, for this thread implementation to be useful, it must run some user code. Several other threading interfaces use a function-pointer representation as the interface of threads (for example, \Csharp \cite{Csharp} and Scala \cite{Scala}). However, we consider that statically tying a \code{main} routine to a thread supersedes this approach. Since the \code{main} routine is definitely a special routine in \CFA, we can reuse the existing syntax for declaring routines with unconventional names, i.e. operator overloading. As such, the \code{main} routine of a thread can be defined as follows:
 \begin{lstlisting}
 thread struct foo {};
…
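The diff excerpt ends inside this listing. A plausible continuation of the example is sketched below; the exact overload syntax and the thread-starts-at-construction semantics (as in \uC) are assumptions, not part of the excerpt:

\begin{lstlisting}
void ?main(foo & this) {
	sout | "Hello World!" | endl;
}

// assuming, as in \uC, that the thread starts when the object is created :
foo f;
\end{lstlisting}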