Changeset 4ee36bf0
- Timestamp: Nov 8, 2017, 2:50:35 PM (7 years ago)
- Branches: ADT, arm-eh, ast-experimental, cleanup-dtors, enum, forall-pointer-decay, jacob/cs343-translation, jenkins-sandbox, master, new-ast, new-ast-unique-expr, pthread-emulation, qualifiedEnum
- Children: 049ead9
- Parents: 136ccd7 (diff), e35f30a (diff)

Note: this is a merge changeset; the changes displayed below correspond to the merge itself. Use the (diff) links above to see all the changes relative to each parent.

Files:
- 2 added
- 1 deleted
- 40 edited
doc/proposals/concurrency/.gitignore
  r136ccd7  r4ee36bf0
  16   16   build/*.out
  17   17   build/*.ps
       18   build/*.pstex
       19   build/*.pstex_t
  18   20   build/*.tex
  19   21   build/*.toc
doc/proposals/concurrency/Makefile
  r136ccd7  r4ee36bf0
  13   13   annex/glossary \
  14   14   text/intro \
       15   text/basics \
  15   16   text/cforall \
  16        text/basics \
  17   17   text/concurrency \
  18   18   text/internals \
  19   19   text/parallelism \
       20   text/results \
  20   21   text/together \
  21   22   text/future \
  …    …
  29   30   }}
  30   31
  31        PICTURES = ${addsuffix .pstex, \
  32        }
       32   PICTURES = ${addprefix build/, ${addsuffix .pstex, \
       33      system \
       34   }}
  33   35
  34   36   PROGRAMS = ${addsuffix .tex, \
  …    …
  67   69   build/*.out \
  68   70   build/*.ps \
       71   build/*.pstex \
  69   72   build/*.pstex_t \
  70   73   build/*.tex \
doc/proposals/concurrency/style/cfa-format.tex
  r136ccd7  r4ee36bf0
  254  254  }{}
  255  255
       256  \lstnewenvironment{gocode}[1][]{
       257    \lstset{
       258      language = Golang,
       259      style=defaultStyle,
       260      #1
       261    }
       262  }{}
       263
  256  264  \newcommand{\zero}{\lstinline{zero_t}\xspace}
  257  265  \newcommand{\one}{\lstinline{one_t}\xspace}
doc/proposals/concurrency/text/basics.tex
r136ccd7 r4ee36bf0 9 9 At its core, concurrency is based on having multiple call-stacks and scheduling among threads of execution executing on these stacks. Concurrency without parallelism only requires having multiple call stacks (or contexts) for a single thread of execution. 10 10 11 Indeed, while execution with a single thread and multiple stacks where the thread is self-scheduling deterministically across the stacks is called coroutining, execution with a single and multiple stacks but where the thread is scheduled by an oracle (non-deterministic from the thread perspective) across the stacks is called concurrency. 12 13 Therefore, a minimal concurrency system can be achieved by creating coroutines, which instead of context switching among each other, always ask an oracle where to context switch next. While coroutines can execute on the caller's stack-frame, stackfull coroutines allow full generality and are sufficient as the basis for concurrency. The aforementioned oracle is a scheduler and the whole system now follows a cooperative threading-model \cit. The oracle/scheduler can either be a stackless or stackfull entity and correspondingly require one or two context switches to run a different coroutine. In any case, a subset of concurrency related challenges start to appear. For the complete set of concurrency challenges to occur, the only feature missing is preemption. Indeed, concurrency challenges appear with non-determinism. Using mutual-exclusion or synchronisation are ways of limiting the lack of determinism in a system. A scheduler introduces order of execution uncertainty, while preemption introduces uncertainty about where context-switches occur. Now it is important to understand that uncertainty is not undesireable; uncertainty can often be used by systems to significantly increase performance and is often the basis of giving a user the illusion that tasks are running in parallel. Optimal performance in concurrent applications is often obtained by having as much non-determinism as correctness allows\cit. 11 Execution with a single thread and multiple stacks where the thread is self-scheduling deterministically across the stacks is called coroutining. Execution with a single and multiple stacks but where the thread is scheduled by an oracle (non-deterministic from the thread perspective) across the stacks is called concurrency. 12 13 Therefore, a minimal concurrency system can be achieved by creating coroutines, which instead of context switching among each other, always ask an oracle where to context switch next. While coroutines can execute on the caller's stack-frame, stackfull coroutines allow full generality and are sufficient as the basis for concurrency. The aforementioned oracle is a scheduler and the whole system now follows a cooperative threading-model \cit. The oracle/scheduler can either be a stackless or stackfull entity and correspondingly require one or two context switches to run a different coroutine. In any case, a subset of concurrency related challenges start to appear. For the complete set of concurrency challenges to occur, the only feature missing is preemption. 14 15 A scheduler introduces order of execution uncertainty, while preemption introduces uncertainty about where context-switches occur. Mutual-exclusion and synchronisation are ways of limiting non-determinism in a concurrent system. 
Now it is important to understand that uncertainty is desireable; uncertainty can be used by runtime systems to significantly increase performance and is often the basis of giving a user the illusion that tasks are running in parallel. Optimal performance in concurrent applications is often obtained by having as much non-determinism as correctness allows\cit. 14 16 15 17 \section{\protect\CFA 's Thread Building Blocks} 16 One of the important features that is missing in C is threading. On modern architectures, a lack of threading is unacceptable\cite{Sutter05, Sutter05b}, and therefore modern programming languages must have the proper tools to allow users to write performant concurrent and/or parallel programs. As an extension of C, \CFA needs to express these concepts in a way that is as natural as possible to programmers familiar with imperative languages. And being a system-level language means programmers expect to choose precisely which features they need and which cost they are willing to pay.18 One of the important features that is missing in C is threading. On modern architectures, a lack of threading is unacceptable\cite{Sutter05, Sutter05b}, and therefore modern programming languages must have the proper tools to allow users to write performant concurrent programs to take advantage of parallelism. As an extension of C, \CFA needs to express these concepts in a way that is as natural as possible to programmers familiar with imperative languages. And being a system-level language means programmers expect to choose precisely which features they need and which cost they are willing to pay. 17 19 18 20 \section{Coroutines: A stepping stone}\label{coroutine} 19 While the main focus of this proposal is concurrency and parallelism, it is important to address coroutines, which are actually a significant building block of a concurrency system. Coroutines need to deal with context-switchs and other context-management operations. Therefore, this proposal includes coroutines both as an intermediate step for the implementation of threads, and a first class feature of \CFA. Furthermore, many design challenges of threads are at least partially present in designing coroutines, which makes the design effort that much more relevant. The core \acrshort{api} of coroutines revolve around two features: independent call stacks and \code{suspend}/\code{resume}. 20 21 A good example of a problem made easier with coroutines is genereting the fibonacci sequence. This problem comes with the challenge of decoupling how a sequence is generated and how it is used. Figure \ref{fig:fibonacci-c} shows conventional approaches to writing generators in C. All three of these approach suffer from strong coupling. The left and center approaches require that the generator have knowledge of how the sequence will be used, while the rightmost approach requires to user to hold internal state between calls on behalf of th sequence generator and makes it much harder to handle corner cases like the Fibonacci seed. 21 While the main focus of this proposal is concurrency and parallelism, it is important to address coroutines, which are actually a significant building block of a concurrency system. Coroutines need to deal with context-switches and other context-management operations. Therefore, this proposal includes coroutines both as an intermediate step for the implementation of threads, and a first class feature of \CFA. 
Furthermore, many design challenges of threads are at least partially present in designing coroutines, which makes the design effort that much more relevant. The core \acrshort{api} of coroutines revolve around two features: independent call stacks and \code{suspend}/\code{resume}. 22 22 23 \begin{figure} 23 \label{fig:fibonacci-c}24 24 \begin{center} 25 25 \begin{tabular}{c @{\hskip 0.025in}|@{\hskip 0.025in} c @{\hskip 0.025in}|@{\hskip 0.025in} c} … … 45 45 } 46 46 } 47 48 int main() { 49 void print_fib(int n) { 50 printf("%d\n", n); 51 } 52 53 fibonacci_func( 54 10, print_fib 55 ); 56 57 58 59 } 47 60 \end{ccode}&\begin{ccode}[tabsize=2] 48 61 //Using output array … … 62 75 f2 = next; 63 76 } 64 *array = next; 65 array++; 66 } 77 array[i] = next; 78 } 79 } 80 81 82 int main() { 83 int a[10]; 84 85 fibonacci_func( 86 10, a 87 ); 88 89 for(int i=0;i<10;i++){ 90 printf("%d\n", a[i]); 91 } 92 67 93 } 68 94 \end{ccode}&\begin{ccode}[tabsize=2] … … 70 96 typedef struct { 71 97 int f1, f2; 72 } iterator_t;98 } Iterator_t; 73 99 74 100 int fibonacci_state( 75 iterator_t * it101 Iterator_t * it 76 102 ) { 77 103 int f; 78 104 f = it->f1 + it->f2; 79 105 it->f2 = it->f1; 80 it->f1 = f;106 it->f1 = max(f,1); 81 107 return f; 82 108 } … … 87 113 88 114 115 116 int main() { 117 Iterator_t it={0,0}; 118 119 for(int i=0;i<10;i++){ 120 printf("%d\n", 121 fibonacci_state( 122 &it 123 ); 124 ); 125 } 126 127 } 89 128 \end{ccode} 90 129 \end{tabular} 91 130 \end{center} 92 131 \caption{Different implementations of a fibonacci sequence generator in C.} 132 \label{lst:fibonacci-c} 93 133 \end{figure} 94 134 95 96 Figure \ref{fig:fibonacci-cfa} is an example of a solution to the fibonnaci problem using \CFA coroutines, using the coroutine stack to hold sufficient state for the generation. This solution has the advantage of having very strong decoupling between how the sequence is generated and how it is used. Indeed, this version is a easy to use as the \code{fibonacci_state} solution, while the imlpementation is very similar to the \code{fibonacci_func} example. 135 A good example of a problem made easier with coroutines is generators, like the fibonacci sequence. This problem comes with the challenge of decoupling how a sequence is generated and how it is used. Figure \ref{lst:fibonacci-c} shows conventional approaches to writing generators in C. All three of these approach suffer from strong coupling. The left and center approaches require that the generator have knowledge of how the sequence is used, while the rightmost approach requires holding internal state between calls on behalf of the generator and makes it much harder to handle corner cases like the Fibonacci seed. 136 137 Figure \ref{lst:fibonacci-cfa} is an example of a solution to the fibonnaci problem using \CFA coroutines, where the coroutine stack holds sufficient state for the generation. This solution has the advantage of having very strong decoupling between how the sequence is generated and how it is used. Indeed, this version is as easy to use as the \code{fibonacci_state} solution, while the imlpementation is very similar to the \code{fibonacci_func} example. 
97 138 98 139 \begin{figure} 99 \label{fig:fibonacci-cfa}100 140 \begin{cfacode} 101 141 coroutine Fibonacci { … … 108 148 109 149 //main automacically called on first resume 110 void main(Fibonacci & this) {150 void main(Fibonacci & this) with (this) { 111 151 int fn1, fn2; //retained between resumes 112 this.fn= 0;113 fn1 = this.fn;152 fn = 0; 153 fn1 = fn; 114 154 suspend(this); //return to last resume 115 155 116 this.fn= 1;156 fn = 1; 117 157 fn2 = fn1; 118 fn1 = this.fn;158 fn1 = fn; 119 159 suspend(this); //return to last resume 120 160 121 161 for ( ;; ) { 122 this.fn= fn1 + fn2;162 fn = fn1 + fn2; 123 163 fn2 = fn1; 124 fn1 = this.fn;164 fn1 = fn; 125 165 suspend(this); //return to last resume 126 166 } … … 140 180 \end{cfacode} 141 181 \caption{Implementation of fibonacci using coroutines} 182 \label{lst:fibonacci-cfa} 142 183 \end{figure} 143 184 144 \subsection{Construction} 145 One important design challenge for coroutines and threads (shown in section \ref{threads}) is that the runtime system needs to run code after the user-constructor runs to connect the object into the system. In the case of coroutines, this challenge is simpler since there is no non-determinism from preemption or scheduling. However, the underlying challenge remains the same for coroutines and threads. 146 147 The runtime system needs to create the coroutine's stack and more importantly prepare it for the first resumption. The timing of the creation is non-trivial since users both expect to have fully constructed objects once execution enters the coroutine main and to be able to resume the coroutine from the constructor. As regular objects, constructors can leak coroutines before they are ready. There are several solutions to this problem but the chosen options effectively forces the design of the coroutine. 148 149 Furthermore, \CFA faces an extra challenge as polymorphic routines create invisible thunks when casted to non-polymorphic routines and these thunks have function scope. For example, the following code, while looking benign, can run into undefined behaviour because of thunks: 150 151 \begin{cfacode} 152 //async: Runs function asynchronously on another thread 153 forall(otype T) 154 extern void async(void (*func)(T*), T* obj); 155 156 forall(otype T) 157 void noop(T *) {} 158 159 void bar() { 160 int a; 161 async(noop, &a); 162 } 163 \end{cfacode} 164 165 The generated C code\footnote{Code trimmed down for brevity} creates a local thunk to hold type information: 166 167 \begin{ccode} 168 extern void async(/* omitted */, void (*func)(void *), void *obj); 169 170 void noop(/* omitted */, void *obj){} 171 172 void bar(){ 173 int a; 174 void _thunk0(int *_p0){ 175 /* omitted */ 176 noop(/* omitted */, _p0); 177 } 178 /* omitted */ 179 async(/* omitted */, ((void (*)(void *))(&_thunk0)), (&a)); 180 } 181 \end{ccode} 182 The problem in this example is a storage management issue, the function pointer \code{_thunk0} is only valid until the end of the block. This extra challenge limits which solutions are viable because storing the function pointer for too long causes undefined behavior; i.e. the stack based thunk being destroyed before it was used. This challenge is an extension of challenges that come with second-class routines. Indeed, GCC nested routines also have the limitation that the routines cannot be passed outside of the scope of the functions these were declared in. The case of coroutines and threads is simply an extension of this problem to multiple call-stacks. 
183 184 \subsection{Alternative: Composition} 185 One solution to this challenge is to use composition/containement, where uses add insert a coroutine field which contains the necessary information to manage the coroutine. 186 187 \begin{cfacode} 188 struct Fibonacci { 189 int fn; //used for communication 190 coroutine c; //composition 191 }; 192 193 void ?{}(Fibonacci & this) { 194 this.fn = 0; 195 (this.c){}; //Call constructor to initialize coroutine 196 } 197 \end{cfacode} 198 There are two downsides to this approach. The first, which is relatively minor, made aware of the main routine pointer. This information must either be store in the coroutine runtime data or in its static type structure. When using composition, all coroutine handles have the same static type structure which means the pointer to the main needs to be part of the runtime data. This requirement means the coroutine data must be made larger to store a value that is actually a compile time constant (address of the main routine). The second problem, which is both subtle and significant, is that now users can get the initialisation order of coroutines wrong. Indeed, every field of a \CFA struct is constructed but in declaration order, unless users explicitly write otherwise. This semantics means that users who forget to initialize the coroutine handle may resume the coroutine with an uninitilized object. For coroutines, this is unlikely to be a problem, for threads however, this is a significant problem. Figure \ref{fig:fmt-line} shows the \code{Format} coroutine which rearranges text in order to group characters into blocks of fixed size. This is a good example where the control flow is made much simpler from being able to resume the coroutine from the constructor and highlights the idea that interesting control flow can occor in the constructor. 185 Figure \ref{lst:fmt-line} shows the \code{Format} coroutine which rearranges text in order to group characters into blocks of fixed size. The example takes advantage of resuming coroutines in the constructor to simplify the code and highlights the idea that interesting control flow can occur in the constructor. 186 199 187 \begin{figure} 200 \label{fig:fmt-line}201 188 \begin{cfacode}[tabsize=3] 202 189 //format characters into blocks of 4 and groups of 5 blocks per line … … 244 231 \end{cfacode} 245 232 \caption{Formatting text into lines of 5 blocks of 4 characters.} 233 \label{lst:fmt-line} 246 234 \end{figure} 247 235 236 \subsection{Construction} 237 One important design challenge for coroutines and threads (shown in section \ref{threads}) is that the runtime system needs to run code after the user-constructor runs to connect the fully constructed object into the system. In the case of coroutines, this challenge is simpler since there is no non-determinism from preemption or scheduling. However, the underlying challenge remains the same for coroutines and threads. 238 239 The runtime system needs to create the coroutine's stack and more importantly prepare it for the first resumption. The timing of the creation is non-trivial since users both expect to have fully constructed objects once execution enters the coroutine main and to be able to resume the coroutine from the constructor. As regular objects, constructors can leak coroutines before they are ready. There are several solutions to this problem but the chosen options effectively forces the design of the coroutine. 
240 241 Furthermore, \CFA faces an extra challenge as polymorphic routines create invisible thunks when casted to non-polymorphic routines and these thunks have function scope. For example, the following code, while looking benign, can run into undefined behaviour because of thunks: 242 243 \begin{cfacode} 244 //async: Runs function asynchronously on another thread 245 forall(otype T) 246 extern void async(void (*func)(T*), T* obj); 247 248 forall(otype T) 249 void noop(T*) {} 250 251 void bar() { 252 int a; 253 async(noop, &a); //start thread running noop with argument a 254 } 255 \end{cfacode} 256 257 The generated C code\footnote{Code trimmed down for brevity} creates a local thunk to hold type information: 258 259 \begin{ccode} 260 extern void async(/* omitted */, void (*func)(void *), void *obj); 261 262 void noop(/* omitted */, void *obj){} 263 264 void bar(){ 265 int a; 266 void _thunk0(int *_p0){ 267 /* omitted */ 268 noop(/* omitted */, _p0); 269 } 270 /* omitted */ 271 async(/* omitted */, ((void (*)(void *))(&_thunk0)), (&a)); 272 } 273 \end{ccode} 274 The problem in this example is a storage management issue, the function pointer \code{_thunk0} is only valid until the end of the block, which limits the viable solutions because storing the function pointer for too long causes undefined behavior; i.e., the stack-based thunk being destroyed before it can be used. This challenge is an extension of challenges that come with second-class routines. Indeed, GCC nested routines also have the limitation that nested routine cannot be passed outside of the declaration scope. The case of coroutines and threads is simply an extension of this problem to multiple call-stacks. 275 276 \subsection{Alternative: Composition} 277 One solution to this challenge is to use composition/containement, where coroutine fields are added to manage the coroutine. 278 279 \begin{cfacode} 280 struct Fibonacci { 281 int fn; //used for communication 282 coroutine c; //composition 283 }; 284 285 void FibMain(void *) { 286 //... 287 } 288 289 void ?{}(Fibonacci & this) { 290 this.fn = 0; 291 //Call constructor to initialize coroutine 292 (this.c){myMain}; 293 } 294 \end{cfacode} 295 The downside of this approach is that users need to correctly construct the coroutine handle before using it. Like any other objects, doing so the users carefully choose construction order to prevent usage of unconstructed objects. However, in the case of coroutines, users must also pass to the coroutine information about the coroutine main, like in the previous example. This opens the door for user errors and requires extra runtime storage to pass at runtime information that can be known statically. 248 296 249 297 \subsection{Alternative: Reserved keyword} … … 255 303 }; 256 304 \end{cfacode} 257 Th is mean the compiler can solve problems by injecting code where needed. The downside of this approach is that it makes coroutine a special case in the language. Users who would want to extend coroutines or build their own for various reasons can only do so in ways offered by the language. Furthermore, implementing coroutines without language supports also displays the power of the programming language used. While this is ultimately the option used for idiomatic \CFA code, coroutines and threads can bothbe constructed by users without using the language support. The reserved keywords are only present to improve ease of use for the common cases.305 The \code{coroutine} keyword means the compiler can find and inject code where needed. 
The downside of this approach is that it makes coroutine a special case in the language. Users wantint to extend coroutines or build their own for various reasons can only do so in ways offered by the language. Furthermore, implementing coroutines without language supports also displays the power of the programming language used. While this is ultimately the option used for idiomatic \CFA code, coroutines and threads can still be constructed by users without using the language support. The reserved keywords are only present to improve ease of use for the common cases. 258 306 259 307 \subsection{Alternative: Lamda Objects} … … 268 316 Often, the canonical threading paradigm in languages is based on function pointers, pthread being one of the most well known examples. The main problem of this approach is that the thread usage is limited to a generic handle that must otherwise be wrapped in a custom type. Since the custom type is simple to write in \CFA and solves several issues, added support for routine/lambda based coroutines adds very little. 269 317 270 A variation of this would be to use a nsimple function pointer in the same way pthread does for threads :318 A variation of this would be to use a simple function pointer in the same way pthread does for threads : 271 319 \begin{cfacode} 272 320 void foo( coroutine_t cid, void * arg ) { … … 281 329 } 282 330 \end{cfacode} 283 This semantic is more common for thread interfaces than coroutines but would workequally well. As discussed in section \ref{threads}, this approach is superseeded by static approaches in terms of expressivity.331 This semantics is more common for thread interfaces than coroutines works equally well. As discussed in section \ref{threads}, this approach is superseeded by static approaches in terms of expressivity. 284 332 285 333 \subsection{Alternative: Trait-based coroutines} … … 350 398 \end{cfacode} 351 399 352 In this example, threads of type \code{foo} start execution in the \code{void main(foo &)} routine, which prints \code{"Hello World!"}. While this thesis encourages this approach to enforce strongly-typed programming, users may prefer to use the routine-based thread semantics for the sake of simplicity. With the se semantics it is trivial to write a thread type that takes a function pointer as a parameter and executes it on its stack asynchronously400 In this example, threads of type \code{foo} start execution in the \code{void main(foo &)} routine, which prints \code{"Hello World!"}. While this thesis encourages this approach to enforce strongly-typed programming, users may prefer to use the routine-based thread semantics for the sake of simplicity. With the static semantics it is trivial to write a thread type that takes a function pointer as a parameter and executes it on its stack asynchronously. 353 401 \begin{cfacode} 354 402 typedef void (*voidFunc)(int); … … 361 409 void ?{}(FuncRunner & this, voidFunc inFunc, int arg) { 362 410 this.func = inFunc; 411 this.arg = arg; 363 412 } 364 413 365 414 void main(FuncRunner & this) { 415 //thread starts here and runs the function 366 416 this.func( this.arg ); 367 417 } 368 418 \end{cfacode} 369 419 370 A n consequence of the stronglytyped approach to main is that memory layout of parameters and return values to/from a thread are now explicitly specified in the \acrshort{api}.420 A consequence of the strongly-typed approach to main is that memory layout of parameters and return values to/from a thread are now explicitly specified in the \acrshort{api}. 
371 421 372 422 Of course for threads to be useful, it must be possible to start and stop threads and wait for them to complete execution. While using an \acrshort{api} such as \code{fork} and \code{join} is relatively common in the literature, such an interface is unnecessary. Indeed, the simplest approach is to use \acrshort{raii} principles and have threads \code{fork} after the constructor has completed and \code{join} before the destructor runs. … … 389 439 \end{cfacode} 390 440 391 This semantic has several advantages over explicit semantics: a thread is always started and stopped exaclty once and users cannot make any progamming errors and it naturally scales to multiple threads meaning basic synchronisation is very simple441 This semantic has several advantages over explicit semantics: a thread is always started and stopped exaclty once, users cannot make any progamming errors, and it naturally scales to multiple threads meaning basic synchronisation is very simple. 392 442 393 443 \begin{cfacode} … … 411 461 \end{cfacode} 412 462 413 However, one of the drawbacks of this approach is that threads now always form a lattice, that is they are always destroyed in opposite order of construction because of block structure. This restriction is relaxed by using dynamic allocation, so threads can outlive the scope in which they are created, much like dynamically allocating memory lets objects outlive the scope in which they are created463 However, one of the drawbacks of this approach is that threads now always form a lattice, that is they are always destroyed in the opposite order of construction because of block structure. This restriction is relaxed by using dynamic allocation, so threads can outlive the scope in which they are created, much like dynamically allocating memory lets objects outlive the scope in which they are created. 414 464 415 465 \begin{cfacode} -
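The basics.tex changes above contrast \CFA's strongly-typed, RAII-style threads with the routine-based threading of libraries such as pthreads. As a point of comparison only (not part of this changeset), here is a minimal C sketch of that pthread-style approach, assuming a POSIX platform; the func_runner wrapper and the print routine are made up for illustration:

// Minimal sketch of pthread-style, function-pointer threading: the thread
// entry point is a generic void* routine and start/join are explicit calls,
// unlike the RAII start/stop semantics described in the chapter.
#include <pthread.h>
#include <stdio.h>

typedef void (*voidFunc)(int);

struct func_runner {
	voidFunc func;  // routine to run on the new thread
	int arg;        // argument passed to it
};

static void print_int(int x) { printf("%d\n", x); }

static void *runner_main(void *p) {
	struct func_runner *r = p;
	r->func(r->arg);        // thread starts here and runs the function
	return NULL;
}

int main(void) {
	struct func_runner r = { print_int, 42 };
	pthread_t t;
	pthread_create(&t, NULL, runner_main, &r);  // explicit "fork"
	pthread_join(t, NULL);                      // explicit "join"
	return 0;
}

Note how start and join are explicit calls the user can forget or misorder, which is exactly the class of error the constructor/destructor semantics in the chapter removes.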
doc/proposals/concurrency/text/cforall.tex
r136ccd7 r4ee36bf0 1 1 % ====================================================================== 2 2 % ====================================================================== 3 \chapter{Cforall crash course}3 \chapter{Cforall Overview} 4 4 % ====================================================================== 5 5 % ====================================================================== 6 6 7 Th is thesis presents the design for a set of concurrency features in \CFA. Since it is a new dialect of C, the following is a quick introduction to thelanguage, specifically tailored to the features needed to support concurrency.7 The following is a quick introduction to the \CFA language, specifically tailored to the features needed to support concurrency. 8 8 9 \CFA is a extension of ISO-C and therefore supports all of the same paradigms as C. It is a non-object oriented system language, meaning most of the major abstractions have either no runtime overhead or can be opt-out easily. Like C, the basics of \CFA revolve around structures and routines, which are thin abstractions over machine code. The vast majority of the code produced by the \CFA translator respects memory-layouts and calling-conventions laid out by C. Interestingly, while \CFA is not an object-oriented language, lacking the concept of a received (e.g.: this), it does have some notion of objects\footnote{C defines the term objects as : [Where to I get the C11 reference manual?]}, most importantly construction and destruction of objects. Most of the following pieces of code can be found on the \CFA website \cite{www-cfa} 9 \CFA is a extension of ISO-C and therefore supports all of the same paradigms as C. It is a non-object oriented system language, meaning most of the major abstractions have either no runtime overhead or can be opt-out easily. Like C, the basics of \CFA revolve around structures and routines, which are thin abstractions over machine code. The vast majority of the code produced by the \CFA translator respects memory-layouts and calling-conventions laid out by C. Interestingly, while \CFA is not an object-oriented language, lacking the concept of a receiver (e.g., this), it does have some notion of objects\footnote{C defines the term objects as : ``region of data storage in the execution environment, the contents of which can represent 10 values''\cite[3.15]{C11}}, most importantly construction and destruction of objects. Most of the following code examples can be found on the \CFA website \cite{www-cfa} 10 11 11 12 \section{References} 12 13 13 Like \CC, \CFA introduces re ferences as an alternative to pointers. In regards to concurrency, the semantics difference between pointers and references are not particularly relevant but since this document uses mostly references here is a quick overview of the semantics:14 Like \CC, \CFA introduces rebindable references providing multiple dereferecing as an alternative to pointers. 
In regards to concurrency, the semantic difference between pointers and references are not particularly relevant, but since this document uses mostly references, here is a quick overview of the semantics: 14 15 \begin{cfacode} 15 16 int x, *p1 = &x, **p2 = &p1, ***p3 = &p2, 16 &r1 = x, &&r2 = r1, &&&r3 = r2;17 &r1 = x, &&r2 = r1, &&&r3 = r2; 17 18 ***p3 = 3; //change x 18 19 r3 = 3; //change x, ***r3 … … 25 26 sizeof(&ar[1]) == sizeof(int *); //is true, i.e., the size of a reference 26 27 \end{cfacode} 27 The important t hing to take away from this code snippet is that references offer a handle to an object much like pointers but which is automatically derefferenced when convinient.28 The important take away from this code example is that references offer a handle to an object, much like pointers, but which is automatically dereferenced for convinience. 28 29 29 30 \section{Overloading} 30 31 31 Another important feature of \CFA is function overloading as in Java and \CC, where routine with the same name are selected based on the numbersand type of the arguments. As well, \CFA uses the return type as part of the selection criteria, as in Ada\cite{Ada}. For routines with multiple parameters and returns, the selection is complex.32 Another important feature of \CFA is function overloading as in Java and \CC, where routines with the same name are selected based on the number and type of the arguments. As well, \CFA uses the return type as part of the selection criteria, as in Ada\cite{Ada}. For routines with multiple parameters and returns, the selection is complex. 32 33 \begin{cfacode} 33 34 //selection based on type and number of parameters … … 45 46 double d = f(4); //select (2) 46 47 \end{cfacode} 47 This feature is particularly important for concurrency since the runtime system relies on creating different types to represent concurrency objects. Therefore, overloading is necessary to prevent the need for long prefixes and other naming conventions that prevent name clashes. As seen in chapter \ref{basics}, routine s mainis an example that benefits from overloading.48 This feature is particularly important for concurrency since the runtime system relies on creating different types to represent concurrency objects. Therefore, overloading is necessary to prevent the need for long prefixes and other naming conventions that prevent name clashes. As seen in chapter \ref{basics}, routine \code{main} is an example that benefits from overloading. 48 49 49 50 \section{Operators} 50 Overloading also extends to operators. The syntax for denoting operator-overloading is to name a routine with the symbol of the operator and question marks where the arguments of the operation would be, like so:51 Overloading also extends to operators. The syntax for denoting operator-overloading is to name a routine with the symbol of the operator and question marks where the arguments of the operation occur, e.g.: 51 52 \begin{cfacode} 52 53 int ++? (int op); //unary prefix increment … … 101 102 102 103 \section{Parametric Polymorphism} 103 Routines in \CFA can also be reused for multiple types. This is done using the \code{forall} clause which gives \CFA it's name. \code{forall} clauses allow seperatly compiled routines to support generic usage over multiple types. For example, the following sum function will work for any type which supportconstruction from 0 and addition :104 Routines in \CFA can also be reused for multiple types. This capability is done using the \code{forall} clause which gives \CFA its name. 
\code{forall} clauses allow separately compiled routines to support generic usage over multiple types. For example, the following sum function works for any type that supports construction from 0 and addition : 104 105 \begin{cfacode} 105 106 //constraint type, 0 and + … … 116 117 \end{cfacode} 117 118 118 Since writing constraints on types can become cumbersome for more constrained functions, \CFA also has the concept of traits. Traits are named collection of constraints whichcan be used both instead and in addition to regular constraints:119 Since writing constraints on types can become cumbersome for more constrained functions, \CFA also has the concept of traits. Traits are named collection of constraints that can be used both instead and in addition to regular constraints: 119 120 \begin{cfacode} 120 121 trait sumable( otype T ) { … … 130 131 131 132 \section{with Clause/Statement} 132 Since \CFA lacks the concept of a receiver, certain functions end-up needing to repeat variable names often , to solve this \CFA offers the \code{with} statementwhich opens an aggregate scope making its fields directly accessible (like Pascal).133 Since \CFA lacks the concept of a receiver, certain functions end-up needing to repeat variable names often. To remove this inconvenience, \CFA provides the \code{with} statement, which opens an aggregate scope making its fields directly accessible (like Pascal). 133 134 \begin{cfacode} 134 135 struct S { int i, j; }; 135 int mem(S & this) with this//with clause136 int mem(S & this) with (this) //with clause 136 137 i = 1; //this->i 137 138 j = 2; //this->j … … 140 141 struct S1 { ... } s1; 141 142 struct S2 { ... } s2; 142 with s1//with statement143 with (s1) //with statement 143 144 { 144 145 //access fields of s1 145 146 //without qualification 146 with s2//nesting147 with (s2) //nesting 147 148 { 148 149 //access fields of s1 and s2 … … 150 151 } 151 152 } 152 with s1, s2//scopes open in parallel153 with (s1, s2) //scopes open in parallel 153 154 { 154 155 //access fields of s1 and s2 -
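The cforall.tex chapter above relies on overloading over both argument and return types. Plain C has no such mechanism; the closest standard analogue is C11 _Generic selection, sketched below purely for comparison (the print helpers are hypothetical, and return-type selection as in the chapter's f() example has no C counterpart):

// C11 _Generic dispatches on the argument's type, approximating the
// parameter-based half of overloading described in the chapter.
#include <stdio.h>

static void print_int(int x)       { printf("int: %d\n", x); }
static void print_double(double x) { printf("double: %g\n", x); }

#define print(x) _Generic((x), int: print_int, double: print_double)(x)

int main(void) {
	print(4);     // dispatches to print_int
	print(4.0);   // dispatches to print_double
	return 0;
}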
doc/proposals/concurrency/text/concurrency.tex
r136ccd7 r4ee36bf0 4 4 % ====================================================================== 5 5 % ====================================================================== 6 Several tool can be used to solve concurrency challenges. Since many of these challenges appear with the use of mutable shared-state, some languages and libraries simply disallow mutable shared-state (Erlang~\cite{Erlang}, Haskell~\cite{Haskell}, Akka (Scala)~\cite{Akka}). In these paradigms, interaction among concurrent objects relies on message passing~\cite{Thoth,Harmony,V-Kernel} or other paradigms closely relate to networking concepts (channels\cit for example). However, in languages that use routine calls as their core abstraction-mechanism, these approaches force a clear distinction between concurrent and non-concurrent paradigms (i.e., message passing versus routine call). This distinction in turn means that, in order to be effective, programmers need to learn two sets of designs patterns. While this distinction can be hidden away in library code, effective use of the librairy still has to take both paradigms into account.6 Several tool can be used to solve concurrency challenges. Since many of these challenges appear with the use of mutable shared-state, some languages and libraries simply disallow mutable shared-state (Erlang~\cite{Erlang}, Haskell~\cite{Haskell}, Akka (Scala)~\cite{Akka}). In these paradigms, interaction among concurrent objects relies on message passing~\cite{Thoth,Harmony,V-Kernel} or other paradigms closely relate to networking concepts (channels\cite{CSP,Go} for example). However, in languages that use routine calls as their core abstraction-mechanism, these approaches force a clear distinction between concurrent and non-concurrent paradigms (i.e., message passing versus routine call). This distinction in turn means that, in order to be effective, programmers need to learn two sets of designs patterns. While this distinction can be hidden away in library code, effective use of the librairy still has to take both paradigms into account. 7 7 8 8 Approaches based on shared memory are more closely related to non-concurrent paradigms since they often rely on basic constructs like routine calls and shared objects. At the lowest level, concurrent paradigms are implemented as atomic operations and locks. Many such mechanisms have been proposed, including semaphores~\cite{Dijkstra68b} and path expressions~\cite{Campbell74}. However, for productivity reasons it is desireable to have a higher-level construct be the core concurrency paradigm~\cite{HPP:Study}. 9 9 10 An approach that is worth mention ning because it is gaining in popularity is transactionnal memory~\cite{Dice10}[Check citation]. While this approach is even pursued by system languages like \CC\cit, the performance and feature set is currently too restrictive to be the main concurrency paradigm for systems language, which is why it was rejected as the core paradigm for concurrency in \CFA.10 An approach that is worth mentioning because it is gaining in popularity is transactionnal memory~\cite{Dice10}[Check citation]. While this approach is even pursued by system languages like \CC\cit, the performance and feature set is currently too restrictive to be the main concurrency paradigm for systems language, which is why it was rejected as the core paradigm for concurrency in \CFA. 
11 11 12 12 One of the most natural, elegant, and efficient mechanisms for synchronization and communication, especially for shared-memory systems, is the \emph{monitor}. Monitors were first proposed by Brinch Hansen~\cite{Hansen73} and later described and extended by C.A.R.~Hoare~\cite{Hoare74}. Many programming languages---e.g., Concurrent Pascal~\cite{ConcurrentPascal}, Mesa~\cite{Mesa}, Modula~\cite{Modula-2}, Turing~\cite{Turing:old}, Modula-3~\cite{Modula-3}, NeWS~\cite{NeWS}, Emerald~\cite{Emerald}, \uC~\cite{Buhr92a} and Java~\cite{Java}---provide monitors as explicit language constructs. In addition, operating-system kernels and device drivers have a monitor-like structure, although they often use lower-level primitives such as semaphores or locks to simulate monitors. For these reasons, this project proposes monitors as the core concurrency-construct. … … 19 19 20 20 \subsection{Synchronization} 21 As for mutual-exclusion, low-level synchronisation primitives often offer good performance and good flexibility at the cost of ease of use. Again, higher-level mechanism often simplify usage by adding better coupling between synchronization and data, e.g.: message passing, or offering simple solution to otherwise involved challenges. An example is barging. As mentioned above, synchronization can be expressed as guaranteeing that event \textit{X} always happens before \textit{Y}. Most of the time, synchronisation happens around a critical section, where threads must acquire critical sections in a certain order. However, it may also be desirable to guarantee that event \textit{Z} does not occur between \textit{X} and \textit{Y}. Not satisfying this property called barging. For example, where event \textit{X} tries to effect event \textit{Y} but another thread acquires the critical section and emits \textit{Z} before \textit{Y}. Preventing or detecting barging is an involved challenge with low-level locks, which can be made much easier by higher-level constructs. This challenge is often split into two different methods, barging avoidance and barging prevention. Algorithms that use status flags and other flag variables to detect barging threads are said to be using barging avoidance while algorithms that baton-passing locks between threads instead of releasing the locks are said to be using barging prevention.21 As for mutual-exclusion, low-level synchronisation primitives often offer good performance and good flexibility at the cost of ease of use. Again, higher-level mechanism often simplify usage by adding better coupling between synchronization and data, e.g.: message passing, or offering simpler solution to otherwise involved challenges. As mentioned above, synchronization can be expressed as guaranteeing that event \textit{X} always happens before \textit{Y}. Most of the time, synchronisation happens within a critical section, where threads must acquire mutual-exclusion in a certain order. However, it may also be desirable to guarantee that event \textit{Z} does not occur between \textit{X} and \textit{Y}. Not satisfying this property called barging. For example, where event \textit{X} tries to effect event \textit{Y} but another thread acquires the critical section and emits \textit{Z} before \textit{Y}. The classic exmaple is the thread that finishes using a ressource and unblocks a thread waiting to use the resource, but the unblocked thread must compete again to acquire the resource. 
Preventing or detecting barging is an involved challenge with low-level locks, which can be made much easier by higher-level constructs. This challenge is often split into two different methods, barging avoidance and barging prevention. Algorithms that use status flags and other flag variables to detect barging threads are said to be using barging avoidance while algorithms that baton-passing locks between threads instead of releasing the locks are said to be using barging prevention. 22 22 23 23 % ====================================================================== … … 71 71 \end{tabular} 72 72 \end{center} 73 Notice how the counter is used without any explicit synchronisation and yet supports thread-safe semantics for both reading and writting .74 75 Here, the constructor(\code{?\{\}}) uses the \code{nomutex} keyword to signify that it does not acquire the monitor mutual-exclusion when constructing. This semantics is because an object not yet con structed should never be shared and therefore does not require mutual exclusion. The prefix increment operator uses \code{mutex} to protect the incrementing process from race conditions. Finally, there is a conversion operator from \code{counter_t} to \code{size_t}. This conversion may or may not require the \code{mutex} keyword depending on whether or not reading a \code{size_t} is an atomic operation.76 77 For maximum usability, monitors use \gls{multi-acq} semantics, which means a single thread can acquire multiple times the same monitorwithout deadlock. For example, figure \ref{fig:search} uses recursion and \gls{multi-acq} to print values inside a binary tree.73 Notice how the counter is used without any explicit synchronisation and yet supports thread-safe semantics for both reading and writting, which is similar in usage to \CC \code{atomic} template. 74 75 Here, the constructor(\code{?\{\}}) uses the \code{nomutex} keyword to signify that it does not acquire the monitor mutual-exclusion when constructing. This semantics is because an object not yet con\-structed should never be shared and therefore does not require mutual exclusion. The prefix increment operator uses \code{mutex} to protect the incrementing process from race conditions. Finally, there is a conversion operator from \code{counter_t} to \code{size_t}. This conversion may or may not require the \code{mutex} keyword depending on whether or not reading a \code{size_t} is an atomic operation. 76 77 For maximum usability, monitors use \gls{multi-acq} semantics, which means a single thread can acquire the same monitor multiple times without deadlock. For example, figure \ref{fig:search} uses recursion and \gls{multi-acq} to print values inside a binary tree. 78 78 \begin{figure} 79 79 \label{fig:search} … … 95 95 \end{figure} 96 96 97 Having both \code{mutex} and \code{nomutex} keywords is redundant based on the meaning of a routine having neither of these keywords. For example, given a routine without qualifiers \code{void foo(counter_t & this)}, then it is reasonable that it should default to the safest option \code{mutex}, whereas assuming \code{nomutex} is unsafe and may cause subtle errors. In fact, \code{nomutex} is the "normal" parameter behaviour, with the \code{nomutex} keyword effectively stating explicitly that "this routine is not special". Another alternative is making exactly one of these keywords mandatory, which would providethe same semantics but without the ambiguity of supporting routines with neither keyword. 
Mandatory keywords would also have the added benefit of being self-documented but at the cost of extra typing. While there are several benefits to mandatory keywords, they do bring a few challenges. Mandatory keywords in \CFA would imply that the compiler must know without doubt whether or not a parameter is a monitor or not. Since \CFA relies heavily on traits as an abstraction mechanism, the distinction between a type that is a monitor and a type that looks like a monitor can become blurred. For this reason, \CFA only has the \code{mutex} keyword and uses no keyword to mean \code{nomutex}.97 Having both \code{mutex} and \code{nomutex} keywords is redundant based on the meaning of a routine having neither of these keywords. For example, given a routine without qualifiers \code{void foo(counter_t & this)}, then it is reasonable that it should default to the safest option \code{mutex}, whereas assuming \code{nomutex} is unsafe and may cause subtle errors. In fact, \code{nomutex} is the ``normal'' parameter behaviour, with the \code{nomutex} keyword effectively stating explicitly that ``this routine is not special''. Another alternative is making exactly one of these keywords mandatory, which provides the same semantics but without the ambiguity of supporting routines with neither keyword. Mandatory keywords would also have the added benefit of being self-documented but at the cost of extra typing. While there are several benefits to mandatory keywords, they do bring a few challenges. Mandatory keywords in \CFA would imply that the compiler must know without doubt whether or not a parameter is a monitor or not. Since \CFA relies heavily on traits as an abstraction mechanism, the distinction between a type that is a monitor and a type that looks like a monitor can become blurred. For this reason, \CFA only has the \code{mutex} keyword and uses no keyword to mean \code{nomutex}. 98 98 99 99 The next semantic decision is to establish when \code{mutex} may be used as a type qualifier. Consider the following declarations: … … 113 113 int f5(monitor * mutex m []); //Not Okay : Array of unkown length 114 114 \end{cfacode} 115 Note that not all array functions are actually distinct in the type system sense. However, eventhe code generation could tell the difference, the extra information is still not sufficient to extend meaningfully the monitor call semantic.116 117 Unlike object-oriented monitors, where calling a mutex member \emph{implicitly} acquires mutual-exclusion of ten receives anobject, \CFA uses an explicit mechanism to acquire mutual-exclusion. A consequence of this approach is that it extends naturally to multi-monitor calls.115 Note that not all array functions are actually distinct in the type system. However, even if the code generation could tell the difference, the extra information is still not sufficient to extend meaningfully the monitor call semantic. 116 117 Unlike object-oriented monitors, where calling a mutex member \emph{implicitly} acquires mutual-exclusion of the receiver object, \CFA uses an explicit mechanism to acquire mutual-exclusion. A consequence of this approach is that it extends naturally to multi-monitor calls. 118 118 \begin{cfacode} 119 119 int f(MonitorA & mutex a, MonitorB & mutex b); … … 123 123 f(a,b); 124 124 \end{cfacode} 125 The capacity to acquire multiple locks before entering a critical section is called \emph{\gls{bulk-acq}}. In practice, writing multi-locking routines that do not lead to deadlocks is tricky. 
Having language support for such a feature is therefore a significant asset for \CFA. In the case presented above, \CFA guarantees that the order of aquisition is consistent across calls to routines using the same monitors as arguments. However, since \CFA monitors use \gls{multi-acq} locks, users can effectivelyforce the acquiring order. For example, notice which routines use \code{mutex}/\code{nomutex} and how this affects aquiring order:125 While OO monitors could be extended with a mutex qualifier for multiple-monitor calls, no example of this feature could be found. The capacity to acquire multiple locks before entering a critical section is called \emph{\gls{bulk-acq}}. In practice, writing multi-locking routines that do not lead to deadlocks is tricky. Having language support for such a feature is therefore a significant asset for \CFA. In the case presented above, \CFA guarantees that the order of aquisition is consistent across calls to different routines using the same monitors as arguments. This consistent ordering means acquiring multiple monitors in the way is safe from deadlock. However, users can still force the acquiring order. For example, notice which routines use \code{mutex}/\code{nomutex} and how this affects aquiring order: 126 126 \begin{cfacode} 127 127 void foo(A & mutex a, B & mutex b) { //acquire a & b … … 139 139 The \gls{multi-acq} monitor lock allows a monitor lock to be acquired by both \code{bar} or \code{baz} and acquired again in \code{foo}. In the calls to \code{bar} and \code{baz} the monitors are acquired in opposite order. 140 140 141 However, such use leads to the lock acquiring order problem. In the example above, the user uses implicit ordering in the case of function \code{foo} but explicit ordering in the case of \code{bar} and \code{baz}. This subtle mistake means that calling these routines concurrently may lead to deadlock and is therefore undefined behavior. As shown on several occasion\cit, solving this problem requires:141 However, such use leads to the lock acquiring order problem. In the example above, the user uses implicit ordering in the case of function \code{foo} but explicit ordering in the case of \code{bar} and \code{baz}. This subtle mistake means that calling these routines concurrently may lead to deadlock and is therefore undefined behavior. As shown\cit, solving this problem requires: 142 142 \begin{enumerate} 143 143 \item Dynamically tracking of the monitor-call order. 144 144 \item Implement rollback semantics. 145 145 \end{enumerate} 146 While the first requirement is already a significant constraint on the system, implementing a general rollback semantics in a C-like language is prohibitively complex \cit. In \CFA, users simply need to be carefull when acquiring multiple monitors at the same time or only use \gls{bulk-acq} of all the monitors. 147 148 \Gls{multi-acq} and \gls{bulk-acq} can be used together in interesting ways, for example:146 While the first requirement is already a significant constraint on the system, implementing a general rollback semantics in a C-like language is prohibitively complex \cit. In \CFA, users simply need to be carefull when acquiring multiple monitors at the same time or only use \gls{bulk-acq} of all the monitors. While \CFA provides only a partial solution, many system provide no solution and the \CFA partial solution handles many useful cases. 
147 148 For example, \gls{multi-acq} and \gls{bulk-acq} can be used together in interesting ways: 149 149 \begin{cfacode} 150 150 monitor bank { ... }; … … 157 157 } 158 158 \end{cfacode} 159 This example shows a trivial solution to the bank account transferproblem\cit. Without \gls{multi-acq} and \gls{bulk-acq}, the solution to this problem is much more involved and requires carefull engineering.160 161 \subs ubsection{\code{mutex} statement} \label{mutex-stmt}159 This example shows a trivial solution to the bank-account transfer-problem\cit. Without \gls{multi-acq} and \gls{bulk-acq}, the solution to this problem is much more involved and requires carefull engineering. 160 161 \subsection{\code{mutex} statement} \label{mutex-stmt} 162 162 163 163 The call semantics discussed aboved have one software engineering issue, only a named routine can acquire the mutual-exclusion of a set of monitor. \CFA offers the \code{mutex} statement to workaround the need for unnecessary names, avoiding a major software engineering problem\cit. Listing \ref{lst:mutex-stmt} shows an example of the \code{mutex} statement, which introduces a new scope in which the mutual-exclusion of a set of monitor is acquired. Beyond naming, the \code{mutex} statement has no semantic difference from a routine call with \code{mutex} parameters. … … 218 218 \end{cfacode} 219 219 220 221 % ====================================================================== 222 % ====================================================================== 223 \section{Internal scheduling} \label{insched} 220 Like threads and coroutines, monitors are defined in terms of traits with some additional language support in the form of the \code{monitor} keyword. The monitor trait is : 221 \begin{cfacode} 222 trait is_monitor(dtype T) { 223 monitor_desc * get_monitor( T & ); 224 void ^?{}( T & mutex ); 225 }; 226 \end{cfacode} 227 Note that the destructor of a monitor must be a \code{mutex} routine. This requirement ensures that the destructor has mutual-exclusion. As with any object, any call to a monitor, using \code{mutex} or otherwise, is Undefined Behaviour after the destructor has run. 228 229 % ====================================================================== 230 % ====================================================================== 231 \section{Internal scheduling} \label{intsched} 224 232 % ====================================================================== 225 233 % ====================================================================== … … 248 256 \end{cfacode} 249 257 250 There are two details to note here. First, the \code{signal} is a delayed operation, it only unblocks the waiting thread when it reaches the end of the critical section. This semantic is needed to respect mutual-exclusion. Second, in \CFA, a \code{condition} variable can be stored/created independently of a monitor. Here routine \code{foo} waits for the \code{signal} from \code{bar} before making further progress, effectively ensuring a basic ordering.251 252 An important aspect of the implementation is that \CFA does not allow barging, which means that once function \code{bar} releases the monitor, foois guaranteed to resume immediately after (unless some other thread waited on the same condition). This guarantees offers the benefit of not having to loop arount waits in order to guarantee that a condition is still met. 
The main reason \CFA offers this guarantee is that users can easily introduce barging if it becomes a necessity but adding barging prevention or barging avoidance is more involved without language support. Supporting barging prevention as well as extending internal scheduling to multiple monitors is the main source of complexity in the design of \CFA concurrency.258 There are two details to note here. First, the \code{signal} is a delayed operation, it only unblocks the waiting thread when it reaches the end of the critical section. This semantic is needed to respect mutual-exclusion. The alternative is to return immediately after the call to \code{signal}, which is significantly more restrictive. Second, in \CFA, while it is common to store a \code{condition} as a field of the monitor, a \code{condition} variable can be stored/created independently of a monitor. Here routine \code{foo} waits for the \code{signal} from \code{bar} before making further progress, effectively ensuring a basic ordering. 259 260 An important aspect of the implementation is that \CFA does not allow barging, which means that once function \code{bar} releases the monitor, \code{foo} is guaranteed to resume immediately after (unless some other thread waited on the same condition). This guarantees offers the benefit of not having to loop arount waits in order to guarantee that a condition is still met. The main reason \CFA offers this guarantee is that users can easily introduce barging if it becomes a necessity but adding barging prevention or barging avoidance is more involved without language support. Supporting barging prevention as well as extending internal scheduling to multiple monitors is the main source of complexity in the design of \CFA concurrency. 253 261 254 262 % ====================================================================== … … 257 265 % ====================================================================== 258 266 % ====================================================================== 259 It is easier to understand the problem of multi-monitor scheduling using a series of pseudo-code. Note that for simplicity in the following snippets of pseudo-code, waiting and signalling is done using an implicit condition variable, like Java built-in monitors. Indeed, \code{wait} statements always use a single condition as paremeter and waits on the monitors associated with the condition.267 It is easier to understand the problem of multi-monitor scheduling using a series of pseudo-code. Note that for simplicity in the following snippets of pseudo-code, waiting and signalling is done using an implicit condition variable, like Java built-in monitors. Indeed, \code{wait} statements always use the implicit condition as paremeter and explicitly names the monitors (A and B) associated with the condition. Note that in \CFA, condition variables are tied to a set of monitors on first use (called branding) which means that using internal scheduling with distinct sets of monitors requires one condition variable per set of monitors. 260 268 261 269 \begin{multicols}{2} … … 295 303 \end{pseudo} 296 304 \end{multicols} 297 This version uses \gls{bulk-acq} (denoted using the \&symbol), but the presence of multiple monitors does not add a particularly new meaning. Synchronization happens between the two threads in exactly the same way and order. The only difference is that mutual exclusion covers more monitors. 
On the implementation side, handling multiple monitors does add a degree of complexity as the next few examples demonstrate.298 299 While deadlock issues can occur when nesting monitors, these issues are only a symptom of the fact that locks, and by extension monitors, are not perfectly composable. For monitors, a well known deadlock problem is the Nested Monitor Problem\cit, which occurs when a \code{wait} is made on a thread that holds more than one monitor. For example, the following pseudo-code will run into the nestedmonitor problem :305 This version uses \gls{bulk-acq} (denoted using the {\sf\&} symbol), but the presence of multiple monitors does not add a particularly new meaning. Synchronization happens between the two threads in exactly the same way and order. The only difference is that mutual exclusion covers more monitors. On the implementation side, handling multiple monitors does add a degree of complexity as the next few examples demonstrate. 306 307 While deadlock issues can occur when nesting monitors, these issues are only a symptom of the fact that locks, and by extension monitors, are not perfectly composable. For monitors, a well known deadlock problem is the Nested Monitor Problem\cit, which occurs when a \code{wait} is made by a thread that holds more than one monitor. For example, the following pseudo-code runs into the nested-monitor problem : 300 308 \begin{multicols}{2} 301 309 \begin{pseudo} … … 317 325 \end{pseudo} 318 326 \end{multicols} 327 328 The \code{wait} only releases monitor \code{B} so the signalling thread cannot acquire monitor \code{A} to get to the \code{signal}. Attempting release of all acquired monitors at the \code{wait} results in another set of problems such as releasing monitor \code{C}, which has nothing to do with the \code{signal}. 329 319 330 However, for monitors as for locks, it is possible to write a program using nesting without encountering any problems if nesting is done correctly. For example, the next pseudo-code snippet acquires monitors {\sf A} then {\sf B} before waiting, while only acquiring {\sf B} when signalling, effectively avoiding the nested monitor problem. 320 331 … … 339 350 \end{multicols} 340 351 341 Listing \ref{lst:int-bulk-pseudo} shows an example where \gls{bulk-acq} adds a significant layer of complexity to the internal signalling semantics. Listing \ref{lst:int-bulk-cfa} shows the corresponding \CFA code which implements the pseudo-code in listing \ref{lst:int-bulk-pseudo}. Note that listing \ref{lst:int-bulk-cfa} uses non-\code{mutex} parameter to introduce monitor \code{b} into context. However, for the purpose of translating the given pseudo-code into \CFA-code any method of introducing new monitors into context, other than a \code{mutex} parameter, is acceptable, e.g. global variables, pointer parameters or using locals with the \code{mutex}-statement. 352 % ====================================================================== 353 % ====================================================================== 354 \subsection{Internal Scheduling - in depth} 355 % ====================================================================== 356 % ====================================================================== 357 358 A larger example is presented to show complex issuesfor \gls{bulk-acq} and all the implementation options are analyzed. 
Listing \ref{lst:int-bulk-pseudo} shows an example where \gls{bulk-acq} adds a significant layer of complexity to the internal signalling semantics, and listing \ref{lst:int-bulk-cfa} shows the corresponding \CFA code which implements the pseudo-code in listing \ref{lst:int-bulk-pseudo}. For the purpose of translating the given pseudo-code into \CFA-code any method of introducing monitor into context, other than a \code{mutex} parameter, is acceptable, e.g., global variables, pointer parameters or using locals with the \code{mutex}-statement. 342 359 343 360 \begin{figure}[!b] … … 376 393 377 394 \begin{figure}[!b] 395 \begin{center} 396 \begin{cfacode}[xleftmargin=.4\textwidth] 397 monitor A a; 398 monitor B b; 399 condition c; 400 \end{cfacode} 401 \end{center} 378 402 \begin{multicols}{2} 379 403 Waiting thread 380 404 \begin{cfacode} 381 monitor A; 382 monitor B; 383 extern condition c; 384 void foo(A & mutex a, B & b) { 405 mutex(a) { 385 406 //Code Section 1 386 407 mutex(a, b) { … … 397 418 Signalling thread 398 419 \begin{cfacode} 399 monitor A; 400 monitor B; 401 extern condition c; 402 void foo(A & mutex a, B & b) { 420 mutex(a) { 403 421 //Code Section 5 404 422 mutex(a, b) { … … 415 433 \end{figure} 416 434 417 It is particularly important to pay attention to code sections 4 and 8, which are where the existing semantics of internal scheduling need to be extended for multiple monitors. The root of the problem is that \gls{bulk-acq} is used in a context where one of the monitors is already acquired and is why it is important to define the behaviour of the previous pseudo-code. When the signaller thread reaches the location where it should "release A \& B" (line 16), it must actually transfer ownership of monitor B to the waiting thread. This ownership trasnfer is required in order to prevent barging. Since the signalling thread still needs monitor A, simply waking up the waiting thread is not an option because it would violatemutual exclusion. There are three options.435 The complexity begins at code sections 4 and 8, which are where the existing semantics of internal scheduling need to be extended for multiple monitors. The root of the problem is that \gls{bulk-acq} is used in a context where one of the monitors is already acquired and is why it is important to define the behaviour of the previous pseudo-code. When the signaller thread reaches the location where it should ``release \code{A & B}'' (line 16), it must actually transfer ownership of monitor \code{B} to the waiting thread. This ownership trasnfer is required in order to prevent barging. Since the signalling thread still needs monitor \code{A}, simply waking up the waiting thread is not an option because it violates mutual exclusion. There are three options. 418 436 419 437 \subsubsection{Delaying signals} 420 The first more obvious solution to solve the problem of multi-monitor scheduling is to keep ownership of all locks until the last lock is ready to be transferred. It can be argued that that moment is the correct time to transfer ownership when the last lock is no longer needed because this semantics fits most closely to the behaviour of single monitor scheduling. 
This solution has the main benefit of transferring ownership of groups of monitors, which simplifies the semantics from mutiple objects to a single group of objects, effectively making the existing singlemonitor semantic viable by simply changing monitors to monitor groups.438 The obvious solution to solve the problem of multi-monitor scheduling is to keep ownership of all locks until the last lock is ready to be transferred. It can be argued that that moment is when the last lock is no longer needed because this semantics fits most closely to the behaviour of single-monitor scheduling. This solution has the main benefit of transferring ownership of groups of monitors, which simplifies the semantics from mutiple objects to a single group of objects, effectively making the existing single-monitor semantic viable by simply changing monitors to monitor groups. 421 439 \begin{multicols}{2} 422 440 Waiter … … 443 461 \end{multicols} 444 462 However, this solution can become much more complicated depending on what is executed while secretly holding B (at line 10). Indeed, nothing prevents signalling monitor A on a different condition variable: 445 \begin{multicols}{2} 446 Thread 1 463 \begin{figure} 464 \begin{multicols}{3} 465 Thread $\alpha$ 447 466 \begin{pseudo}[numbers=left, firstnumber=1] 448 467 acquire A … … 453 472 \end{pseudo} 454 473 455 Thread 2456 \begin{pseudo}[numbers=left, firstnumber=6]457 acquire A458 wait A459 release A460 \end{pseudo}461 462 474 \columnbreak 463 475 464 Thread 3465 \begin{pseudo}[numbers=left, firstnumber= 9]476 Thread $\gamma$ 477 \begin{pseudo}[numbers=left, firstnumber=1] 466 478 acquire A 467 479 acquire A & B 468 480 signal A & B 469 481 release A & B 470 //Secretly keep B here471 482 signal A 472 483 release A 473 //Wakeup thread 1 or 2? 474 //Who wakes up the other thread? 475 \end{pseudo} 484 \end{pseudo} 485 486 \columnbreak 487 488 Thread $\beta$ 489 \begin{pseudo}[numbers=left, firstnumber=1] 490 acquire A 491 wait A 492 release A 493 \end{pseudo} 494 476 495 \end{multicols} 496 \caption{Dependency graph} 497 \label{lst:dependency} 498 \end{figure} 477 499 478 500 The goal in this solution is to avoid the need to transfer ownership of a subset of the condition monitors. However, this goal is unreacheable in the previous example. Depending on the order of signals (line 12 and 15) two cases can happen. … … 484 506 Note that ordering is not determined by a race condition but by whether signalled threads are enqueued in FIFO or FILO order. However, regardless of the answer, users can move line 15 before line 11 and get the reverse effect. 485 507 486 In both cases, the threads need to be able to distinguish, on a per monitor basis, which ones need to be released and which ones need to be transferred, which means monitors cannot be handled as a single homogenous group and therefore invalidates the main benefit ofthis approach.508 In both cases, the threads need to be able to distinguish, on a per monitor basis, which ones need to be released and which ones need to be transferred, which means monitors cannot be handled as a single homogenous group and therefore effectively precludes this approach. 487 509 488 510 \subsubsection{Dependency graphs} 489 In the Listing 1 pseudo-code, there is a solution which statisfies both barging prevention and mutual exclusion. If ownership of both monitors is transferred to the waiter when the signaller releases A and then the waiter transfers back ownership of A when it releases it, then the problem is solved. 
Dynamically finding the correct order is therefore the second possible solution. The problem it encounters is that it effectively boils down to resolving a dependency graph of ownership requirements. Here even the simplest of code snippets requires two transfers and it seems to increase in a manner closer to polynomial. For example, the following code, which is just a direct extension to three monitors, requires at least three ownership transfer and has multiple solutions:511 In the listing \ref{lst:int-bulk-pseudo} pseudo-code, there is a solution which statisfies both barging prevention and mutual exclusion. If ownership of both monitors is transferred to the waiter when the signaller releases \code{A & B} and then the waiter transfers back ownership of \code{A} when it releases it, then the problem is solved (\code{B} is no longer in use at this point). Dynamically finding the correct order is therefore the second possible solution. The problem it encounters is that it effectively boils down to resolving a dependency graph of ownership requirements. Here even the simplest of code snippets requires two transfers and it seems to increase in a manner closer to polynomial. For example, the following code, which is just a direct extension to three monitors, requires at least three ownership transfer and has multiple solutions: 490 512 491 513 \begin{multicols}{2} … … 514 536 515 537 \begin{figure} 516 \begin{multicols}{3}517 Thread $\alpha$518 \begin{pseudo}[numbers=left, firstnumber=1]519 acquire A520 acquire A & B521 wait A & B522 release A & B523 release A524 \end{pseudo}525 526 \columnbreak527 528 Thread $\gamma$529 \begin{pseudo}[numbers=left, firstnumber=1]530 acquire A531 acquire A & B532 signal A & B533 release A & B534 signal A535 release A536 \end{pseudo}537 538 \columnbreak539 540 Thread $\beta$541 \begin{pseudo}[numbers=left, firstnumber=1]542 acquire A543 wait A544 release A545 \end{pseudo}546 547 \end{multicols}548 \caption{Dependency graph}549 \label{lst:dependency}550 \end{figure}551 552 \begin{figure}553 538 \begin{center} 554 539 \input{dependency} 555 540 \end{center} 541 \caption{Dependency graph of the statements in listing \ref{lst:dependency}} 556 542 \label{fig:dependency} 557 \caption{Dependency graph of the statements in listing \ref{lst:dependency}}558 543 \end{figure} 559 544 560 Listing \ref{lst:dependency} is the three thread example rewritten for dependency graphs as well as the corresponding dependency graph. Figure \ref{fig:dependency} shows the corresponding dependency graph that results, where every node is a statement of one of the three threads, and the arrows the dependency of that statement. The extra challenge is that this dependency graph is effectively post-mortem, but the runtime system needs to be able to build and solve these graphs as the dependency unfolds. Resolving dependency graph being a complex and expensive endeavour, this solution is not the preffered one.545 Listing \ref{lst:dependency} is the three thread example rewritten for dependency graphs. Figure \ref{fig:dependency} shows the corresponding dependency graph that results, where every node is a statement of one of the three threads, and the arrows the dependency of that statement (e.g., $\alpha1$ must happen before $\alpha2$). The extra challenge is that this dependency graph is effectively post-mortem, but the runtime system needs to be able to build and solve these graphs as the dependency unfolds. 
Resolving dependency graph being a complex and expensive endeavour, this solution is not the preffered one. 561 546 562 547 \subsubsection{Partial signalling} \label{partial-sig} 563 Finally, the solution that is chosen for \CFA is to use partial signalling. Consider the following case: 564 565 \begin{multicols}{2} 566 \begin{pseudo}[numbers=left] 567 acquire A 568 acquire A & B 569 wait A & B 570 release A & B 571 release A 572 \end{pseudo} 573 574 \columnbreak 575 576 \begin{pseudo}[numbers=left, firstnumber=6] 577 acquire A 578 acquire A & B 579 signal A & B 580 release A & B 581 //... More code 582 release A 583 \end{pseudo} 584 \end{multicols} 585 The partial signalling solution transfers ownership of monitor B at lines 10 but does not wake the waiting thread since it is still using monitor A. Only when it reaches line 11 does it actually wakeup the waiting thread. This solution has the benefit that complexity is encapsulated into only two actions, passing monitors to the next owner when they should be release and conditionally waking threads if all conditions are met. This solution has a much simpler implementation than a dependency graph solving algorithm which is why it was chosen. 548 Finally, the solution that is chosen for \CFA is to use partial signalling. Again using listing \ref{lst:int-bulk-pseudo}, the partial signalling solution transfers ownership of monitor B at lines 10 but does not wake the waiting thread since it is still using monitor A. Only when it reaches line 11 does it actually wakeup the waiting thread. This solution has the benefit that complexity is encapsulated into only two actions, passing monitors to the next owner when they should be release and conditionally waking threads if all conditions are met. This solution has a much simpler implementation than a dependency graph solving algorithm which is why it was chosen. Furthermore, after being fully implemented, this solution does not appear to have any downsides worth mentionning. 586 549 587 550 % ====================================================================== … … 590 553 % ====================================================================== 591 554 % ====================================================================== 592 An important note is that, until now, signalling a monitor was a delayed operation. The ownership of the monitor is transferred only when the monitor would have otherwise been released, not at the point of the \code{signal} statement. However, in some cases, it may be more convenient for users to immediately transfer ownership to the thread that is waiting for cooperation, which is achieved using the \code{signal_block} routine\footnote{name to be discussed}.593 594 The example in listing \ref{lst:datingservice} highlights the difference in behaviour. As mentioned, \code{signal} only transfers ownership once the current critical section exits, this behaviour cause the need for additional synchronisation when a two-way handshake is needed. To avoid this extraneous synchronisation, the \code{condition} type offers the \code{signal_block} routine which handle two-way handshakes as shown in the example. This removes the need for a second condition variables and simplifies programming. 
Like every other monitor semantic, \code{signal_block} uses barging prevention which means mutual-exclusion is baton-passed both on the frond-end and the back-end of the call to \code{signal_block}, meaning no other thread can acquire the monitor neither before nor after the call.595 555 \begin{figure} 596 556 \begin{tabular}{|c|c|} … … 622 582 girlPhoneNo = phoneNo; 623 583 624 //wake boy fro nchair584 //wake boy from chair 625 585 signal(exchange); 626 586 } … … 669 629 girlPhoneNo = phoneNo; 670 630 671 //wake boy fro nchair631 //wake boy from chair 672 632 signal(exchange); 673 633 } … … 696 656 \label{lst:datingservice} 697 657 \end{figure} 658 An important note is that, until now, signalling a monitor was a delayed operation. The ownership of the monitor is transferred only when the monitor would have otherwise been released, not at the point of the \code{signal} statement. However, in some cases, it may be more convenient for users to immediately transfer ownership to the thread that is waiting for cooperation, which is achieved using the \code{signal_block} routine\footnote{name to be discussed}. 659 660 The example in listing \ref{lst:datingservice} highlights the difference in behaviour. As mentioned, \code{signal} only transfers ownership once the current critical section exits, this behaviour requires additional synchronisation when a two-way handshake is needed. To avoid this extraneous synchronisation, the \code{condition} type offers the \code{signal_block} routine, which handles the two-way handshake as shown in the example. This removes the need for a second condition variables and simplifies programming. Like every other monitor semantic, \code{signal_block} uses barging prevention, which means mutual-exclusion is baton-passed both on the frond-end and the back-end of the call to \code{signal_block}, meaning no other thread can acquire the monitor neither before nor after the call. 698 661 699 662 % ====================================================================== … … 702 665 % ====================================================================== 703 666 % ====================================================================== 704 An alternative to internal scheduling is to use external scheduling.667 An alternative to internal scheduling is external scheduling, e.g., in \uC. 
705 668 \begin{center} 706 \begin{tabular}{|c|c| }707 Internal Scheduling & External Scheduling \\669 \begin{tabular}{|c|c|c|} 670 Internal Scheduling & External Scheduling & Go\\ 708 671 \hline 709 \begin{ucppcode} 672 \begin{ucppcode}[tabsize=3] 710 673 _Monitor Semaphore { 711 674 condition c; … … 713 676 public: 714 677 void P() { 715 if(inUse) wait(c); 678 if(inUse) 679 wait(c); 716 680 inUse = true; 717 681 } … … 721 685 } 722 686 } 723 \end{ucppcode}&\begin{ucppcode} 687 \end{ucppcode}&\begin{ucppcode}[tabsize=3] 724 688 _Monitor Semaphore { 725 689 … … 727 691 public: 728 692 void P() { 729 if(inUse) _Accept(V); 693 if(inUse) 694 _Accept(V); 730 695 inUse = true; 731 696 } … … 735 700 } 736 701 } 737 \end{ucppcode} 702 \end{ucppcode}&\begin{gocode}[tabsize=3] 703 type MySem struct { 704 inUse bool 705 c chan bool 706 } 707 708 // acquire 709 func (s MySem) P() { 710 if s.inUse { 711 select { 712 case <-s.c: 713 } 714 } 715 s.inUse = true 716 } 717 718 // release 719 func (s MySem) V() { 720 s.inUse = false 721 722 //This actually deadlocks 723 //when single thread 724 s.c <- false 725 } 726 \end{gocode} 738 727 \end{tabular} 739 728 \end{center} 740 This method is more constrained and explicit, which helps users tone down the undeterministic nature of concurrency. Indeed, as the following examples demonstrates, external scheduling allows users to wait for events from other threads without the concern of unrelated events occuring. External scheduling can generally be done either in terms of control flow (e.g., \uC with \code{_Accept}) or in terms of data (e.g. Go with channels). Of course, both of these paradigms have their own strenghts and weaknesses but for this project control-flow semantics were chosen to stay consistent with the rest of the languages semantics. Two challenges specific to \CFA arise when trying to add external scheduling with loose object definitions and multi-monitor routines. The previous example shows a simple use \code{_Accept} versus \code{wait}/\code{signal} and its advantages. Note that while other languages often use \code{accept}/\code{select} as the core external scheduling keyword, \CFA uses \code{waitfor} to prevent name collisions with existing socket \acrshort{api}s.741 742 In the case of internal scheduling, the call to \code{wait} only guarantees that \code{V} is the last routine to access the monitor. This entails that a third routine, say \code{isInUse()}, may have acquired mutual exclusion several times while routine \code{P} was waiting. On the other hand, external scheduling guarantees that while routine \code{P} was waiting, no routine other than \code{V} couldacquire the monitor.729 This method is more constrained and explicit, which helps users tone down the undeterministic nature of concurrency. Indeed, as the following examples demonstrates, external scheduling allows users to wait for events from other threads without the concern of unrelated events occuring. External scheduling can generally be done either in terms of control flow (e.g., \uC with \code{_Accept}) or in terms of data (e.g., Go with channels). Of course, both of these paradigms have their own strenghts and weaknesses but for this project control-flow semantics were chosen to stay consistent with the rest of the languages semantics. Two challenges specific to \CFA arise when trying to add external scheduling with loose object definitions and multi-monitor routines. The previous example shows a simple use \code{_Accept} versus \code{wait}/\code{signal} and its advantages. 
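For comparison, a rough sketch of the same semaphore written with the \CFA \code{waitfor} statement (presented below) could look as follows; the monitor name and field are illustrative only, and the flag is assumed to be initialized to false elsewhere:
\begin{cfacode}[tabsize=3]
monitor Semaphore {
	bool inUse;
};

void V(Semaphore & mutex s);

//acquire
void P(Semaphore & mutex s) {
	if( s.inUse )
		waitfor(V, s);	//block until a call to V on s is accepted
	s.inUse = true;
}

//release
void V(Semaphore & mutex s) {
	s.inUse = false;
}
\end{cfacode}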
Note that while other languages often use \code{accept}/\code{select} as the core external scheduling keyword, \CFA uses \code{waitfor} to prevent name collisions with existing socket \acrshort{api}s. 730 731 For the \code{P} member above using internal scheduling, the call to \code{wait} only guarantees that \code{V} is the last routine to access the monitor, allowing a third routine, say \code{isInUse()}, acquire mutual exclusion several times while routine \code{P} is waiting. On the other hand, external scheduling guarantees that while routine \code{P} is waiting, no routine other than \code{V} can acquire the monitor. 743 732 744 733 % ====================================================================== … … 747 736 % ====================================================================== 748 737 % ====================================================================== 749 In \uC, monitor declarations include an exhaustive list of monitor operations. Since \CFA is not object oriented it becomes both more difficult to implement but also less clear for theuser:738 In \uC, monitor declarations include an exhaustive list of monitor operations. Since \CFA is not object oriented, monitors become both more difficult to implement and less clear for a user: 750 739 751 740 \begin{cfacode} … … 786 775 \end{center} 787 776 788 There are other alternatives to these pictures, but in the case of this picture, implementing a fast accept check is relatively easy. Indeed simply updating a bitmask when the acceptor queue changes is enough to have a check that executes in a single instruction, even with a fairly large number (e.g. 128) of mutex members. This technique cannot be used in \CFA because it relies on the fact that the monitor type declares all the acceptable routines. For OO languages this does not compromise much since monitors already have an exhaustive list of member routines. However, for \CFA this is not the case; routines can be added to a type anywhere after its declaration. Its important to note that the bitmask approach does not actually require an exhaustive list of routines, but it requires a dense unique ordering of routines with an upper-bound and that ordering must be consistent across translation units.789 The alternative is to have a picture like this one:777 There are other alternatives to these pictures, but in the case of this picture, implementing a fast accept check is relatively easy. Restricted to a fixed number of mutex members, N, the accept check reduces to updating a bitmask when the acceptor queue changes, a check that executes in a single instruction even with a fairly large number (e.g., 128) of mutex members. This technique cannot be used in \CFA because it relies on the fact that the monitor type enumerates (declares) all the acceptable routines. For OO languages this does not compromise much since monitors already have an exhaustive list of member routines. However, for \CFA this is not the case; routines can be added to a type anywhere after its declaration. It is important to note that the bitmask approach does not actually require an exhaustive list of routines, but it requires a dense unique ordering of routines with an upper-bound and that ordering must be consistent across translation units. 
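As a rough illustration of why the OO approach is fast (and not a description of the \CFA implementation), once every mutex member is assigned a dense index at compile time the accept check reduces to a single mask test; all names below are hypothetical:
\begin{cfacode}
struct oo_monitor {
	unsigned long long accept_mask;	//one bit per mutex member
	//...
};

//dense, consistent indexes for the mutex members
enum { P_ID = 0, V_ID = 1 /*, ... up to 64 members per word */ };

static inline int is_accepted( struct oo_monitor * this, unsigned id ) {
	return (this->accept_mask >> id) & 1;	//single test once the mask is loaded
}
\end{cfacode}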
778 The alternative is to alter the implementeation like this: 790 779 791 780 \begin{center} … … 793 782 \end{center} 794 783 795 Not storing the mask inside the monitor means that the storage for the mask information can vary between calls to \code{waitfor}, allowing for more flexibility and extensions. Storing an array of function-pointers would solve the issue of uniquely identifying acceptable routines. However, the single instruction bitmask compare has been replaced by dereferencing a pointer followed by a linear search. Furthermore, supporting nested external scheduling may now require additionnal searches on calls to waitfor to check if a routine is already queued in. 796 797 Note that in the second picture, tasks need to always keep track of through which routine they are attempting to acquire the monitor and the routine mask needs to have both a function pointer and a set of monitors, as will be discussed in the next section. These details where omitted from the picture for the sake of simplifying the representation. 798 799 At this point we must make a decision between flexibility and performance. Many design decisions in \CFA achieve both flexibility and performance, for example polymorphic routines add significant flexibility but inlining them means the optimizer can easily remove any runtime cost. Here however, the cost of flexibility cannot be trivially removed. In the end, the most flexible approach has been chosen since it allows users to write programs that would otherwise be prohibitively hard to write. This decision is based on the assumption that writing fast but inflexible locks is closer to a solved problems than writing locks that are as flexible as external scheduling in \CFA. 784 Generating a mask dynamically means that the storage for the mask information can vary between calls to \code{waitfor}, allowing for more flexibility and extensions. Storing an array of accepted function-pointers replaces the single instruction bitmask compare with dereferencing a pointer followed by a linear search. Furthermore, supporting nested external scheduling (e.g., listing \ref{lst:nest-ext}) may now require additionnal searches on calls to \code{waitfor} statement to check if a routine is already queued in. 785 786 \begin{figure} 787 \begin{cfacode} 788 monitor M {}; 789 void foo( M & mutex a ) {} 790 void bar( M & mutex b ) { 791 //Nested in the waitfor(bar, c) call 792 waitfor(foo, b); 793 } 794 void baz( M & mutex c ) { 795 waitfor(bar, c); 796 } 797 798 \end{cfacode} 799 \caption{Example of nested external scheduling} 800 \label{lst:nest-ext} 801 \end{figure} 802 803 Note that in the second picture, tasks need to always keep track of which routine they are attempting to acquire the monitor and the routine mask needs to have both a function pointer and a set of monitors, as will be discussed in the next section. These details where omitted from the picture for the sake of simplifying the representation. 804 805 At this point, a decision must be made between flexibility and performance. Many design decisions in \CFA achieve both flexibility and performance, for example polymorphic routines add significant flexibility but inlining them means the optimizer can easily remove any runtime cost. Here however, the cost of flexibility cannot be trivially removed. In the end, the most flexible approach has been chosen since it allows users to write programs that would otherwise be prohibitively hard to write. 
This decision is based on the assumption that writing fast but inflexible locks is closer to a solved problems than writing locks that are as flexible as external scheduling in \CFA. 800 806 801 807 % ====================================================================== … … 811 817 void f(M & mutex a); 812 818 813 void g(M & mutex a, M & mutex b) {814 waitfor(f); // ambiguous, keep a pass b or other way around?819 void g(M & mutex b, M & mutex c) { 820 waitfor(f); //two monitors M => unkown which to pass to f(M & mutex) 815 821 } 816 822 \end{cfacode} … … 828 834 \end{cfacode} 829 835 830 This syntax is unambiguous. Both locks are acquired and kept . When routine \code{f} is called, the lock for monitor \code{b} is temporarily transferred from \code{g} to \code{f} (while \code{g} still holds lock \code{a}). This behavior can be extended to multi-monitor waitforstatement as follows.836 This syntax is unambiguous. Both locks are acquired and kept by \code{g}. When routine \code{f} is called, the lock for monitor \code{b} is temporarily transferred from \code{g} to \code{f} (while \code{g} still holds lock \code{a}). This behavior can be extended to multi-monitor \code{waitfor} statement as follows. 831 837 832 838 \begin{cfacode} … … 842 848 Note that the set of monitors passed to the \code{waitfor} statement must be entirely contained in the set of monitors already acquired in the routine. \code{waitfor} used in any other context is Undefined Behaviour. 843 849 844 An important behavior to note is that what happenswhen a set of monitors only match partially :850 An important behavior to note is when a set of monitors only match partially : 845 851 846 852 \begin{cfacode} … … 865 871 \end{cfacode} 866 872 867 While the equivalent can happen when using internal scheduling, the fact that conditions are specific to a set of monitors means that users have to use two different condition variables. In both cases, partially matching monitor sets does not wake-up the waiting thread. It is also important to note that in the case of external scheduling, as for routine calls, the order of parameters is i mportant; \code{waitfor(f,a,b)} and \code{waitfor(f,b,a)} are to distinctwaiting condition.873 While the equivalent can happen when using internal scheduling, the fact that conditions are specific to a set of monitors means that users have to use two different condition variables. In both cases, partially matching monitor sets does not wake-up the waiting thread. It is also important to note that in the case of external scheduling, as for routine calls, the order of parameters is irrelevant; \code{waitfor(f,a,b)} and \code{waitfor(f,b,a)} are indistinguishable waiting condition. 868 874 869 875 % ====================================================================== … … 873 879 % ====================================================================== 874 880 875 Syntactically, the \code{waitfor} statement takes a function identifier and a set of monitors. While the set of monitors can be any list of expression, the function name is more restricted . This is because the compiler validates at compile time the validity of the waitforstatement. It checks that the set of monitor passed in matches the requirements for a function call. Listing \ref{lst:waitfor} shows various usage of the waitfor statement and which are acceptable. The choice of the function type is made ignoring any non-\code{mutex} parameter. 
One limitation of the current implementation is that it does not handle overloading.881 Syntactically, the \code{waitfor} statement takes a function identifier and a set of monitors. While the set of monitors can be any list of expression, the function name is more restricted because the compiler validates at compile time the validity of the function type and the parameters used with the \code{waitfor} statement. It checks that the set of monitor passed in matches the requirements for a function call. Listing \ref{lst:waitfor} shows various usage of the waitfor statement and which are acceptable. The choice of the function type is made ignoring any non-\code{mutex} parameter. One limitation of the current implementation is that it does not handle overloading. 876 882 \begin{figure} 877 883 \begin{cfacode} … … 898 904 waitfor(f2, a1, a2); //Incorrect : Mutex arguments don't match 899 905 waitfor(f1, 1); //Incorrect : 1 not a mutex argument 900 waitfor(f 4, a1); //Incorrect : f9 not a function901 waitfor(*fp, a1 ); //Incorrect : fp not a identifier906 waitfor(f9, a1); //Incorrect : f9 function does not exist 907 waitfor(*fp, a1 ); //Incorrect : fp not an identifier 902 908 waitfor(f4, a1); //Incorrect : f4 ambiguous 903 909 … … 909 915 \end{figure} 910 916 911 Finally, for added flexibility, \CFA supports constructing complex waitfor mask using the \code{or}, \code{timeout} and \code{else}. Indeed, multiple \code{waitfor} can be chained together using \code{or}; this chain will form a single statement which will baton-pass to any one function that fits one of the function+monitor set which was passed in. To eanble users to tell which was the accepted function, \code{waitfor}s are followed by a statement (including the null statement \code{;}) or a compound statement. When multiple \code{waitfor} are chained together, only the statement corresponding to the accepted function is executed. A \code{waitfor} chain can also be followed by a \code{timeout}, to signify an upper bound on the wait, or an \code{else}, to signify that the call should be non-blocking, that is only check of a matching functionalready arrived and return immediately otherwise. Any and all of these clauses can be preceded by a \code{when} condition to dynamically construct the mask based on some current state. Listing \ref{lst:waitfor2}, demonstrates several complex masks and some incorrect ones.917 Finally, for added flexibility, \CFA supports constructing complex \code{waitfor} mask using the \code{or}, \code{timeout} and \code{else}. Indeed, multiple \code{waitfor} can be chained together using \code{or}; this chain forms a single statement that uses baton-pass to any one function that fits one of the function+monitor set passed in. To eanble users to tell which accepted function is accepted, \code{waitfor}s are followed by a statement (including the null statement \code{;}) or a compound statement. When multiple \code{waitfor} are chained together, only the statement corresponding to the accepted function is executed. A \code{waitfor} chain can also be followed by a \code{timeout}, to signify an upper bound on the wait, or an \code{else}, to signify that the call should be non-blocking, that is only check of a matching function call already arrived and return immediately otherwise. Any and all of these clauses can be preceded by a \code{when} condition to dynamically construct the mask based on some current state. Listing \ref{lst:waitfor2}, demonstrates several complex masks and some incorrect ones. 
912 918 913 919 \begin{figure} … … 973 979 \label{lst:waitfor2} 974 980 \end{figure} 981 982 % ====================================================================== 983 % ====================================================================== 984 \subsection{Waiting for the destructor} 985 % ====================================================================== 986 % ====================================================================== 987 An interesting use for the \code{waitfor} statement is destructor semantics. Indeed, the \code{waitfor} statement can accept any \code{mutex} routine, which includes the destructor (see section \ref{data}). However, with the semantics discussed until now, waiting for the destructor does not make any sense since using an object after its destructor is called is undefined behaviour. The simplest approach is to disallow \code{waitfor} on a destructor. However, a more expressive approach is to flip execution ordering when waiting for the destructor, meaning that waiting for the destructor allows the destructor to run after the current \code{mutex} routine, similarly to how a condition is signalled. 988 \begin{figure} 989 \begin{cfacode} 990 monitor Executer {}; 991 struct Action; 992 993 void ^?{} (Executer & mutex this); 994 void execute(Executer & mutex this, const Action & ); 995 void run (Executer & mutex this) { 996 while(true) { 997 waitfor(execute, this); 998 or waitfor(^?{} , this) { 999 break; 1000 } 1001 } 1002 } 1003 \end{cfacode} 1004 \caption{Example of an executor which executes action in series until the destructor is called.} 1005 \label{lst:dtor-order} 1006 \end{figure} 1007 For example, listing \ref{lst:dtor-order} shows an example of an executor with an infinite loop, which waits for the destructor to break out of this loop. Switching the semantic meaning introduces an idiomatic way to terminate a task and/or wait for its termination via destruction. -
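A hypothetical use of this executor could look as follows, assuming some worker thread is currently blocked in \code{run(e)}; the helper routines are illustrative only:
\begin{cfacode}
void client(Executer & e, const Action & a) {
	execute(e, a);	//accepted by the waitfor(execute, this) in run()
}

void shutdown_example() {
	Executer e;
	//...start a worker that calls run(e), submit work with execute...
}	//here ^?{}(e) is accepted by waitfor(^?{}, this): run() breaks out
	//of its loop and only then does the destruction actually proceed
\end{cfacode}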
doc/proposals/concurrency/text/future.tex
r136ccd7 r4ee36bf0 5 5 % ====================================================================== 6 6 7 Concurrency and parallelism is still a very active field that strongly benefits from hardware advances. As such certain features that aren't necessarily mature enough in their current state could become relevant in the lifetime of \CFA. 8 \section{Non-Blocking IO} 7 \section{Flexible Scheduling} \label{futur:sched} 9 8 10 9 11 \section{Other concurrency tools} 10 \section{Non-Blocking IO} \label{futur:nbio} 11 While most of the parallelism tools 12 However, many modern workloads are not bound on computation but on IO operations, an common case being webservers and XaaS (anything as a service). These type of workloads often require significant engineering around amortising costs of blocking IO operations. While improving throughtput of these operations is outside what \CFA can do as a language, it can help users to make better use of the CPU time otherwise spent waiting on IO operations. The current trend is to use asynchronous programming using tools like callbacks and/or futurs and promises\cit. However, while these are valid solutions, they lead to code that is harder to read and maintain because it is much less linear 12 13 13 14 14 \section{Implicit threading} 15 % Finally, simpler applications can benefit greatly from having implicit parallelism. That is, parallelism that does not rely on the user to write concurrency. This type of parallelism can be achieved both at the language level and at the system level. 16 % 17 % \begin{center} 18 % \begin{tabular}[t]{|c|c|c|} 19 % Sequential & System Parallel & Language Parallel \\ 20 % \begin{lstlisting} 21 % void big_sum(int* a, int* b, 22 % int* out, 23 % size_t length) 24 % { 25 % for(int i = 0; i < length; ++i ) { 26 % out[i] = a[i] + b[i]; 27 % } 28 % } 29 % 30 % 31 % 32 % 33 % 34 % int* a[10000]; 35 % int* b[10000]; 36 % int* c[10000]; 37 % //... fill in a and b ... 38 % big_sum(a, b, c, 10000); 39 % \end{lstlisting} &\begin{lstlisting} 40 % void big_sum(int* a, int* b, 41 % int* out, 42 % size_t length) 43 % { 44 % range ar(a, a + length); 45 % range br(b, b + length); 46 % range or(out, out + length); 47 % parfor( ai, bi, oi, 48 % [](int* ai, int* bi, int* oi) { 49 % oi = ai + bi; 50 % }); 51 % } 52 % 53 % int* a[10000]; 54 % int* b[10000]; 55 % int* c[10000]; 56 % //... fill in a and b ... 57 % big_sum(a, b, c, 10000); 58 % \end{lstlisting}&\begin{lstlisting} 59 % void big_sum(int* a, int* b, 60 % int* out, 61 % size_t length) 62 % { 63 % for (ai, bi, oi) in (a, b, out) { 64 % oi = ai + bi; 65 % } 66 % } 67 % 68 % 69 % 70 % 71 % 72 % int* a[10000]; 73 % int* b[10000]; 74 % int* c[10000]; 75 % //... fill in a and b ... 76 % big_sum(a, b, c, 10000); 77 % \end{lstlisting} 78 % \end{tabular} 79 % \end{center} 80 % 15 16 \section{Other concurrency tools} \label{futur:tools} 81 17 82 18 83 \section{Multiple Paradigms} 19 \section{Implicit threading} \label{futur:implcit} 20 Simpler applications can benefit greatly from having implicit parallelism. That is, parallelism that does not rely on the user to write concurrency. This type of parallelism can be achieved both at the language level and at the library level. The cannonical example of implcit parallelism is parallel for loops, which are the simplest example of a divide and conquer algorithm\cit. Listing \ref{lst:parfor} shows three different code examples that accomplish pointwise sums of large arrays. 
Note that none of these example explicitly declare any concurrency or parallelism objects. 21 22 \begin{figure} 23 \begin{center} 24 \begin{tabular}[t]{|c|c|c|} 25 Sequential & Library Parallel & Language Parallel \\ 26 \begin{cfacode}[tabsize=3] 27 void big_sum( 28 int* a, int* b, 29 int* o, 30 size_t len) 31 { 32 for( 33 int i = 0; 34 i < len; 35 ++i ) 36 { 37 o[i]=a[i]+b[i]; 38 } 39 } 84 40 85 41 86 \section{Transactions} 42 43 44 45 int* a[10000]; 46 int* b[10000]; 47 int* c[10000]; 48 //... fill in a & b 49 big_sum(a,b,c,10000); 50 \end{cfacode} &\begin{cfacode}[tabsize=3] 51 void big_sum( 52 int* a, int* b, 53 int* o, 54 size_t len) 55 { 56 range ar(a, a+len); 57 range br(b, b+len); 58 range or(o, o+len); 59 parfor( ai, bi, oi, 60 []( int* ai, 61 int* bi, 62 int* oi) 63 { 64 oi=ai+bi; 65 }); 66 } 67 68 69 int* a[10000]; 70 int* b[10000]; 71 int* c[10000]; 72 //... fill in a & b 73 big_sum(a,b,c,10000); 74 \end{cfacode}&\begin{cfacode}[tabsize=3] 75 void big_sum( 76 int* a, int* b, 77 int* o, 78 size_t len) 79 { 80 parfor (ai,bi,oi) 81 in (a, b, o ) 82 { 83 oi = ai + bi; 84 } 85 } 86 87 88 89 90 91 92 93 int* a[10000]; 94 int* b[10000]; 95 int* c[10000]; 96 //... fill in a & b 97 big_sum(a,b,c,10000); 98 \end{cfacode} 99 \end{tabular} 100 \end{center} 101 \caption{For loop to sum numbers: Sequential, using library parallelism and language parallelism.} 102 \label{lst:parfor} 103 \end{figure} 104 105 Implicit parallelism is a general solution and therefore is 106 107 \section{Multiple Paradigms} \label{futur:paradigms} 108 109 110 \section{Transactions} \label{futur:transaction} 111 Concurrency and parallelism is still a very active field that strongly benefits from hardware advances. As such certain features that aren't necessarily mature enough in their current state could become relevant in the lifetime of \CFA. -
doc/proposals/concurrency/text/internals.tex
r136ccd7 r4ee36bf0 1 1 2 2 \chapter{Behind the scene} 3 4 5 % ====================================================================== 6 % ====================================================================== 7 \section{Implementation Details: Interaction with polymorphism} 8 % ====================================================================== 9 % ====================================================================== 10 Depending on the choice of semantics for when monitor locks are acquired, interaction between monitors and \CFA's concept of polymorphism can be complex to support. However, it is shown that entry-point locking solves most of the issues. 11 12 First of all, interaction between \code{otype} polymorphism and monitors is impossible since monitors do not support copying. Therefore, the main question is how to support \code{dtype} polymorphism. Since a monitor's main purpose is to ensure mutual exclusion when accessing shared data, this implies that mutual exclusion is only required for routines that do in fact access shared data. However, since \code{dtype} polymorphism always handles incomplete types (by definition), no \code{dtype} polymorphic routine can access shared data since the data requires knowledge about the type. Therefore, the only concern when combining \code{dtype} polymorphism and monitors is to protect access to routines. 13 14 Before looking into complex control-flow, it is important to present the difference between the two acquiring options : callsite and entry-point locking, i.e. acquiring the monitors before making a mutex routine call or as the first operation of the mutex routine-call. For example: 3 There are several challenges specific to \CFA when implementing concurrency. These challenges are direct results of \gls{bulk-acq} and loose object definitions. These two constraints are to root cause of most design decisions in the implementation. Furthermore, to avoid the head-aches of dynamically allocating memory in a concurrent environment, the internal-scheduling design is (almost) entirely free of mallocs and other dynamic memory allocation scheme. This is to avoid the chicken and egg problem \cite{Chicken} of having a memory allocator that relies on the threading system and a threading system that relies on the runtime. This extra goal, means that memory management is a constant concern in the design of the system. 4 5 The main memory concern for concurrency is queues. All blocking operations are made by parking threads onto queues. These queues need to be intrinsic\cit to avoid the need memory allocation. This entails that all the fields needed to keep track of all needed information. Since many conconcurrency operations can use an unbound amount of memory (depending on \gls{bulk-acq}) statically defining information in the intrusive fields of threads is insufficient. The only variable sized container that does not require memory allocation is the callstack, which is heavily used in the implementation of internal scheduling. Particularly the GCC extension variable length arrays which is used extensively. 6 7 Since stack allocation is based around scope, the first step of the implementation is to identify the scopes that are available to store the information, and which of these can have a variable length. The threads and the condition both allow a fixed amount of memory to be stored, while mutex-routines and the actual blocking call allow for an unbound amount (though the later is preferable in terms of performance). 
8 9 Note that since the major contributions of this thesis are extending monitor semantics to \gls{bulk-acq} and loose object definitions, any challenges that are not resulting of these characteristiques of \CFA are consired as problems which have already been solved and therefore will not be discussed further. 10 11 % ====================================================================== 12 % ====================================================================== 13 \section{Mutex routines} 14 % ====================================================================== 15 % ====================================================================== 16 17 The first step towards the monitor implementation is simple mutex-routines using monitors. In the single monitor case, this is done using the entry/exit procedure highlighted in listing \ref{lst:entry1}. This entry/exit procedure doesn't actually have to be extended to support multiple monitors, indeed it is sufficient to enter/leave monitors one-by-one as long as the order is correct to prevent deadlocks\cit. In \CFA, ordering of monitor relies on memory ordering, this is sufficient because all objects are guaranteed to have distinct non-overlaping memory layouts and mutual-exclusion for a monitor is only defined for its lifetime, meaning that destroying a monitor while it is acquired is undefined behavior. When a mutex call is made, the concerned monitors are agregated into an variable-length pointer array and sorted based on pointer values. This array is concerved during the entire duration of the mutual-exclusion and it's ordering reused extensively. 15 18 \begin{figure} 16 \label{fig:locking-site} 19 \begin{multicols}{2} 20 Entry 21 \begin{pseudo} 22 if monitor is free 23 enter 24 elif already own the monitor 25 continue 26 else 27 block 28 increment recursions 29 \end{pseudo} 30 \columnbreak 31 Exit 32 \begin{pseudo} 33 decrement recursion 34 if recursion == 0 35 if entry queue not empty 36 wake-up thread 37 \end{pseudo} 38 \end{multicols} 39 \caption{Initial entry and exit routine for monitors} 40 \label{lst:entry1} 41 \end{figure} 42 43 \subsection{ Details: Interaction with polymorphism} 44 Depending on the choice of semantics for when monitor locks are acquired, interaction between monitors and \CFA's concept of polymorphism can be more complex to support. However, it is shown that entry-point locking solves most of the issues. 45 46 First of all, interaction between \code{otype} polymorphism and monitors is impossible since monitors do not support copying. Therefore, the main question is how to support \code{dtype} polymorphism. It is important to present the difference between the two acquiring options : callsite and entry-point locking, i.e. acquiring the monitors before making a mutex routine call or as the first operation of the mutex routine-call. For example: 47 \begin{figure}[H] 17 48 \begin{center} 18 \setlength\tabcolsep{1.5pt}19 49 \begin{tabular}{|c|c|c|} 20 50 Mutex & \gls{callsite-locking} & \gls{entry-point-locking} \\ … … 67 97 \end{center} 68 98 \caption{Callsite vs entry-point locking for mutex calls} 69 \ end{figure}70 71 72 Note the \code{mutex} keyword relies on the type system, which means that in cases where a generic monitor routine is actually desired, writing a mutex routine is possible with the proper trait, which is possible because monitors are designed in terms a trait. 
For example:99 \label{fig:locking-site} 100 \end{figure} 101 102 Note the \code{mutex} keyword relies on the type system, which means that in cases where a generic monitor routine is actually desired, writing a mutex routine is possible with the proper trait, for example: 73 103 \begin{cfacode} 74 104 //Incorrect: T is not a monitor … … 81 111 \end{cfacode} 82 112 83 84 % ====================================================================== 85 % ====================================================================== 86 \section{Internal scheduling: Implementation} \label{inschedimpl} 87 % ====================================================================== 88 % ====================================================================== 89 There are several challenges specific to \CFA when implementing internal scheduling. These challenges are direct results of \gls{bulk-acq} and loose object definitions. These two constraints are to root cause of most design decisions in the implementation of internal scheduling. Furthermore, to avoid the head-aches of dynamically allocating memory in a concurrent environment, the internal-scheduling design is entirely free of mallocs and other dynamic memory allocation scheme. This is to avoid the chicken and egg problem \cite{Chicken} of having a memory allocator that relies on the threading system and a threading system that relies on the runtime. This extra goal, means that memory management is a constant concern in the design of the system. 90 91 The main memory concern for concurrency is queues. All blocking operations are made by parking threads onto queues. These queues need to be intrinsic\cit to avoid the need memory allocation. This entails that all the fields needed to keep track of all needed information. Since internal scheduling can use an unbound amount of memory (depending on \gls{bulk-acq}) statically defining information information in the intrusive fields of threads is insufficient. The only variable sized container that does not require memory allocation is the callstack, which is heavily used in the implementation of internal scheduling. Particularly the GCC extension variable length arrays which is used extensively. 92 93 Since stack allocation is based around scope, the first step of the implementation is to identify the scopes that are available to store the information, and which of these can have a variable length. In the case of external scheduling, the threads and the condition both allow a fixed amount of memory to be stored, while mutex-routines and the actual blocking call allow for an unbound amount (though adding too much to the mutex routine stack size can become expansive faster). 94 95 The following figure is the traditionnal illustration of a monitor : 113 Both entry-point and callsite locking are valid implementations. The current \CFA implementations uses entry-point locking because it seems to require less work if done using \gls{raii}, effectively transferring the burden of implementation to object construction/destruction. The same could be said of callsite locking, the difference being that the later does not necessarily have an existing scope that matches exactly the scope of the mutual exclusion, i.e.: the function body. 
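Combining the monitor aggregation described above with \gls{raii}-based entry-point locking, a sketch of the generated prologue might look like the following; \code{sort_by_address} and \code{monitor_guard_t} are illustrative names rather than the runtime's actual interface:
\begin{cfacode}
//User-level signature
void f(A & mutex a, B & mutex b);

//Hypothetical generated body for entry-point locking
void f(A & a, B & b) {
	//aggregate the monitors into a variable-length pointer array
	monitor_desc * mons[] = { get_monitor(a), get_monitor(b) };
	sort_by_address(mons, 2);	//one global acquisition order

	//RAII guard: construction acquires, destruction releases,
	//so every return path leaves the monitors unlocked
	monitor_guard_t guard = { mons, 2 };

	//...original function body...
}
\end{cfacode}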
114 115 % ====================================================================== 116 % ====================================================================== 117 \section{Threading} \label{impl:thread} 118 % ====================================================================== 119 % ====================================================================== 120 121 Figure \ref{fig:system1} shows a high-level picture of the \CFA runtime system with regards to concurrency. 122 123 \begin{figure} 124 \begin{center} 125 {\resizebox{\textwidth}{!}{\input{system.pstex_t}}} 126 \end{center} 127 \caption{Overview of the entire system} 128 \label{fig:system1} 129 \end{figure} 130 131 \subsection{Context Switching} 132 As mentioned in section \ref{coroutine}, coroutines are a stepping stone for implementing threading, because they share the same mechanism for context-switching between different stacks. To improve performance and simplicity, context-switching is implemented using the following assumption: all context-switches happen inside a specific function call. This assumption means that the basic recipe for a context-switch is simply to copy all callee-saved registers onto the stack and then swap the stack registers with those of the target coroutine/thread. Note that the instruction pointer can be left untouched since the context-switch always happens inside the same function. In the case of coroutines, that is the entire story. Threads, however, do not context-switch between each other directly; they context-switch to processors, which is where the scheduling happens. This method is called a 2-step context-switch and has the advantage of having a clear distinction between user code and the "kernel", where scheduling and other system operations happen. Obviously, this doubles the context-switch cost because threads must context-switch to an intermediate stack. However, the performance of the 2-step context-switch is still superior to a \code{pthread_yield} (see section \ref{results}). Additionally, for users in need of optimal performance, it is important to note that having a 2-step context-switch as the default does not prevent \CFA from offering a 1-step context-switch to use manually (or as part of monitors). This option is not currently present in \CFA but the changes required to add it are strictly additive. 133 134 \subsection{Processors} 135 Parallelism in \CFA is built around using processors to specify how much parallelism is desired. \CFA processors are object wrappers around kernel threads, specifically pthreads in the current implementation of \CFA. Indeed, any parallelism must go through operating-system libraries. However, \glspl{cfathread} are still the main source of concurrency; processors are simply the underlying source of parallelism. Indeed, processor kernel threads simply fetch a user-level thread from the scheduler and run it; they are effectively executors for user threads. The main benefit of this approach is that it offers a well-defined boundary between kernel code and user code, for example, kernel-thread quiescing, scheduling and interrupt handling. Processors internally use coroutines to take advantage of the existing context-switching semantics. 136 137 \subsection{Stack management} 138 One of the challenges of this system is to reduce the footprint as much as possible. Specifically, all pthreads created also have a stack created with them, which should be used as much as possible. 
Normally, coroutines also create their own stack to run on; however, in the case of the coroutines used for processors, these coroutines run directly on the kernel thread stack, effectively stealing the processor stack. The exception to this rule is the Main Processor, i.e. the initial kernel thread that is given to any program. In order to respect user expectations, the stack of the initial kernel thread, the main stack of the program, is used by the main user thread rather than the main processor. 139 140 \subsection{Preemption} 141 Finally, an important aspect for any complete threading system is preemption. As mentioned in chapter \ref{basics}, preemption introduces an extra degree of uncertainty, which enables multiple threads to interleave transparently, rather than requiring threads to cooperate among themselves for proper scheduling and CPU distribution. Indeed, preemption is desirable because it adds a degree of isolation between tasks. In a fully cooperative system, any thread that runs into a long loop can starve other threads, while in a preemptive system starvation can still occur but it does not depend on every thread having to yield or block on a regular basis, which significantly reduces the programmer burden. Obviously, preemption is not optimal for every workload; however, any preemptive system can become a cooperative system by making the time-slices extremely large, which is why \CFA uses a preemptive threading system. 142 143 Preemption in \CFA is based on kernel timers, which are used to run a discrete-event simulation. Every processor keeps track of the current time and registers an expiration time with the preemption system. When the preemption system receives a change in preemption, it sorts these expiration times in a list and sets a kernel timer for the closest one, effectively stepping through preemption events on each signal sent by the timer. These timers use the Linux signal {\tt SIGALRM}, which is delivered to the process. This is important because, when delivering a signal to a process, the kernel documentation states that the signal can be delivered to any kernel thread for which the signal is not blocked, i.e.: 144 \begin{quote} 145 A process-directed signal may be delivered to any one of the threads that does not currently have the signal blocked. If more than one of the threads has the signal unblocked, then the kernel chooses an arbitrary thread to which to deliver the signal. 146 SIGNAL(7) - Linux Programmer's Manual 147 \end{quote} 148 For the sake of simplicity, and in order to prevent two threads from receiving alarms simultaneously, \CFA programs block the {\tt SIGALRM} signal on every thread except one. Because of how involuntary context-switches are handled, the kernel thread handling {\tt SIGALRM} cannot also be a processor thread. 149 150 Involuntary context-switching is done by sending {\tt SIGUSR1} to the corresponding processor and having the thread yield from inside the signal handler. This effectively context-switches away from the signal handler back to the kernel, and the signal-handler frame is unwound when the thread is scheduled again. This means that a signal handler can start on one kernel thread and terminate on a second kernel thread (but the same user thread). It is important to note that signal handlers save and restore signal masks because user-thread migration can cause a signal mask to migrate from one kernel thread to another. 
This is only a problem if the kernel threads among which a user thread can migrate differ in terms of signal masks. However, since the kernel thread handling preemption requires a different signal mask, executing user threads on the kernel alarm thread can cause deadlocks. For this reason, the alarm thread sits in a tight loop around a system call to \code{sigwait}, or more specifically \code{sigwaitinfo}, requiring very little CPU time for preemption. One final detail about the alarm thread is how to wake it when additional communication is required (e.g. on thread termination). This is also done using {\tt SIGALRM}, but sent through \code{pthread_sigqueue}. Indeed, \code{sigwait} can differentiate signals sent from \code{pthread_sigqueue} from signals sent from alarms or the kernel. 151 152 \subsection{Scheduler} 153 Finally, an aspect that has not been mentioned yet is the scheduling algorithm. Currently, the \CFA scheduler uses a single ready queue for all processors. While this is not the highest-performance algorithm, it has the significant advantage of being robust to heterogeneous workloads, and a general-purpose runtime favours robustness over raw speed. This very simple scheduling approach is sufficient for the context of this thesis. As mentioned in section \ref{futur:sched}, the scheduler needs to be revisited once multiple clusters are supported; the most pressing update is to move from a single queue to multiple queues with work stealing, since work sharing can increase the standard deviation of performance. 169 170 171 % ====================================================================== 172 % ====================================================================== 173 \section{Internal scheduling} \label{impl:intsched} 174 % ====================================================================== 175 % ====================================================================== 176 To ease the understanding of monitors, like many other concepts, they are generally represented graphically. While non-scheduled monitors are simple enough for a graphical representation to be useful, internal scheduling is complex enough to justify a visual representation. The following figure is the traditional illustration of a monitor: 96 177 97 178 \begin{center} … 99 180 \end{center} 100 181 101 For \CFA, the previous picture does not have support for blocking multiple monitors on a single condition. To support \gls{bulk-acq} two changes to this picture are required. First, it doesn't make sense to tie the condition to a single monitor since blocking two monitors as one would require arbitrarily picking a monitor to hold the condition. Secondly, the object waiting on the conditions and AS-stack cannot simply contain the waiting thread since a single thread can potentially wait on multiple monitors. As mentionned in section \ref{inschedimpl}, the handling in multiple monitors is done by partially passing, which entails that each concerned monitor needs to have a node object. However, for waiting on the condition, since all threads need to wait together, a single object needs to be queued in the condition. Moving out the condition and updating the node types yields : 182 This picture has several components, the two most important being the entry-queue and the AS-stack. 
The entry-queue is an (almost) FIFO list where threads waiting to enter are parked, while the AS-stack is a FILO list used for threads that have been signaled or otherwise marked as running next. For \CFA, the previous picture does not have support for blocking multiple monitors on a single condition. To support \gls{bulk-acq} two changes to this picture are required. First, it does not make sense to tie the condition to a single monitor since blocking two monitors as one would require arbitrarily picking a monitor to hold the condition. Secondly, the object waiting on the conditions and AS-stack cannot simply contain the waiting thread since a single thread can potentially wait on multiple monitors. As mentioned in section \ref{intsched}, the handling of multiple monitors is done by partially passing, which entails that each concerned monitor needs to have a node object. However, for waiting on the condition, since all threads need to wait together, a single object needs to be queued in the condition. Moving out the condition and updating the node types yields: 102 183 103 184 \begin{center} … 105 186 106 187 107 \newpage 108 109 This picture and the proper entry and leave algorithms is the fundamental implementation of internal scheduling. 110 188 This picture and the proper entry and leave algorithms are the fundamental implementation of internal scheduling (see listing \ref{lst:entry2}). 189 190 \begin{figure}[b] 111 191 \begin{multicols}{2} 112 192 Entry 113 \begin{pseudo}[numbers=left] 193 \begin{pseudo} 114 194 if monitor is free 115 195 enter 116 elif I already own the monitor 196 elif already own the monitor 117 197 continue 118 198 else … 123 203 \columnbreak 124 204 Exit 125 \begin{pseudo}[numbers=left, firstnumber=8] 205 \begin{pseudo} 126 206 decrement recursion 127 207 if recursion == 0 … 135 215 \end{pseudo} 136 216 \end{multicols} 137 138 Some important things to notice about the exit routine. The solution discussed in \ref{inschedimpl} can be seen on line 11 of the previous pseudo code. Basically, the solution boils down to having a seperate data structure for the condition queue and the AS-stack, and unconditionally transferring ownership of the monitors but only unblocking the thread when the last monitor has trasnferred ownership. This solution is safe as well as preventing any potential barging. 139 140 % ====================================================================== 141 % ====================================================================== 142 \section{Implementation Details: External scheduling queues} 143 % ====================================================================== 144 % ====================================================================== 145 To support multi-monitor external scheduling means that some kind of entry-queues must be used that is aware of both monitors. However, acceptable routines must be aware of the entry queues which means they must be stored inside at least one of the monitors that will be acquired. This in turn adds the requirement a systematic algorithm of disambiguating which queue is relavant regardless of user ordering. The proposed algorithm is to fall back on monitors lock ordering and specify that the monitor that is acquired first is the lock with the relevant entry queue. This assumes that the lock acquiring order is static for the lifetime of all concerned objects but that is a reasonable constraint. 
This algorithm choice has two consequences, the entry queue of the highest priority monitor is no longer a true FIFO queue and the queue of the lowest priority monitor is both required and probably unused. The queue can no longer be a FIFO queue because instead of simply containing the waiting threads in order arrival, they also contain the second mutex. Therefore, another thread with the same highest priority monitor but a different lowest priority monitor may arrive first but enter the critical section after a thread with the correct pairing. Secondly, since it may not be known at compile time which monitor will be the lowest priority monitor, every monitor needs to have the correct queues even though it is probable that half the multi-monitor queues will go unused for the entire duration of the program. 146 147 148 \section{Internals} 149 The complete mask can be pushed to any one, we are in a context where we already have full ownership of (at least) every concerned monitor and therefore monitors will refuse all calls no matter what. 217 \caption{Entry and exit routine for monitors with internal scheduling} 218 \label{lst:entry2} 219 \end{figure} 220 221 There are some important things to notice about the exit routine. The solution discussed in section \ref{intsched} can be seen in the exit routine of listing \ref{lst:entry2}. Basically, the solution boils down to having a separate data structure for the condition queue and the AS-stack, and unconditionally transferring ownership of the monitors but only unblocking the thread when the last monitor has transferred ownership. This solution is deadlock safe as well as preventing any potential barging. 222 223 The data structures used for the AS-stack are reused extensively for external scheduling, but in the case of internal scheduling, the data is allocated using variable-length arrays on the callstack of the \code{wait} and \code{signal_block} routines. 224 225 % ====================================================================== 226 % ====================================================================== 227 \section{External scheduling} 228 % ====================================================================== 229 % ====================================================================== 230 Similarly to internal scheduling, external scheduling for multiple monitors relies on the idea that entry-queues are no longer specific to a single monitor, as mentioned in section \ref{extsched}. This means that some kind of entry-queue must be used that is aware of both monitors and holds the threads that are currently waiting to enter the critical section. This challenge is solved for internal scheduling by having the entry-queues in conditions no longer be tied to a monitor, effectively allowing conditions to be moved outside of monitors. However, in the case of external scheduling, acceptable routines must be aware of the entry queues, which means they must be stored inside at least one of the monitors that will be acquired. This in turn adds the requirement of a systematic algorithm for disambiguating which monitor holds the relevant queue regardless of user ordering. The proposed algorithm is to fall back on monitor lock ordering and specify that the monitor that is acquired first is the one with the relevant entry queue. This assumes that the lock acquiring order is static for the lifetime of all concerned objects but that is a reasonable constraint. 
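As a small illustration of the disambiguation rule described above, the following sketch shows how the queue-holding monitor can be selected once the monitor array has been sorted; the helper name \code{queue_owner} is hypothetical and only mirrors the prose, not the actual runtime code.
\begin{cfacode}
// Sketch: after the mutex entry code sorts the monitor array by address,
// the first monitor (the one acquired first) is the one whose entry queue
// is used for external scheduling.
static inline monitor_desc * queue_owner( monitor_desc * monitors [], __lock_size_t count ) {
	verify( count > 0 );   // a waitfor always names at least one monitor
	return monitors[0];    // lowest address == acquired first == holds the queue
}
\end{cfacode}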
231 232 This algorithm choice has two consequences: the entry queue of the highest priority monitor is no longer a true FIFO queue and the queue of the lowest priority monitor is both required and probably unused. The queue can no longer be a FIFO queue because instead of simply containing the waiting threads in order of arrival, they also contain a set of monitors. Therefore, another thread whose set contains the same highest priority monitor but different lower priority monitors may arrive first but enter the critical section after a thread with the correct pairing. Secondly, since it is not known at compile time which monitor will be the lowest priority monitor, every monitor needs to have the correct queues even though it is probable that some queues will go unused for the entire duration of the program, for example if a monitor is only used in a pair. 233 234 Therefore, the following modifications need to be made to support external scheduling: 235 \begin{itemize} 236 \item The threads waiting on the entry-queue need to keep track of which routine they are trying to enter, and using which set of monitors. The \code{mutex} routine already has all the required information on its stack, so the thread only needs to keep a pointer to that information. 237 \item The monitors need to keep a mask of acceptable routines. This mask contains, for each acceptable routine, a routine pointer and an array of monitors to go with it. It also needs storage to keep track of which routine was accepted. Since this information is not specific to any monitor, the monitors actually contain a pointer to an integer on the stack of the waiting thread (a sketch of this mask is given after this list). Note that the complete mask can be pushed to any owned monitor, regardless of \code{when} statements, since the \code{waitfor} statement is used in a context where the thread already has full ownership of (at least) every concerned monitor and therefore monitors will refuse all calls no matter what. 238 \item The entry/exit routines need to be updated as shown in listing \ref{lst:entry3}. 239 \end{itemize} 240 241 Finally, to support the ordering inversion of destructors, the code generation needs to be modified to use a special entry routine. This routine is needed because of the storage requirements of the call-order inversion. Indeed, when waiting for the destructors, storage is needed for the waiting context and the lifetime of said storage needs to outlive the waiting operation it is needed for. For regular \code{waitfor} statements, the callstack of the routine itself matches this requirement but it is no longer the case when waiting for the destructor since it is pushed onto the AS-stack for later. 
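Before presenting the adjusted entry and exit routines, the following sketch summarizes the acceptable-routine mask described in the list above. The field names of \code{__waitfor_mask_t} follow the runtime's \code{invoke.h} shown later in this changeset, while the layout of \code{__acceptable_t} is an assumption made purely for illustration.
\begin{cfacode}
// Sketch of the mask stored in monitors for external scheduling.
// __waitfor_mask_t mirrors invoke.h; __acceptable_t's fields are assumed.
struct __acceptable_t {
	fptr_t func;                     // mutex routine this clause accepts
	struct monitor_desc ** list;     // monitors the clause applies to
	__lock_size_t size;              // number of monitors in the clause
};
struct __waitfor_mask_t {
	short * accepted;                // points to the index of the accepted clause
	                                 // on the waiting thread's stack, -1 if none
	struct __acceptable_t * clauses; // list of acceptable routines, null if any
	__lock_size_t size;              // number of acceptable routines
};
\end{cfacode}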
The waitfor semantics can then be adjusted correspondingly, as seen in listing \ref{lst:entry-dtor} 242 243 \begin{figure} 244 \begin{multicols}{2} 245 Entry 246 \begin{pseudo} 247 if monitor is free 248 enter 249 elif already own the monitor 250 continue 251 elif matches waitfor mask 252 push waiter to AS-stack 253 continue 254 else 255 block 256 increment recursion 257 \end{pseudo} 258 \columnbreak 259 Exit 260 \begin{pseudo} 261 decrement recursion 262 if recursion == 0 263 if signal_stack not empty 264 set_owner to thread 265 if all monitors ready 266 wake-up thread 267 268 if entry queue not empty 269 wake-up thread 270 \end{pseudo} 271 \end{multicols} 272 \caption{Entry and exit routine for monitors with internal scheduling and external scheduling} 273 \label{lst:entry3} 274 \end{figure} 275 276 \begin{figure} 277 \begin{multicols}{2} 278 Destructor Entry 279 \begin{pseudo} 280 if monitor is free 281 enter 282 elif already own the monitor 283 increment recursion 284 return 285 create wait context 286 if matches waitfor mask 287 reset mask 288 push self to AS-stack 289 baton pass 290 else 291 wait 292 increment recursion 293 \end{pseudo} 294 \columnbreak 295 Waitfor 296 \begin{pseudo} 297 lock all monitors 298 if matching thread is already there 299 if found destructor 300 push destructor to AS-stack 301 unlock all monitors 302 else 303 push self to AS-stack 304 baton pass 305 return 306 307 if non-blocking 308 Unlock all monitors 309 Return 310 311 push self to AS-stack 312 set waitfor mask 313 block 314 return 315 \end{pseudo} 316 \end{multicols} 317 \caption{Pseudo code for the \code{waitfor} routine and the \code{mutex} entry routine for destructors} 318 \label{lst:entry-dtor} 319 \end{figure} -
doc/proposals/concurrency/text/intro.tex
r136ccd7 r4ee36bf0 3 3 % ====================================================================== 4 4 5 This thesis provides a minimal concurrency \acrshort{api} that is simple, efficient and can be reused to build higher-level features. The simplest possible concurrency system is a thread and a lock but this low-level approach is hard to master. An easier approach for users is to support higher-level constructs as the basis of concurrency. Indeed, for highly productive concurrent programming, high-level approaches are much more popular~\cite{HPP:Study}. Examples are task based, message passing and implicit threading. The high-level approach and its minimal \acrshort{api} are tested in a dialect of C, call \CFA. [Is there value to say that this thesis is also an early definition of the \CFA language and library in regards to concurrency?] 5 This thesis provides a minimal concurrency \acrshort{api} that is simple, efficient and can be reused to build higher-level features. The simplest possible concurrency system is a thread and a lock but this low-level approach is hard to master. An easier approach for users is to support higher-level constructs as the basis of concurrency. Indeed, for highly productive concurrent programming, high-level approaches are much more popular~\cite{HPP:Study}. Examples are task-based, message-passing and implicit threading. The high-level approach and its minimal \acrshort{api} are tested in a dialect of C, called \CFA. Furthermore, the proposed \acrshort{api} doubles as an early definition of the \CFA language and library. This thesis also comes with an implementation of the concurrency library for \CFA as well as all the required language features added to the source-to-source translator. 6 6 7 7 There are actually two problems that need to be solved in the design of concurrency for a programming language: which concurrency and which parallelism tools are available to the programmer. While these two concepts are often combined, they are in fact distinct, requiring different tools~\cite{Buhr05a}. Concurrency tools need to handle mutual exclusion and synchronization, while parallelism tools are about performance, cost and resource utilization. -
doc/proposals/concurrency/text/parallelism.tex
r136ccd7 r4ee36bf0 16 16 17 17 \subsection{Fibers : user-level threads without preemption} 18 A popular varient of \glspl{uthread} is what is often refered to as \glspl{fiber}. However, \glspl{fiber} do not present meaningful semantical differences with \glspl{uthread}. Advocates of \glspl{fiber} list their high performance and ease of implementation as majors strenghts of \glspl{fiber} but the performance difference between \glspl{uthread} and \glspl{fiber} is controversial, and the ease of implementation, while true, is a weak argument in the context of language design. Therefore this proposal largely ignore fibers. 18 A popular variant of \glspl{uthread} is what is often referred to as \glspl{fiber}. However, \glspl{fiber} do not present meaningful semantic differences with \glspl{uthread}. The significant difference between \glspl{uthread} and \glspl{fiber} is the lack of \gls{preemption} in the latter. Advocates of \glspl{fiber} list their high performance and ease of implementation as major strengths, but the performance difference between \glspl{uthread} and \glspl{fiber} is controversial, and the ease of implementation, while true, is a weak argument in the context of language design. Therefore this proposal largely ignores fibers. 19 19 20 20 An example of a language that uses fibers is Go~\cite{Go} … 26 26 27 27 \subsection{Paradigm performance} 28 While the choice between the three paradigms listed above may have significant performance implication, it is difficult to pindown the performance implications of chosing a model at the language level. Indeed, in many situations one of these paradigms may show better performance but it all strongly depends on the workload. Having a large amount of mostly independent units of work to execute almost guarantess that the \gls{pool} based system has the best performance thanks to the lower memory overhead (i.e., not thread stack per job). However, interactions among jobs can easily exacerbate contention. User-level threads allow fine-grain context switching, which results in better resource utilisation, but a context switch is more expensive and the extra control means users need to tweak more variables to get the desired performance. Finally, if the units of uninterrupted work are large enough the paradigm choice is largely amortised by the actual work done. 29 30 \TODO 28 While the choice between the three paradigms listed above may have significant performance implications, it is difficult to pin down the performance implications of choosing a model at the language level. Indeed, in many situations one of these paradigms may show better performance, but it all strongly depends on the workload. Having a large amount of mostly independent units of work to execute almost guarantees that the \gls{pool}-based system has the best performance thanks to the lower memory overhead (i.e., no thread stack per job). However, interactions among jobs can easily exacerbate contention. User-level threads allow fine-grain context switching, which results in better resource utilisation, but a context switch is more expensive and the extra control means users need to tweak more variables to get the desired performance. Finally, if the units of uninterrupted work are large enough, the paradigm choice is largely amortised by the actual work done. 
31 29 32 30 \section{The \protect\CFA\ Kernel : Processors, Clusters and Threads}\label{kernel} 33 31 32 \Glspl{cfacluster} have not been fully implemented in the context of this thesis; currently, \CFA only supports one \gls{cfacluster}, the initial one. The objective of \glspl{cfacluster} is to group \glspl{kthread} with identical settings together. \Glspl{uthread} can be scheduled on the \glspl{kthread} of a given \gls{cfacluster}, allowing organization between \glspl{kthread} and \glspl{uthread}. It is important that \glspl{kthread} belonging to the same \gls{cfacluster} have homogeneous settings, otherwise migrating a \gls{uthread} from one \gls{kthread} to another can cause issues. 34 33 35 34 \subsection{Future Work: Machine setup}\label{machine} … 37 36 38 37 \subsection{Paradigms}\label{cfaparadigms} 39 Given these building blocks, it is possible to reproduce all three of the popular paradigms. Indeed, \glspl{uthread} is the default paradigm in \CFA. However, disabling \gls pl{preemption} on the \gls{cfacluster} means \glspl{cfathread} effectively become \glspl{fiber}. Since several \glspl{cfacluster} with different scheduling policy can coexist in the same application, this allows \glspl{fiber} and \glspl{uthread} to coexist in the runtime of an application. Finally, it is possible to build executors for thread pools from \glspl{uthread} or \glspl{fiber}. 38 Given these building blocks, it is possible to reproduce all three of the popular paradigms. Indeed, \glspl{uthread} are the default paradigm in \CFA. However, disabling \gls{preemption} on a \gls{cfacluster} means \glspl{cfathread} effectively become \glspl{fiber}. Since several \glspl{cfacluster} with different scheduling policies can coexist in the same application, this allows \glspl{fiber} and \glspl{uthread} to coexist in the runtime of an application. Finally, it is possible to build executors for thread pools from \glspl{uthread} or \glspl{fiber}.
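As a rough illustration of the building blocks described in this section, the following sketch shows how an application could group processors onto a cluster. The constructor forms mirror the kernel header shown later in this changeset, but the exact initialization syntax is an assumption, not the final user-facing interface, and no preemption-rate setting is shown.
\begin{cfacode}
// Sketch only: one cluster executed by two kernel-thread processors.
cluster cl;                  // group of kernel threads with identical settings
processor proc1 = { &cl };   // kernel thread fetching user threads from cl
processor proc2 = { &cl };   // a second kernel thread for more parallelism
// user threads created on this cluster are scheduled across proc1 and proc2
\end{cfacode}
-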
doc/proposals/concurrency/text/together.tex
r136ccd7 r4ee36bf0 36 36 } 37 37 \end{cfacode} 38 One of the obvious complaints of the previous code snippet (other than its toy-like simplicity) is that it does not handle exit conditions and just goes on for 38 One of the obvious complaints of the previous code snippet (other than its toy-like simplicity) is that it does not handle exit conditions and just goes on forever. Luckily, the monitor semantics can also be used to clearly enforce a shutdown order in a concise manner : 39 39 \begin{cfacode} 40 40 // Visualization declaration -
doc/proposals/concurrency/thesis.tex
r136ccd7 r4ee36bf0 35 35 \usepackage[pagewise]{lineno} 36 36 \usepackage{fancyhdr} 37 \usepackage{float} 37 38 \renewcommand{\linenumberfont}{\scriptsize\sffamily} 39 \usepackage{siunitx} 40 \sisetup{ binary-units=true } 38 41 \input{style} % bespoke macros used in the document 39 42 \usepackage[dvips,plainpages=false,pdfpagelabels,pdfpagemode=UseNone,colorlinks=true,pagebackref=true,linkcolor=blue,citecolor=blue,urlcolor=blue,pagebackref=true,breaklinks=true]{hyperref} … … 107 110 \input{together} 108 111 112 \input{results} 113 109 114 \input{future} 110 115 -
doc/proposals/concurrency/version
r136ccd7 r4ee36bf0 1 0.1 0.2121 0.11.47 -
src/InitTweak/GenInit.cc
r136ccd7 r4ee36bf0 214 214 } 215 215 // a type is managed if it appears in the map of known managed types, or if it contains any polymorphism (is a type variable or generic type containing a type variable) 216 return managedTypes.find( SymTab::Mangler::mangle ( type ) ) != managedTypes.end() || GenPoly::isPolyType( type );216 return managedTypes.find( SymTab::Mangler::mangleConcrete( type ) ) != managedTypes.end() || GenPoly::isPolyType( type ); 217 217 } 218 218 … … 232 232 Type * type = InitTweak::getPointerBase( params.front()->get_type() ); 233 233 assert( type ); 234 managedTypes.insert( SymTab::Mangler::mangle ( type ) );234 managedTypes.insert( SymTab::Mangler::mangleConcrete( type ) ); 235 235 } 236 236 } … … 242 242 if ( ObjectDecl * field = dynamic_cast< ObjectDecl * >( member ) ) { 243 243 if ( isManaged( field ) ) { 244 // generic parameters should not play a role in determining whether a generic type is constructed - construct all generic types, so that 245 // polymorphic constructors make generic types managed types 244 246 StructInstType inst( Type::Qualifiers(), aggregateDecl ); 245 managedTypes.insert( SymTab::Mangler::mangle ( &inst ) );247 managedTypes.insert( SymTab::Mangler::mangleConcrete( &inst ) ); 246 248 break; 247 249 } -
src/ResolvExpr/AlternativeFinder.cc
r136ccd7 r4ee36bf0 312 312 Cost convCost = conversionCost( actualType, formalType, indexer, env ); 313 313 PRINT( 314 std::cerr << std::endl << "cost is " << convCost << std::endl;314 std::cerr << std::endl << "cost is " << convCost << std::endl; 315 315 ) 316 316 if ( convCost == Cost::infinity ) { … … 318 318 } 319 319 convCost.incPoly( polyCost( formalType, env, indexer ) + polyCost( actualType, env, indexer ) ); 320 PRINT( 321 std::cerr << "cost with polycost is " << convCost << std::endl; 322 ) 320 323 return convCost; 321 324 } … … 370 373 if ( function->get_isVarArgs() ) { 371 374 convCost.incUnsafe(); 375 PRINT( std::cerr << "end of formals with varargs function: inc unsafe: " << convCost << std::endl; ; ) 372 376 // convert reference-typed expressions to value-typed expressions 373 377 referenceToRvalueConversion( *actualExpr ); … … 378 382 } 379 383 Type * formalType = (*formal)->get_type(); 380 PRINT(381 std::cerr << std::endl << "converting ";382 actualType->print( std::cerr, 8 );383 std::cerr << std::endl << " to ";384 formalType->print( std::cerr, 8 );385 std::cerr << std::endl << "environment is: ";386 alt.env.print( std::cerr, 8 );387 std::cerr << std::endl;388 )389 384 convCost += computeExpressionConversionCost( *actualExpr, formalType, indexer, alt.env ); 390 385 ++formal; // can't be in for-loop update because of the continue -
src/ResolvExpr/CurrentObject.cc
r136ccd7 r4ee36bf0 260 260 261 261 AggregateIterator( const std::string & kind, const std::string & name, Type * inst, const MemberList & members ) : kind( kind ), name( name ), inst( inst ), members( members ), curMember( members.begin() ), sub( makeGenericSubstitution( inst ) ) { 262 PRINT( std::cerr << "Creating " << kind << "(" << name << ")"; ) 262 263 init(); 263 264 } -
src/SymTab/Mangler.cc
r136ccd7 r4ee36bf0 32 32 namespace SymTab { 33 33 std::string Mangler::mangleType( Type * ty ) { 34 Mangler mangler( false, true );34 Mangler mangler( false, true, true ); 35 35 maybeAccept( ty, mangler ); 36 36 return mangler.get_mangleName(); 37 37 } 38 38 39 Mangler::Mangler( bool mangleOverridable, bool typeMode ) 40 : nextVarNum( 0 ), isTopLevel( true ), mangleOverridable( mangleOverridable ), typeMode( typeMode ) {} 39 std::string Mangler::mangleConcrete( Type* ty ) { 40 Mangler mangler( false, false, false ); 41 maybeAccept( ty, mangler ); 42 return mangler.get_mangleName(); 43 } 44 45 Mangler::Mangler( bool mangleOverridable, bool typeMode, bool mangleGenericParams ) 46 : nextVarNum( 0 ), isTopLevel( true ), mangleOverridable( mangleOverridable ), typeMode( typeMode ), mangleGenericParams( mangleGenericParams ) {} 41 47 42 48 Mangler::Mangler( const Mangler &rhs ) : mangleName() { … … 166 172 167 173 mangleName << ( refType->get_name().length() + prefix.length() ) << prefix << refType->get_name(); 168 } 169 170 void Mangler::mangleGenericRef( ReferenceToType * refType, std::string prefix ) { 171 printQualifiers( refType ); 172 173 std::ostringstream oldName( mangleName.str() ); 174 mangleName.clear(); 175 176 mangleName << prefix << refType->get_name(); 177 178 std::list< Expression* >& params = refType->get_parameters(); 179 if ( ! params.empty() ) { 180 mangleName << "_"; 181 for ( std::list< Expression* >::const_iterator param = params.begin(); param != params.end(); ++param ) { 182 TypeExpr *paramType = dynamic_cast< TypeExpr* >( *param ); 183 assertf(paramType, "Aggregate parameters should be type expressions: %s", toString(*param).c_str()); 184 maybeAccept( paramType->get_type(), *this ); 174 175 if ( mangleGenericParams ) { 176 std::list< Expression* >& params = refType->get_parameters(); 177 if ( ! params.empty() ) { 178 mangleName << "_"; 179 for ( std::list< Expression* >::const_iterator param = params.begin(); param != params.end(); ++param ) { 180 TypeExpr *paramType = dynamic_cast< TypeExpr* >( *param ); 181 assertf(paramType, "Aggregate parameters should be type expressions: %s", toString(*param).c_str()); 182 maybeAccept( paramType->get_type(), *this ); 183 } 184 mangleName << "_"; 185 185 } 186 mangleName << "_";187 186 } 188 189 oldName << mangleName.str().length() << mangleName.str();190 mangleName.str( oldName.str() );191 187 } 192 188 193 189 void Mangler::visit( StructInstType * aggregateUseType ) { 194 if ( typeMode ) mangleGenericRef( aggregateUseType, "s" ); 195 else mangleRef( aggregateUseType, "s" ); 190 mangleRef( aggregateUseType, "s" ); 196 191 } 197 192 198 193 void Mangler::visit( UnionInstType * aggregateUseType ) { 199 if ( typeMode ) mangleGenericRef( aggregateUseType, "u" ); 200 else mangleRef( aggregateUseType, "u" ); 194 mangleRef( aggregateUseType, "u" ); 201 195 } 202 196 … … 285 279 varNums[ (*i)->name ] = std::pair< int, int >( nextVarNum++, (int)(*i)->get_kind() ); 286 280 for ( std::list< DeclarationWithType* >::iterator assert = (*i)->assertions.begin(); assert != (*i)->assertions.end(); ++assert ) { 287 Mangler sub_mangler( mangleOverridable, typeMode );281 Mangler sub_mangler( mangleOverridable, typeMode, mangleGenericParams ); 288 282 sub_mangler.nextVarNum = nextVarNum; 289 283 sub_mangler.isTopLevel = false; -
src/SymTab/Mangler.h
r136ccd7 r4ee36bf0 30 30 /// Mangle syntax tree object; primary interface to clients 31 31 template< typename SynTreeClass > 32 static std::string mangle( SynTreeClass *decl, bool mangleOverridable = true, bool typeMode = false );32 static std::string mangle( SynTreeClass *decl, bool mangleOverridable = true, bool typeMode = false, bool mangleGenericParams = true ); 33 33 /// Mangle a type name; secondary interface 34 34 static std::string mangleType( Type* ty ); 35 /// Mangle ignoring generic type parameters 36 static std::string mangleConcrete( Type* ty ); 37 35 38 36 39 virtual void visit( ObjectDecl *declaration ); … … 62 65 bool mangleOverridable; ///< Specially mangle overridable built-in methods 63 66 bool typeMode; ///< Produce a unique mangled name for a type 67 bool mangleGenericParams; ///< Include generic parameters in name mangling if true 64 68 65 Mangler( bool mangleOverridable, bool typeMode );69 Mangler( bool mangleOverridable, bool typeMode, bool mangleGenericParams ); 66 70 Mangler( const Mangler & ); 67 71 68 72 void mangleDecl( DeclarationWithType *declaration ); 69 73 void mangleRef( ReferenceToType *refType, std::string prefix ); 70 void mangleGenericRef( ReferenceToType *refType, std::string prefix );71 74 72 75 void printQualifiers( Type *type ); … … 74 77 75 78 template< typename SynTreeClass > 76 std::string Mangler::mangle( SynTreeClass *decl, bool mangleOverridable, bool typeMode ) {77 Mangler mangler( mangleOverridable, typeMode );79 std::string Mangler::mangle( SynTreeClass *decl, bool mangleOverridable, bool typeMode, bool mangleGenericParams ) { 80 Mangler mangler( mangleOverridable, typeMode, mangleGenericParams ); 78 81 maybeAccept( decl, mangler ); 79 82 return mangler.get_mangleName(); -
src/SymTab/Validate.cc
r136ccd7 r4ee36bf0 268 268 HoistStruct::hoistStruct( translationUnit ); // must happen after EliminateTypedef, so that aggregate typedefs occur in the correct order 269 269 ReturnTypeFixer::fix( translationUnit ); // must happen before autogen 270 acceptAll( translationUnit, epc ); // must happen before VerifyCtorDtorAssign, because void return objects should not exist; before LinkReferenceToTypes because it is an indexer and needs correct types for mangling 270 271 acceptAll( translationUnit, lrt ); // must happen before autogen, because sized flag needs to propagate to generated functions 271 272 acceptAll( translationUnit, genericParams ); // check as early as possible - can't happen before LinkReferenceToTypes 272 acceptAll( translationUnit, epc ); // must happen before VerifyCtorDtorAssign, because void return objects should not exist273 273 VerifyCtorDtorAssign::verify( translationUnit ); // must happen before autogen, because autogen examines existing ctor/dtors 274 274 ReturnChecker::checkFunctionReturns( translationUnit ); -
src/benchmark/Makefile.am
r136ccd7 r4ee36bf0 27 27 28 28 noinst_PROGRAMS = 29 30 all : ctxswitch$(EXEEXT) mutex$(EXEEXT) signal$(EXEEXT) waitfor$(EXEEXT) creation$(EXEEXT) 29 31 30 32 bench$(EXEEXT) : … … 63 65 ctxswitch-pthread$(EXEEXT): 64 66 @BACKEND_CC@ ctxswitch/pthreads.c -DBENCH_N=50000000 -I. -lrt -pthread ${AM_CFLAGS} ${CFLAGS} ${ccflags} 65 66 ## =========================================================================================================67 creation$(EXEEXT) :\68 creation-pthread.run \69 creation-cfa_coroutine.run \70 creation-cfa_thread.run \71 creation-upp_coroutine.run \72 creation-upp_thread.run73 74 creation-cfa_coroutine$(EXEEXT):75 ${CC} creation/cfa_cor.c -DBENCH_N=10000000 -I. -nodebug -lrt -quiet @CFA_FLAGS@ ${AM_CFLAGS} ${CFLAGS} ${ccflags}76 77 creation-cfa_thread$(EXEEXT):78 ${CC} creation/cfa_thrd.c -DBENCH_N=10000000 -I. -nodebug -lrt -quiet @CFA_FLAGS@ ${AM_CFLAGS} ${CFLAGS} ${ccflags}79 80 creation-upp_coroutine$(EXEEXT):81 u++ creation/upp_cor.cc -DBENCH_N=50000000 -I. -nodebug -lrt -quiet ${AM_CFLAGS} ${CFLAGS} ${ccflags}82 83 creation-upp_thread$(EXEEXT):84 u++ creation/upp_thrd.cc -DBENCH_N=50000000 -I. -nodebug -lrt -quiet ${AM_CFLAGS} ${CFLAGS} ${ccflags}85 86 creation-pthread$(EXEEXT):87 @BACKEND_CC@ creation/pthreads.c -DBENCH_N=250000 -I. -lrt -pthread ${AM_CFLAGS} ${CFLAGS} ${ccflags}88 67 89 68 ## ========================================================================================================= … … 153 132 154 133 ## ========================================================================================================= 134 creation$(EXEEXT) :\ 135 creation-pthread.run \ 136 creation-cfa_coroutine.run \ 137 creation-cfa_thread.run \ 138 creation-upp_coroutine.run \ 139 creation-upp_thread.run 140 141 creation-cfa_coroutine$(EXEEXT): 142 ${CC} creation/cfa_cor.c -DBENCH_N=10000000 -I. -nodebug -lrt -quiet @CFA_FLAGS@ ${AM_CFLAGS} ${CFLAGS} ${ccflags} 143 144 creation-cfa_thread$(EXEEXT): 145 ${CC} creation/cfa_thrd.c -DBENCH_N=10000000 -I. -nodebug -lrt -quiet @CFA_FLAGS@ ${AM_CFLAGS} ${CFLAGS} ${ccflags} 146 147 creation-upp_coroutine$(EXEEXT): 148 u++ creation/upp_cor.cc -DBENCH_N=50000000 -I. -nodebug -lrt -quiet ${AM_CFLAGS} ${CFLAGS} ${ccflags} 149 150 creation-upp_thread$(EXEEXT): 151 u++ creation/upp_thrd.cc -DBENCH_N=50000000 -I. -nodebug -lrt -quiet ${AM_CFLAGS} ${CFLAGS} ${ccflags} 152 153 creation-pthread$(EXEEXT): 154 @BACKEND_CC@ creation/pthreads.c -DBENCH_N=250000 -I. -lrt -pthread ${AM_CFLAGS} ${CFLAGS} ${ccflags} 155 156 ## ========================================================================================================= 155 157 156 158 %.run : %$(EXEEXT) ${REPEAT} -
src/benchmark/Makefile.in
r136ccd7 r4ee36bf0 444 444 .NOTPARALLEL: 445 445 446 all : ctxswitch$(EXEEXT) mutex$(EXEEXT) signal$(EXEEXT) waitfor$(EXEEXT) creation$(EXEEXT) 447 446 448 bench$(EXEEXT) : 447 449 @for ccflags in "-debug" "-nodebug"; do \ … … 479 481 @BACKEND_CC@ ctxswitch/pthreads.c -DBENCH_N=50000000 -I. -lrt -pthread ${AM_CFLAGS} ${CFLAGS} ${ccflags} 480 482 481 creation$(EXEEXT) :\482 creation-pthread.run \483 creation-cfa_coroutine.run \484 creation-cfa_thread.run \485 creation-upp_coroutine.run \486 creation-upp_thread.run487 488 creation-cfa_coroutine$(EXEEXT):489 ${CC} creation/cfa_cor.c -DBENCH_N=10000000 -I. -nodebug -lrt -quiet @CFA_FLAGS@ ${AM_CFLAGS} ${CFLAGS} ${ccflags}490 491 creation-cfa_thread$(EXEEXT):492 ${CC} creation/cfa_thrd.c -DBENCH_N=10000000 -I. -nodebug -lrt -quiet @CFA_FLAGS@ ${AM_CFLAGS} ${CFLAGS} ${ccflags}493 494 creation-upp_coroutine$(EXEEXT):495 u++ creation/upp_cor.cc -DBENCH_N=50000000 -I. -nodebug -lrt -quiet ${AM_CFLAGS} ${CFLAGS} ${ccflags}496 497 creation-upp_thread$(EXEEXT):498 u++ creation/upp_thrd.cc -DBENCH_N=50000000 -I. -nodebug -lrt -quiet ${AM_CFLAGS} ${CFLAGS} ${ccflags}499 500 creation-pthread$(EXEEXT):501 @BACKEND_CC@ creation/pthreads.c -DBENCH_N=250000 -I. -lrt -pthread ${AM_CFLAGS} ${CFLAGS} ${ccflags}502 503 483 mutex$(EXEEXT) :\ 504 484 mutex-function.run \ … … 562 542 waitfor-cfa4$(EXEEXT): 563 543 ${CC} schedext/cfa4.c -DBENCH_N=500000 -I. -nodebug -lrt -quiet @CFA_FLAGS@ ${AM_CFLAGS} ${CFLAGS} ${ccflags} 544 545 creation$(EXEEXT) :\ 546 creation-pthread.run \ 547 creation-cfa_coroutine.run \ 548 creation-cfa_thread.run \ 549 creation-upp_coroutine.run \ 550 creation-upp_thread.run 551 552 creation-cfa_coroutine$(EXEEXT): 553 ${CC} creation/cfa_cor.c -DBENCH_N=10000000 -I. -nodebug -lrt -quiet @CFA_FLAGS@ ${AM_CFLAGS} ${CFLAGS} ${ccflags} 554 555 creation-cfa_thread$(EXEEXT): 556 ${CC} creation/cfa_thrd.c -DBENCH_N=10000000 -I. -nodebug -lrt -quiet @CFA_FLAGS@ ${AM_CFLAGS} ${CFLAGS} ${ccflags} 557 558 creation-upp_coroutine$(EXEEXT): 559 u++ creation/upp_cor.cc -DBENCH_N=50000000 -I. -nodebug -lrt -quiet ${AM_CFLAGS} ${CFLAGS} ${ccflags} 560 561 creation-upp_thread$(EXEEXT): 562 u++ creation/upp_thrd.cc -DBENCH_N=50000000 -I. -nodebug -lrt -quiet ${AM_CFLAGS} ${CFLAGS} ${ccflags} 563 564 creation-pthread$(EXEEXT): 565 @BACKEND_CC@ creation/pthreads.c -DBENCH_N=250000 -I. -lrt -pthread ${AM_CFLAGS} ${CFLAGS} ${ccflags} 564 566 565 567 %.run : %$(EXEEXT) ${REPEAT} -
src/benchmark/csv-data.c
r136ccd7 r4ee36bf0 111 111 StartTime = Time(); 112 112 for( int i = 0;; i++ ) { 113 signal( &cond1a);114 if( i > N ) break; 115 wait( &cond1b);113 signal(cond1a); 114 if( i > N ) break; 115 wait(cond1b); 116 116 } 117 117 EndTime = Time(); … … 122 122 void side1B( mon_t & mutex a ) { 123 123 for( int i = 0;; i++ ) { 124 signal( &cond1b);125 if( i > N ) break; 126 wait( &cond1a);124 signal(cond1b); 125 if( i > N ) break; 126 wait(cond1a); 127 127 } 128 128 } … … 159 159 StartTime = Time(); 160 160 for( int i = 0;; i++ ) { 161 signal( &cond2a);162 if( i > N ) break; 163 wait( &cond2b);161 signal(cond2a); 162 if( i > N ) break; 163 wait(cond2b); 164 164 } 165 165 EndTime = Time(); … … 170 170 void side2B( mon_t & mutex a, mon_t & mutex b ) { 171 171 for( int i = 0;; i++ ) { 172 signal( &cond2b);173 if( i > N ) break; 174 wait( &cond2a);172 signal(cond2b); 173 if( i > N ) break; 174 wait(cond2a); 175 175 } 176 176 } -
src/benchmark/schedint/cfa1.c
r136ccd7 r4ee36bf0 15 15 16 16 void __attribute__((noinline)) call( M & mutex a1 ) { 17 signal( &c);17 signal(c); 18 18 } 19 19 … … 22 22 BENCH( 23 23 for (size_t i = 0; i < n; i++) { 24 wait( &c);24 wait(c); 25 25 }, 26 26 result -
src/benchmark/schedint/cfa2.c
r136ccd7 r4ee36bf0 15 15 16 16 void __attribute__((noinline)) call( M & mutex a1, M & mutex a2 ) { 17 signal( &c);17 signal(c); 18 18 } 19 19 … … 22 22 BENCH( 23 23 for (size_t i = 0; i < n; i++) { 24 wait( &c);24 wait(c); 25 25 }, 26 26 result -
src/benchmark/schedint/cfa4.c
r136ccd7 r4ee36bf0 15 15 16 16 void __attribute__((noinline)) call( M & mutex a1, M & mutex a2, M & mutex a3, M & mutex a4 ) { 17 signal( &c);17 signal(c); 18 18 } 19 19 … … 22 22 BENCH( 23 23 for (size_t i = 0; i < n; i++) { 24 wait( &c);24 wait(c); 25 25 }, 26 26 result -
src/libcfa/concurrency/coroutine.c
r136ccd7 r4ee36bf0 156 156 this->limit = (char *)libCeiling( (unsigned long)this->storage, 16 ); // minimum alignment 157 157 } // if 158 assertf( this->size >= MinStackSize, "Stack size % d provides less than minimum of %d bytes for a stack.", this->size, MinStackSize );158 assertf( this->size >= MinStackSize, "Stack size %zd provides less than minimum of %d bytes for a stack.", this->size, MinStackSize ); 159 159 160 160 this->base = (char *)this->limit + this->size; -
src/libcfa/concurrency/invoke.h
r136ccd7 r4ee36bf0 25 25 #define _INVOKE_H_ 26 26 27 #define unlikely(x) __builtin_expect(!!(x), 0) 28 #define thread_local _Thread_local 29 30 typedef void (*fptr_t)(); 31 32 struct spinlock { 33 volatile int lock; 34 #ifdef __CFA_DEBUG__ 35 const char * prev_name; 36 void* prev_thrd; 37 #endif 38 }; 39 40 struct __thread_queue_t { 41 struct thread_desc * head; 42 struct thread_desc ** tail; 43 }; 44 45 struct __condition_stack_t { 46 struct __condition_criterion_t * top; 47 }; 48 49 #ifdef __CFORALL__ 50 extern "Cforall" { 51 void ?{}( struct __thread_queue_t & ); 52 void append( struct __thread_queue_t *, struct thread_desc * ); 53 struct thread_desc * pop_head( struct __thread_queue_t * ); 54 struct thread_desc * remove( struct __thread_queue_t *, struct thread_desc ** ); 55 56 void ?{}( struct __condition_stack_t & ); 57 void push( struct __condition_stack_t *, struct __condition_criterion_t * ); 58 struct __condition_criterion_t * pop( struct __condition_stack_t * ); 59 60 void ?{}(spinlock & this); 61 void ^?{}(spinlock & this); 62 } 63 #endif 64 65 struct coStack_t { 66 unsigned int size; // size of stack 67 void *storage; // pointer to stack 68 void *limit; // stack grows towards stack limit 69 void *base; // base of stack 70 void *context; // address of cfa_context_t 71 void *top; // address of top of storage 72 bool userStack; // whether or not the user allocated the stack 73 }; 74 75 enum coroutine_state { Halted, Start, Inactive, Active, Primed }; 76 77 struct coroutine_desc { 78 struct coStack_t stack; // stack information of the coroutine 79 const char *name; // textual name for coroutine/task, initialized by uC++ generated code 80 int errno_; // copy of global UNIX variable errno 81 enum coroutine_state state; // current execution status for coroutine 82 struct coroutine_desc * starter; // first coroutine to resume this one 83 struct coroutine_desc * last; // last coroutine to resume this one 84 }; 85 86 struct __waitfor_mask_t { 87 short * accepted; // the index of the accepted function, -1 if none 88 struct __acceptable_t * clauses; // list of acceptable functions, null if any 89 short size; // number of acceptable functions 90 }; 91 92 struct monitor_desc { 93 struct spinlock lock; // spinlock to protect internal data 94 struct thread_desc * owner; // current owner of the monitor 95 struct __thread_queue_t entry_queue; // queue of threads that are blocked waiting for the monitor 96 struct __condition_stack_t signal_stack; // stack of conditions to run next once we exit the monitor 97 unsigned int recursion; // monitor routines can be called recursively, we need to keep track of that 98 struct __waitfor_mask_t mask; // mask used to know if some thread is waiting for something while holding the monitor 99 struct __condition_node_t * dtor_node; // node used to signal the dtor in a waitfor dtor 100 }; 101 102 struct __monitor_group_t { 103 struct monitor_desc ** list; // currently held monitors 104 short size; // number of currently held monitors 105 fptr_t func; // last function that acquired monitors 106 }; 107 108 struct thread_desc { 109 // Core threading fields 110 struct coroutine_desc self_cor; // coroutine body used to store context 111 struct monitor_desc self_mon; // monitor body used for mutual exclusion 112 struct monitor_desc * self_mon_p; // pointer to monitor with sufficient lifetime for current monitors 113 struct __monitor_group_t monitors; // monitors currently held by this thread 114 115 // Link lists fields 116 struct thread_desc * next; // instrusive 
link field for threads 117 118 27 #define unlikely(x) __builtin_expect(!!(x), 0) 28 #define thread_local _Thread_local 29 30 typedef void (*fptr_t)(); 31 typedef int_fast16_t __lock_size_t; 32 33 struct spinlock { 34 volatile int lock; 35 #ifdef __CFA_DEBUG__ 36 const char * prev_name; 37 void* prev_thrd; 38 #endif 39 }; 40 41 struct __thread_queue_t { 42 struct thread_desc * head; 43 struct thread_desc ** tail; 44 }; 45 46 struct __condition_stack_t { 47 struct __condition_criterion_t * top; 48 }; 49 50 #ifdef __CFORALL__ 51 extern "Cforall" { 52 void ?{}( struct __thread_queue_t & ); 53 void append( struct __thread_queue_t &, struct thread_desc * ); 54 struct thread_desc * pop_head( struct __thread_queue_t & ); 55 struct thread_desc * remove( struct __thread_queue_t &, struct thread_desc ** ); 56 57 void ?{}( struct __condition_stack_t & ); 58 void push( struct __condition_stack_t &, struct __condition_criterion_t * ); 59 struct __condition_criterion_t * pop( struct __condition_stack_t & ); 60 61 void ?{}(spinlock & this); 62 void ^?{}(spinlock & this); 63 } 64 #endif 65 66 struct coStack_t { 67 // size of stack 68 size_t size; 69 70 // pointer to stack 71 void *storage; 72 73 // stack grows towards stack limit 74 void *limit; 75 76 // base of stack 77 void *base; 78 79 // address of cfa_context_t 80 void *context; 81 82 // address of top of storage 83 void *top; 84 85 // whether or not the user allocated the stack 86 bool userStack; 87 }; 88 89 enum coroutine_state { Halted, Start, Inactive, Active, Primed }; 90 91 struct coroutine_desc { 92 // stack information of the coroutine 93 struct coStack_t stack; 94 95 // textual name for coroutine/task, initialized by uC++ generated code 96 const char *name; 97 98 // copy of global UNIX variable errno 99 int errno_; 100 101 // current execution status for coroutine 102 enum coroutine_state state; 103 104 // first coroutine to resume this one 105 struct coroutine_desc * starter; 106 107 // last coroutine to resume this one 108 struct coroutine_desc * last; 109 }; 110 111 struct __waitfor_mask_t { 112 // the index of the accepted function, -1 if none 113 short * accepted; 114 115 // list of acceptable functions, null if any 116 struct __acceptable_t * clauses; 117 118 // number of acceptable functions 119 __lock_size_t size; 120 }; 121 122 struct monitor_desc { 123 // spinlock to protect internal data 124 struct spinlock lock; 125 126 // current owner of the monitor 127 struct thread_desc * owner; 128 129 // queue of threads that are blocked waiting for the monitor 130 struct __thread_queue_t entry_queue; 131 132 // stack of conditions to run next once we exit the monitor 133 struct __condition_stack_t signal_stack; 134 135 // monitor routines can be called recursively, we need to keep track of that 136 unsigned int recursion; 137 138 // mask used to know if some thread is waiting for something while holding the monitor 139 struct __waitfor_mask_t mask; 140 141 // node used to signal the dtor in a waitfor dtor 142 struct __condition_node_t * dtor_node; 143 }; 144 145 struct __monitor_group_t { 146 // currently held monitors 147 struct monitor_desc ** list; 148 149 // number of currently held monitors 150 __lock_size_t size; 151 152 // last function that acquired monitors 153 fptr_t func; 154 }; 155 156 struct thread_desc { 157 // Core threading fields 158 // coroutine body used to store context 159 struct coroutine_desc self_cor; 160 161 // monitor body used for mutual exclusion 162 struct monitor_desc self_mon; 163 164 // pointer to monitor 
with sufficient lifetime for current monitors 165 struct monitor_desc * self_mon_p; 166 167 // monitors currently held by this thread 168 struct __monitor_group_t monitors; 169 170 // Link lists fields 171 // instrusive link field for threads 172 struct thread_desc * next; 119 173 }; 120 174 121 175 #ifdef __CFORALL__ 122 176 extern "Cforall" { 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 177 static inline monitor_desc * ?[?]( const __monitor_group_t & this, ptrdiff_t index ) { 178 return this.list[index]; 179 } 180 181 static inline bool ?==?( const __monitor_group_t & lhs, const __monitor_group_t & rhs ) { 182 if( (lhs.list != 0) != (rhs.list != 0) ) return false; 183 if( lhs.size != rhs.size ) return false; 184 if( lhs.func != rhs.func ) return false; 185 186 // Check that all the monitors match 187 for( int i = 0; i < lhs.size; i++ ) { 188 // If not a match, check next function 189 if( lhs[i] != rhs[i] ) return false; 190 } 191 192 return true; 193 } 194 } 195 #endif 142 196 143 197 #endif //_INVOKE_H_ … … 146 200 #define _INVOKE_PRIVATE_H_ 147 201 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 202 struct machine_context_t { 203 void *SP; 204 void *FP; 205 void *PC; 206 }; 207 208 // assembler routines that performs the context switch 209 extern void CtxInvokeStub( void ); 210 void CtxSwitch( void * from, void * to ) asm ("CtxSwitch"); 211 212 #if defined( __x86_64__ ) 213 #define CtxGet( ctx ) __asm__ ( \ 214 "movq %%rsp,%0\n" \ 215 "movq %%rbp,%1\n" \ 216 : "=rm" (ctx.SP), "=rm" (ctx.FP) ) 217 #elif defined( __i386__ ) 218 #define CtxGet( ctx ) __asm__ ( \ 219 "movl %%esp,%0\n" \ 220 "movl %%ebp,%1\n" \ 221 : "=rm" (ctx.SP), "=rm" (ctx.FP) ) 222 #endif 169 223 170 224 #endif //_INVOKE_PRIVATE_H_ -
src/libcfa/concurrency/kernel
r136ccd7 r4ee36bf0 26 26 //----------------------------------------------------------------------------- 27 27 // Locks 28 void lock ( spinlock * DEBUG_CTX_PARAM2 ); // Lock the spinlock, spin if already acquired 29 void lock_yield( spinlock * DEBUG_CTX_PARAM2 ); // Lock the spinlock, yield repeatedly if already acquired 30 bool try_lock ( spinlock * DEBUG_CTX_PARAM2 ); // Lock the spinlock, return false if already acquired 31 void unlock ( spinlock * ); // Unlock the spinlock 28 // Lock the spinlock, spin if already acquired 29 void lock ( spinlock * DEBUG_CTX_PARAM2 ); 30 31 // Lock the spinlock, yield repeatedly if already acquired 32 void lock_yield( spinlock * DEBUG_CTX_PARAM2 ); 33 34 // Lock the spinlock, return false if already acquired 35 bool try_lock ( spinlock * DEBUG_CTX_PARAM2 ); 36 37 // Unlock the spinlock 38 void unlock ( spinlock * ); 32 39 33 40 struct semaphore { … … 39 46 void ?{}(semaphore & this, int count = 1); 40 47 void ^?{}(semaphore & this); 41 void P(semaphore *this);42 void V(semaphore *this);48 void P (semaphore & this); 49 void V (semaphore & this); 43 50 44 51 … … 46 53 // Cluster 47 54 struct cluster { 48 spinlock ready_queue_lock; // Ready queue locks 49 __thread_queue_t ready_queue; // Ready queue for threads 50 unsigned long long int preemption; // Preemption rate on this cluster 55 // Ready queue locks 56 spinlock ready_queue_lock; 57 58 // Ready queue for threads 59 __thread_queue_t ready_queue; 60 61 // Preemption rate on this cluster 62 unsigned long long int preemption; 51 63 }; 52 64 53 void ?{} (cluster & this);65 void ?{} (cluster & this); 54 66 void ^?{}(cluster & this); 55 67 … … 79 91 struct processor { 80 92 // Main state 81 struct processorCtx_t * runner; // Coroutine ctx who does keeps the state of the processor 82 cluster * cltr; // Cluster from which to get threads 83 pthread_t kernel_thread; // Handle to pthreads 93 // Coroutine ctx who does keeps the state of the processor 94 struct processorCtx_t * runner; 95 96 // Cluster from which to get threads 97 cluster * cltr; 98 99 // Handle to pthreads 100 pthread_t kernel_thread; 84 101 85 102 // Termination 86 volatile bool do_terminate; // Set to true to notify the processor should terminate 87 semaphore terminated; // Termination synchronisation 103 // Set to true to notify the processor should terminate 104 volatile bool do_terminate; 105 106 // Termination synchronisation 107 semaphore terminated; 88 108 89 109 // RunThread data 90 struct FinishAction finish; // Action to do after a thread is ran 110 // Action to do after a thread is ran 111 struct FinishAction finish; 91 112 92 113 // Preemption data 93 struct alarm_node_t * preemption_alarm; // Node which is added in the discrete event simulaiton 94 bool pending_preemption; // If true, a preemption was triggered in an unsafe region, the processor must preempt as soon as possible 114 // Node which is added in the discrete event simulaiton 115 struct alarm_node_t * preemption_alarm; 116 117 // If true, a preemption was triggered in an unsafe region, the processor must preempt as soon as possible 118 bool pending_preemption; 95 119 96 120 #ifdef __CFA_DEBUG__ 97 char * last_enable; // Last function to enable preemption on this processor 121 // Last function to enable preemption on this processor 122 char * last_enable; 98 123 #endif 99 124 }; 100 125 101 void ?{}(processor & this);102 void ?{}(processor & this, cluster * cltr);126 void ?{}(processor & this); 127 void ?{}(processor & this, cluster * cltr); 103 128 void ^?{}(processor & 
this); 104 129 -
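The semaphore interface declared above now passes the semaphore by reference (P(semaphore & this) / V(semaphore & this)) rather than by pointer. A minimal usage sketch under that assumption; the worker routine and the initial count are illustrative only:

	semaphore printer = { 1 };          // counting semaphore initialised to 1 (see ?{} above)

	void print_report() {
		P( printer );               // acquire: blocks this thread if the count goes negative
		// ... exclusive access to the shared printer ...
		V( printer );               // release: wakes one blocked thread, if any
	}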
src/libcfa/concurrency/kernel.c
r136ccd7 r4ee36bf0 158 158 LIB_DEBUG_PRINT_SAFE("Kernel : core %p signaling termination\n", &this); 159 159 this.do_terminate = true; 160 P( &this.terminated );160 P( this.terminated ); 161 161 pthread_join( this.kernel_thread, NULL ); 162 162 } … … 216 216 } 217 217 218 V( &this->terminated );218 V( this->terminated ); 219 219 220 220 LIB_DEBUG_PRINT_SAFE("Kernel : core %p terminated\n", this); … … 335 335 336 336 lock( &this_processor->cltr->ready_queue_lock DEBUG_CTX2 ); 337 append( &this_processor->cltr->ready_queue, thrd );337 append( this_processor->cltr->ready_queue, thrd ); 338 338 unlock( &this_processor->cltr->ready_queue_lock ); 339 339 … … 344 344 verify( disable_preempt_count > 0 ); 345 345 lock( &this->ready_queue_lock DEBUG_CTX2 ); 346 thread_desc * head = pop_head( &this->ready_queue );346 thread_desc * head = pop_head( this->ready_queue ); 347 347 unlock( &this->ready_queue_lock ); 348 348 verify( disable_preempt_count > 0 ); … … 398 398 } 399 399 400 void BlockInternal(spinlock * * locks, unsigned short count) {400 void BlockInternal(spinlock * locks [], unsigned short count) { 401 401 disable_interrupts(); 402 402 this_processor->finish.action_code = Release_Multi; … … 411 411 } 412 412 413 void BlockInternal(spinlock * * locks, unsigned short lock_count, thread_desc ** thrds, unsigned short thrd_count) {413 void BlockInternal(spinlock * locks [], unsigned short lock_count, thread_desc * thrds [], unsigned short thrd_count) { 414 414 disable_interrupts(); 415 415 this_processor->finish.action_code = Release_Multi_Schedule; … … 618 618 void ^?{}(semaphore & this) {} 619 619 620 void P(semaphore *this) {621 lock( &this ->lock DEBUG_CTX2 );622 this ->count -= 1;623 if ( this ->count < 0 ) {620 void P(semaphore & this) { 621 lock( &this.lock DEBUG_CTX2 ); 622 this.count -= 1; 623 if ( this.count < 0 ) { 624 624 // queue current task 625 append( &this->waiting, (thread_desc *)this_thread );625 append( this.waiting, (thread_desc *)this_thread ); 626 626 627 627 // atomically release spin lock and block 628 BlockInternal( &this ->lock );628 BlockInternal( &this.lock ); 629 629 } 630 630 else { 631 unlock( &this ->lock );632 } 633 } 634 635 void V(semaphore *this) {631 unlock( &this.lock ); 632 } 633 } 634 635 void V(semaphore & this) { 636 636 thread_desc * thrd = NULL; 637 lock( &this ->lock DEBUG_CTX2 );638 this ->count += 1;639 if ( this ->count <= 0 ) {637 lock( &this.lock DEBUG_CTX2 ); 638 this.count += 1; 639 if ( this.count <= 0 ) { 640 640 // remove task at head of waiting list 641 thrd = pop_head( &this->waiting );642 } 643 644 unlock( &this ->lock );641 thrd = pop_head( this.waiting ); 642 } 643 644 unlock( &this.lock ); 645 645 646 646 // make new owner … … 655 655 } 656 656 657 void append( __thread_queue_t *this, thread_desc * t ) {658 verify(this ->tail != NULL);659 *this ->tail = t;660 this ->tail = &t->next;661 } 662 663 thread_desc * pop_head( __thread_queue_t *this ) {664 thread_desc * head = this ->head;657 void append( __thread_queue_t & this, thread_desc * t ) { 658 verify(this.tail != NULL); 659 *this.tail = t; 660 this.tail = &t->next; 661 } 662 663 thread_desc * pop_head( __thread_queue_t & this ) { 664 thread_desc * head = this.head; 665 665 if( head ) { 666 this ->head = head->next;666 this.head = head->next; 667 667 if( !head->next ) { 668 this ->tail = &this->head;668 this.tail = &this.head; 669 669 } 670 670 head->next = NULL; … … 673 673 } 674 674 675 thread_desc * remove( __thread_queue_t *this, thread_desc ** it ) {675 thread_desc * remove( 
__thread_queue_t & this, thread_desc ** it ) { 676 676 thread_desc * thrd = *it; 677 677 verify( thrd ); … … 679 679 (*it) = thrd->next; 680 680 681 if( this ->tail == &thrd->next ) {682 this ->tail = it;681 if( this.tail == &thrd->next ) { 682 this.tail = it; 683 683 } 684 684 685 685 thrd->next = NULL; 686 686 687 verify( (this ->head == NULL) == (&this->head == this->tail) );688 verify( *this ->tail == NULL );687 verify( (this.head == NULL) == (&this.head == this.tail) ); 688 verify( *this.tail == NULL ); 689 689 return thrd; 690 690 } … … 694 694 } 695 695 696 void push( __condition_stack_t *this, __condition_criterion_t * t ) {696 void push( __condition_stack_t & this, __condition_criterion_t * t ) { 697 697 verify( !t->next ); 698 t->next = this ->top;699 this ->top = t;700 } 701 702 __condition_criterion_t * pop( __condition_stack_t *this ) {703 __condition_criterion_t * top = this ->top;698 t->next = this.top; 699 this.top = t; 700 } 701 702 __condition_criterion_t * pop( __condition_stack_t & this ) { 703 __condition_criterion_t * top = this.top; 704 704 if( top ) { 705 this ->top = top->next;705 this.top = top->next; 706 706 top->next = NULL; 707 707 } -
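In kernel.c the intrusive queue helpers (append, pop_head, remove) and the semaphore internals now take their first argument by reference, so call sites name the object directly (append( this.waiting, ... )) instead of passing its address. A short sketch of the resulting call pattern, assuming the __thread_queue_t and thread_desc types above; the scheduling context is illustrative:

	void requeue( __thread_queue_t & ready, thread_desc * thrd ) {
		append( ready, thrd );                  // enqueue at the tail of the intrusive list
		thread_desc * next = pop_head( ready ); // dequeue from the head; NULL when the queue is empty
		if( next ) {
			// hand `next` back to the processor loop (elided)
		}
	}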
src/libcfa/concurrency/kernel_private.h
r136ccd7 r4ee36bf0 48 48 void BlockInternal(thread_desc * thrd); 49 49 void BlockInternal(spinlock * lock, thread_desc * thrd); 50 void BlockInternal(spinlock * * locks, unsigned short count);51 void BlockInternal(spinlock * * locks, unsigned short count, thread_desc ** thrds, unsigned short thrd_count);50 void BlockInternal(spinlock * locks [], unsigned short count); 51 void BlockInternal(spinlock * locks [], unsigned short count, thread_desc * thrds [], unsigned short thrd_count); 52 52 void LeaveThread(spinlock * lock, thread_desc * thrd); 53 53 -
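The multi-lock overloads of BlockInternal now spell their parameters with array syntax (spinlock * locks []) rather than spinlock ** locks; callers still pass an array of held locks plus its length. A hedged sketch of such a call, with the two-lock array invented for illustration:

	void block_holding_two( spinlock & a, spinlock & b ) {
		spinlock * locks[2] = { &a, &b };   // locks currently held by this thread
		BlockInternal( locks, 2 );          // atomically release both locks and block the thread
	}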
src/libcfa/concurrency/monitor
r136ccd7 r4ee36bf0 39 39 } 40 40 41 // static inline int ?<?(monitor_desc* lhs, monitor_desc* rhs) {42 // return ((intptr_t)lhs) < ((intptr_t)rhs);43 // }44 45 41 struct monitor_guard_t { 46 42 monitor_desc ** m; 47 intcount;43 __lock_size_t count; 48 44 monitor_desc ** prev_mntrs; 49 unsigned shortprev_count;45 __lock_size_t prev_count; 50 46 fptr_t prev_func; 51 47 }; 52 48 53 void ?{}( monitor_guard_t & this, monitor_desc ** m, int count, void (*func)() );49 void ?{}( monitor_guard_t & this, monitor_desc ** m, __lock_size_t count, void (*func)() ); 54 50 void ^?{}( monitor_guard_t & this ); 55 51 … … 57 53 monitor_desc * m; 58 54 monitor_desc ** prev_mntrs; 59 unsigned shortprev_count;55 __lock_size_t prev_count; 60 56 fptr_t prev_func; 61 57 }; … … 74 70 75 71 struct __condition_criterion_t { 76 bool ready; //Whether or not the criterion is met (True if met) 77 monitor_desc * target; //The monitor this criterion concerns 78 struct __condition_node_t * owner; //The parent node to which this criterion belongs 79 __condition_criterion_t * next; //Intrusive linked list Next field 72 // Whether or not the criterion is met (True if met) 73 bool ready; 74 75 // The monitor this criterion concerns 76 monitor_desc * target; 77 78 // The parent node to which this criterion belongs 79 struct __condition_node_t * owner; 80 81 // Intrusive linked list Next field 82 __condition_criterion_t * next; 80 83 }; 81 84 82 85 struct __condition_node_t { 83 thread_desc * waiting_thread; //Thread that needs to be woken when all criteria are met 84 __condition_criterion_t * criteria; //Array of criteria (Criterions are contiguous in memory) 85 unsigned short count; //Number of criterions in the criteria 86 __condition_node_t * next; //Intrusive linked list Next field 87 uintptr_t user_info; //Custom user info accessible before signalling 86 // Thread that needs to be woken when all criteria are met 87 thread_desc * waiting_thread; 88 89 // Array of criteria (Criterions are contiguous in memory) 90 __condition_criterion_t * criteria; 91 92 // Number of criterions in the criteria 93 __lock_size_t count; 94 95 // Intrusive linked list Next field 96 __condition_node_t * next; 97 98 // Custom user info accessible before signalling 99 uintptr_t user_info; 88 100 }; 89 101 … … 93 105 }; 94 106 95 void ?{}(__condition_node_t & this, thread_desc * waiting_thread, unsigned short count, uintptr_t user_info );107 void ?{}(__condition_node_t & this, thread_desc * waiting_thread, __lock_size_t count, uintptr_t user_info ); 96 108 void ?{}(__condition_criterion_t & this ); 97 109 void ?{}(__condition_criterion_t & this, monitor_desc * target, __condition_node_t * owner ); 98 110 99 111 void ?{}( __condition_blocked_queue_t & ); 100 void append( __condition_blocked_queue_t *, __condition_node_t * );101 __condition_node_t * pop_head( __condition_blocked_queue_t *);112 void append( __condition_blocked_queue_t &, __condition_node_t * ); 113 __condition_node_t * pop_head( __condition_blocked_queue_t & ); 102 114 103 115 struct condition { 104 __condition_blocked_queue_t blocked; //Link list which contains the blocked threads as-well as the information needed to unblock them 105 monitor_desc ** monitors; //Array of monitor pointers (Monitors are NOT contiguous in memory) 106 unsigned short monitor_count; //Number of monitors in the array 116 // Link list which contains the blocked threads as-well as the information needed to unblock them 117 __condition_blocked_queue_t blocked; 118 119 // Array of monitor pointers (Monitors are 
NOT contiguous in memory) 120 monitor_desc ** monitors; 121 122 // Number of monitors in the array 123 __lock_size_t monitor_count; 107 124 }; 108 125 … … 116 133 } 117 134 118 void wait( condition *this, uintptr_t user_info = 0 );119 bool signal( condition *this );120 bool signal_block( condition *this );121 static inline bool is_empty ( condition * this ) { return !this->blocked.head; }122 uintptr_t front( condition *this );135 void wait ( condition & this, uintptr_t user_info = 0 ); 136 bool signal ( condition & this ); 137 bool signal_block( condition & this ); 138 static inline bool is_empty ( condition & this ) { return !this.blocked.head; } 139 uintptr_t front ( condition & this ); 123 140 124 141 //----------------------------------------------------------------------------- -
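The condition interface (wait, signal, signal_block, is_empty, front) likewise now takes the condition by reference. A sketch of the usual internal-scheduling pattern with these signatures; the bounded-buffer monitor is invented for illustration and assumes the monitor/mutex syntax used elsewhere in this proposal:

	monitor buffer {
		condition not_empty;
		int elem;
		bool full;
	};

	void put( buffer & mutex b, int v ) {
		b.elem = v;
		b.full = true;
		signal( b.not_empty );              // wake one waiting consumer, if any
	}

	int take( buffer & mutex b ) {
		if( !b.full ) wait( b.not_empty );  // release the monitor and block until signalled
		b.full = false;
		return b.elem;
	}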
src/libcfa/concurrency/monitor.c
r136ccd7 r4ee36bf0 17 17 18 18 #include <stdlib> 19 #include <inttypes.h> 19 20 20 21 #include "libhdr.h" … … 26 27 // Forward declarations 27 28 static inline void set_owner ( monitor_desc * this, thread_desc * owner ); 28 static inline void set_owner ( monitor_desc * * storage, short count, thread_desc * owner );29 static inline void set_mask ( monitor_desc * * storage, short count, const __waitfor_mask_t & mask );29 static inline void set_owner ( monitor_desc * storage [], __lock_size_t count, thread_desc * owner ); 30 static inline void set_mask ( monitor_desc * storage [], __lock_size_t count, const __waitfor_mask_t & mask ); 30 31 static inline void reset_mask( monitor_desc * this ); 31 32 … … 33 34 static inline bool is_accepted( monitor_desc * this, const __monitor_group_t & monitors ); 34 35 35 static inline void lock_all ( spinlock ** locks, unsigned short count );36 static inline void lock_all ( monitor_desc ** source, spinlock ** /*out*/ locks, unsigned short count );37 static inline void unlock_all( spinlock * * locks, unsigned short count );38 static inline void unlock_all( monitor_desc * * locks, unsigned short count );39 40 static inline void save ( monitor_desc * * ctx, short count, spinlock ** locks, unsigned int * /*out*/ recursions, __waitfor_mask_t * /*out*/ masks);41 static inline void restore( monitor_desc * * ctx, short count, spinlock ** locks, unsigned int * /*in */ recursions, __waitfor_mask_t * /*in */ masks);42 43 static inline void init ( int count, monitor_desc ** monitors, __condition_node_t * waiter, __condition_criterion_t * criteria);44 static inline void init_push( int count, monitor_desc ** monitors, __condition_node_t * waiter, __condition_criterion_t * criteria);36 static inline void lock_all ( spinlock * locks [], __lock_size_t count ); 37 static inline void lock_all ( monitor_desc * source [], spinlock * /*out*/ locks [], __lock_size_t count ); 38 static inline void unlock_all( spinlock * locks [], __lock_size_t count ); 39 static inline void unlock_all( monitor_desc * locks [], __lock_size_t count ); 40 41 static inline void save ( monitor_desc * ctx [], __lock_size_t count, spinlock * locks [], unsigned int /*out*/ recursions [], __waitfor_mask_t /*out*/ masks [] ); 42 static inline void restore( monitor_desc * ctx [], __lock_size_t count, spinlock * locks [], unsigned int /*in */ recursions [], __waitfor_mask_t /*in */ masks [] ); 43 44 static inline void init ( __lock_size_t count, monitor_desc * monitors [], __condition_node_t & waiter, __condition_criterion_t criteria [] ); 45 static inline void init_push( __lock_size_t count, monitor_desc * monitors [], __condition_node_t & waiter, __condition_criterion_t criteria [] ); 45 46 46 47 static inline thread_desc * check_condition ( __condition_criterion_t * ); 47 static inline void brand_condition ( condition *);48 static inline [thread_desc *, int] search_entry_queue( const __waitfor_mask_t &, monitor_desc * * monitors, int count );48 static inline void brand_condition ( condition & ); 49 static inline [thread_desc *, int] search_entry_queue( const __waitfor_mask_t &, monitor_desc * monitors [], __lock_size_t count ); 49 50 50 51 forall(dtype T | sized( T )) 51 static inline short insert_unique( T ** array, short & size, T * val );52 static inline short count_max ( const __waitfor_mask_t & mask );53 static inline short aggregate ( monitor_desc ** storage, const __waitfor_mask_t & mask );52 static inline __lock_size_t insert_unique( T * array [], __lock_size_t & size, T * val ); 53 static inline 
__lock_size_t count_max ( const __waitfor_mask_t & mask ); 54 static inline __lock_size_t aggregate ( monitor_desc * storage [], const __waitfor_mask_t & mask ); 54 55 55 56 //----------------------------------------------------------------------------- … … 58 59 __condition_node_t waiter = { thrd, count, user_info }; /* Create the node specific to this wait operation */ \ 59 60 __condition_criterion_t criteria[count]; /* Create the creteria this wait operation needs to wake up */ \ 60 init( count, monitors, &waiter, criteria );/* Link everything together */ \61 init( count, monitors, waiter, criteria ); /* Link everything together */ \ 61 62 62 63 #define wait_ctx_primed(thrd, user_info) /* Create the necessary information to use the signaller stack */ \ 63 64 __condition_node_t waiter = { thrd, count, user_info }; /* Create the node specific to this wait operation */ \ 64 65 __condition_criterion_t criteria[count]; /* Create the creteria this wait operation needs to wake up */ \ 65 init_push( count, monitors, &waiter, criteria );/* Link everything together and push it to the AS-Stack */ \66 init_push( count, monitors, waiter, criteria ); /* Link everything together and push it to the AS-Stack */ \ 66 67 67 68 #define monitor_ctx( mons, cnt ) /* Define that create the necessary struct for internal/external scheduling operations */ \ 68 69 monitor_desc ** monitors = mons; /* Save the targeted monitors */ \ 69 unsigned short count = cnt;/* Save the count to a local variable */ \70 __lock_size_t count = cnt; /* Save the count to a local variable */ \ 70 71 unsigned int recursions[ count ]; /* Save the current recursion levels to restore them later */ \ 71 __waitfor_mask_t masks [ count ];/* Save the current waitfor masks to restore them later */ \72 __waitfor_mask_t masks [ count ]; /* Save the current waitfor masks to restore them later */ \ 72 73 spinlock * locks [ count ]; /* We need to pass-in an array of locks to BlockInternal */ \ 73 74 … … 114 115 115 116 // Some one else has the monitor, wait in line for it 116 append( &this->entry_queue, thrd );117 append( this->entry_queue, thrd ); 117 118 BlockInternal( &this->lock ); 118 119 … … 153 154 } 154 155 155 int count = 1;156 __lock_size_t count = 1; 156 157 monitor_desc ** monitors = &this; 157 158 __monitor_group_t group = { &this, 1, func }; … … 160 161 161 162 // Wake the thread that is waiting for this 162 __condition_criterion_t * urgent = pop( &this->signal_stack );163 __condition_criterion_t * urgent = pop( this->signal_stack ); 163 164 verify( urgent ); 164 165 … … 182 183 183 184 // Some one else has the monitor, wait in line for it 184 append( &this->entry_queue, thrd );185 append( this->entry_queue, thrd ); 185 186 BlockInternal( &this->lock ); 186 187 … … 272 273 // relies on the monitor array being sorted 273 274 static inline void enter( __monitor_group_t monitors ) { 274 for( int i = 0; i < monitors.size; i++) {275 for( __lock_size_t i = 0; i < monitors.size; i++) { 275 276 __enter_monitor_desc( monitors.list[i], monitors ); 276 277 } … … 279 280 // Leave multiple monitor 280 281 // relies on the monitor array being sorted 281 static inline void leave(monitor_desc * * monitors, int count) {282 for( int i = count - 1; i >= 0; i--) {282 static inline void leave(monitor_desc * monitors [], __lock_size_t count) { 283 for( __lock_size_t i = count - 1; i >= 0; i--) { 283 284 __leave_monitor_desc( monitors[i] ); 284 285 } … … 287 288 // Ctor for monitor guard 288 289 // Sorts monitors before entering 289 void ?{}( 
monitor_guard_t & this, monitor_desc * * m, int count, fptr_t func ) {290 void ?{}( monitor_guard_t & this, monitor_desc * m [], __lock_size_t count, fptr_t func ) { 290 291 // Store current array 291 292 this.m = m; … … 296 297 297 298 // Save previous thread context 298 this.prev_mntrs = this_thread->monitors.list; 299 this.prev_count = this_thread->monitors.size; 300 this.prev_func = this_thread->monitors.func; 299 this.[prev_mntrs, prev_count, prev_func] = this_thread->monitors.[list, size, func]; 301 300 302 301 // Update thread context (needed for conditions) 303 this_thread->monitors.list = m; 304 this_thread->monitors.size = count; 305 this_thread->monitors.func = func; 302 this_thread->monitors.[list, size, func] = [m, count, func]; 306 303 307 304 // LIB_DEBUG_PRINT_SAFE("MGUARD : enter %d\n", count); … … 325 322 326 323 // Restore thread context 327 this_thread->monitors.list = this.prev_mntrs; 328 this_thread->monitors.size = this.prev_count; 329 this_thread->monitors.func = this.prev_func; 330 } 331 324 this_thread->monitors.[list, size, func] = this.[prev_mntrs, prev_count, prev_func]; 325 } 332 326 333 327 // Ctor for monitor guard 334 328 // Sorts monitors before entering 335 void ?{}( monitor_dtor_guard_t & this, monitor_desc * * m, fptr_t func ) {329 void ?{}( monitor_dtor_guard_t & this, monitor_desc * m [], fptr_t func ) { 336 330 // Store current array 337 331 this.m = *m; 338 332 339 333 // Save previous thread context 340 this.prev_mntrs = this_thread->monitors.list; 341 this.prev_count = this_thread->monitors.size; 342 this.prev_func = this_thread->monitors.func; 334 this.[prev_mntrs, prev_count, prev_func] = this_thread->monitors.[list, size, func]; 343 335 344 336 // Update thread context (needed for conditions) 345 this_thread->monitors.list = m; 346 this_thread->monitors.size = 1; 347 this_thread->monitors.func = func; 337 this_thread->monitors.[list, size, func] = [m, 1, func]; 348 338 349 339 __enter_monitor_dtor( this.m, func ); 350 340 } 351 352 341 353 342 // Dtor for monitor guard … … 357 346 358 347 // Restore thread context 359 this_thread->monitors.list = this.prev_mntrs; 360 this_thread->monitors.size = this.prev_count; 361 this_thread->monitors.func = this.prev_func; 348 this_thread->monitors.[list, size, func] = this.[prev_mntrs, prev_count, prev_func]; 362 349 } 363 350 364 351 //----------------------------------------------------------------------------- 365 352 // Internal scheduling types 366 void ?{}(__condition_node_t & this, thread_desc * waiting_thread, unsigned short count, uintptr_t user_info ) {353 void ?{}(__condition_node_t & this, thread_desc * waiting_thread, __lock_size_t count, uintptr_t user_info ) { 367 354 this.waiting_thread = waiting_thread; 368 355 this.count = count; … … 378 365 } 379 366 380 void ?{}(__condition_criterion_t & this, monitor_desc * target, __condition_node_t *owner ) {367 void ?{}(__condition_criterion_t & this, monitor_desc * target, __condition_node_t & owner ) { 381 368 this.ready = false; 382 369 this.target = target; 383 this.owner = owner;370 this.owner = &owner; 384 371 this.next = NULL; 385 372 } … … 387 374 //----------------------------------------------------------------------------- 388 375 // Internal scheduling 389 void wait( condition *this, uintptr_t user_info = 0 ) {376 void wait( condition & this, uintptr_t user_info = 0 ) { 390 377 brand_condition( this ); 391 378 392 379 // Check that everything is as expected 393 assertf( this ->monitors != NULL, "Waiting with no monitors (%p)", 
this->monitors );394 verifyf( this ->monitor_count != 0, "Waiting with 0 monitors (%i)", this->monitor_count );395 verifyf( this ->monitor_count < 32u, "Excessive monitor count (%i)", this->monitor_count );380 assertf( this.monitors != NULL, "Waiting with no monitors (%p)", this.monitors ); 381 verifyf( this.monitor_count != 0, "Waiting with 0 monitors (%"PRIiFAST16")", this.monitor_count ); 382 verifyf( this.monitor_count < 32u, "Excessive monitor count (%"PRIiFAST16")", this.monitor_count ); 396 383 397 384 // Create storage for monitor context 398 monitor_ctx( this ->monitors, this->monitor_count );385 monitor_ctx( this.monitors, this.monitor_count ); 399 386 400 387 // Create the node specific to this wait operation … … 403 390 // Append the current wait operation to the ones already queued on the condition 404 391 // We don't need locks for that since conditions must always be waited on inside monitor mutual exclusion 405 append( &this->blocked, &waiter );392 append( this.blocked, &waiter ); 406 393 407 394 // Lock all monitors (aggregates the locks as well) … … 409 396 410 397 // Find the next thread(s) to run 411 short thread_count = 0;398 __lock_size_t thread_count = 0; 412 399 thread_desc * threads[ count ]; 413 400 __builtin_memset( threads, 0, sizeof( threads ) ); … … 417 404 418 405 // Remove any duplicate threads 419 for( int i = 0; i < count; i++) {406 for( __lock_size_t i = 0; i < count; i++) { 420 407 thread_desc * new_owner = next_thread( monitors[i] ); 421 408 insert_unique( threads, thread_count, new_owner ); … … 429 416 } 430 417 431 bool signal( condition *this ) {418 bool signal( condition & this ) { 432 419 if( is_empty( this ) ) { return false; } 433 420 434 421 //Check that everything is as expected 435 verify( this ->monitors );436 verify( this ->monitor_count != 0 );422 verify( this.monitors ); 423 verify( this.monitor_count != 0 ); 437 424 438 425 //Some more checking in debug 439 426 LIB_DEBUG_DO( 440 427 thread_desc * this_thrd = this_thread; 441 if ( this ->monitor_count != this_thrd->monitors.size ) {442 abortf( "Signal on condition %p made with different number of monitor(s), expected %i got %i", this, this->monitor_count, this_thrd->monitors.size );443 } 444 445 for(int i = 0; i < this ->monitor_count; i++) {446 if ( this ->monitors[i] != this_thrd->monitors.list[i] ) {447 abortf( "Signal on condition %p made with different monitor, expected %p got %i", this, this->monitors[i], this_thrd->monitors.list[i] );428 if ( this.monitor_count != this_thrd->monitors.size ) { 429 abortf( "Signal on condition %p made with different number of monitor(s), expected %i got %i", &this, this.monitor_count, this_thrd->monitors.size ); 430 } 431 432 for(int i = 0; i < this.monitor_count; i++) { 433 if ( this.monitors[i] != this_thrd->monitors.list[i] ) { 434 abortf( "Signal on condition %p made with different monitor, expected %p got %i", &this, this.monitors[i], this_thrd->monitors.list[i] ); 448 435 } 449 436 } 450 437 ); 451 438 452 unsigned short count = this->monitor_count;439 __lock_size_t count = this.monitor_count; 453 440 454 441 // Lock all monitors 455 lock_all( this ->monitors, NULL, count );442 lock_all( this.monitors, NULL, count ); 456 443 457 444 //Pop the head of the waiting queue 458 __condition_node_t * node = pop_head( &this->blocked );445 __condition_node_t * node = pop_head( this.blocked ); 459 446 460 447 //Add the thread to the proper AS stack … … 462 449 __condition_criterion_t * crit = &node->criteria[i]; 463 450 assert( !crit->ready ); 464 push( 
&crit->target->signal_stack, crit );451 push( crit->target->signal_stack, crit ); 465 452 } 466 453 467 454 //Release 468 unlock_all( this ->monitors, count );455 unlock_all( this.monitors, count ); 469 456 470 457 return true; 471 458 } 472 459 473 bool signal_block( condition *this ) {474 if( !this ->blocked.head ) { return false; }460 bool signal_block( condition & this ) { 461 if( !this.blocked.head ) { return false; } 475 462 476 463 //Check that everything is as expected 477 verifyf( this ->monitors != NULL, "Waiting with no monitors (%p)", this->monitors );478 verifyf( this ->monitor_count != 0, "Waiting with 0 monitors (%i)", this->monitor_count );464 verifyf( this.monitors != NULL, "Waiting with no monitors (%p)", this.monitors ); 465 verifyf( this.monitor_count != 0, "Waiting with 0 monitors (%"PRIiFAST16")", this.monitor_count ); 479 466 480 467 // Create storage for monitor context 481 monitor_ctx( this ->monitors, this->monitor_count );468 monitor_ctx( this.monitors, this.monitor_count ); 482 469 483 470 // Lock all monitors (aggregates the locks them as well) … … 491 478 492 479 //Find the thread to run 493 thread_desc * signallee = pop_head( &this->blocked )->waiting_thread;480 thread_desc * signallee = pop_head( this.blocked )->waiting_thread; 494 481 set_owner( monitors, count, signallee ); 495 482 496 LIB_DEBUG_PRINT_BUFFER_DECL( "Kernel : signal_block condition %p (s: %p)\n", this, signallee );483 LIB_DEBUG_PRINT_BUFFER_DECL( "Kernel : signal_block condition %p (s: %p)\n", &this, signallee ); 497 484 498 485 //Everything is ready to go to sleep … … 512 499 513 500 // Access the user_info of the thread waiting at the front of the queue 514 uintptr_t front( condition *this ) {501 uintptr_t front( condition & this ) { 515 502 verifyf( !is_empty(this), 516 503 "Attempt to access user data on an empty condition.\n" 517 504 "Possible cause is not checking if the condition is empty before reading stored data." 518 505 ); 519 return this ->blocked.head->user_info;506 return this.blocked.head->user_info; 520 507 } 521 508 … … 537 524 // This statment doesn't have a contiguous list of monitors... 538 525 // Create one! 
539 short max = count_max( mask );526 __lock_size_t max = count_max( mask ); 540 527 monitor_desc * mon_storage[max]; 541 528 __builtin_memset( mon_storage, 0, sizeof( mon_storage ) ); 542 short actual_count = aggregate( mon_storage, mask );543 544 LIB_DEBUG_PRINT_BUFFER_DECL( "Kernel : waitfor %d (s: %d, m: %d)\n", actual_count, mask.size, ( short)max);529 __lock_size_t actual_count = aggregate( mon_storage, mask ); 530 531 LIB_DEBUG_PRINT_BUFFER_DECL( "Kernel : waitfor %d (s: %d, m: %d)\n", actual_count, mask.size, (__lock_size_t)max); 545 532 546 533 if(actual_count == 0) return; … … 569 556 570 557 __condition_criterion_t * dtor_crit = mon2dtor->dtor_node->criteria; 571 push( &mon2dtor->signal_stack, dtor_crit );558 push( mon2dtor->signal_stack, dtor_crit ); 572 559 573 560 unlock_all( locks, count ); … … 629 616 set_mask( monitors, count, mask ); 630 617 631 for( int i = 0; i < count; i++) {618 for( __lock_size_t i = 0; i < count; i++) { 632 619 verify( monitors[i]->owner == this_thread ); 633 620 } … … 661 648 } 662 649 663 static inline void set_owner( monitor_desc * * monitors, short count, thread_desc * owner ) {650 static inline void set_owner( monitor_desc * monitors [], __lock_size_t count, thread_desc * owner ) { 664 651 monitors[0]->owner = owner; 665 652 monitors[0]->recursion = 1; 666 for( int i = 1; i < count; i++ ) {653 for( __lock_size_t i = 1; i < count; i++ ) { 667 654 monitors[i]->owner = owner; 668 655 monitors[i]->recursion = 0; … … 670 657 } 671 658 672 static inline void set_mask( monitor_desc * * storage, short count, const __waitfor_mask_t & mask ) {673 for( int i = 0; i < count; i++) {659 static inline void set_mask( monitor_desc * storage [], __lock_size_t count, const __waitfor_mask_t & mask ) { 660 for( __lock_size_t i = 0; i < count; i++) { 674 661 storage[i]->mask = mask; 675 662 } … … 685 672 //Check the signaller stack 686 673 LIB_DEBUG_PRINT_SAFE("Kernel : mon %p AS-stack top %p\n", this, this->signal_stack.top); 687 __condition_criterion_t * urgent = pop( &this->signal_stack );674 __condition_criterion_t * urgent = pop( this->signal_stack ); 688 675 if( urgent ) { 689 676 //The signaller stack is not empty, … … 697 684 // No signaller thread 698 685 // Get the next thread in the entry_queue 699 thread_desc * new_owner = pop_head( &this->entry_queue );686 thread_desc * new_owner = pop_head( this->entry_queue ); 700 687 set_owner( this, new_owner ); 701 688 … … 705 692 static inline bool is_accepted( monitor_desc * this, const __monitor_group_t & group ) { 706 693 __acceptable_t * it = this->mask.clauses; // Optim 707 int count = this->mask.size;694 __lock_size_t count = this->mask.size; 708 695 709 696 // Check if there are any acceptable functions … … 714 701 715 702 // For all acceptable functions check if this is the current function. 
716 for( short i = 0; i < count; i++, it++ ) {703 for( __lock_size_t i = 0; i < count; i++, it++ ) { 717 704 if( *it == group ) { 718 705 *this->mask.accepted = i; … … 725 712 } 726 713 727 static inline void init( int count, monitor_desc ** monitors, __condition_node_t * waiter, __condition_criterion_t * criteria) {728 for( int i = 0; i < count; i++) {714 static inline void init( __lock_size_t count, monitor_desc * monitors [], __condition_node_t & waiter, __condition_criterion_t criteria [] ) { 715 for( __lock_size_t i = 0; i < count; i++) { 729 716 (criteria[i]){ monitors[i], waiter }; 730 717 } 731 718 732 waiter ->criteria = criteria;733 } 734 735 static inline void init_push( int count, monitor_desc ** monitors, __condition_node_t * waiter, __condition_criterion_t * criteria) {736 for( int i = 0; i < count; i++) {719 waiter.criteria = criteria; 720 } 721 722 static inline void init_push( __lock_size_t count, monitor_desc * monitors [], __condition_node_t & waiter, __condition_criterion_t criteria [] ) { 723 for( __lock_size_t i = 0; i < count; i++) { 737 724 (criteria[i]){ monitors[i], waiter }; 738 725 LIB_DEBUG_PRINT_SAFE( "Kernel : target %p = %p\n", criteria[i].target, &criteria[i] ); 739 push( &criteria[i].target->signal_stack, &criteria[i] );740 } 741 742 waiter ->criteria = criteria;743 } 744 745 static inline void lock_all( spinlock * * locks, unsigned short count ) {746 for( int i = 0; i < count; i++ ) {726 push( criteria[i].target->signal_stack, &criteria[i] ); 727 } 728 729 waiter.criteria = criteria; 730 } 731 732 static inline void lock_all( spinlock * locks [], __lock_size_t count ) { 733 for( __lock_size_t i = 0; i < count; i++ ) { 747 734 lock_yield( locks[i] DEBUG_CTX2 ); 748 735 } 749 736 } 750 737 751 static inline void lock_all( monitor_desc * * source, spinlock ** /*out*/ locks, unsigned short count ) {752 for( int i = 0; i < count; i++ ) {738 static inline void lock_all( monitor_desc * source [], spinlock * /*out*/ locks [], __lock_size_t count ) { 739 for( __lock_size_t i = 0; i < count; i++ ) { 753 740 spinlock * l = &source[i]->lock; 754 741 lock_yield( l DEBUG_CTX2 ); … … 757 744 } 758 745 759 static inline void unlock_all( spinlock * * locks, unsigned short count ) {760 for( int i = 0; i < count; i++ ) {746 static inline void unlock_all( spinlock * locks [], __lock_size_t count ) { 747 for( __lock_size_t i = 0; i < count; i++ ) { 761 748 unlock( locks[i] ); 762 749 } 763 750 } 764 751 765 static inline void unlock_all( monitor_desc * * locks, unsigned short count ) {766 for( int i = 0; i < count; i++ ) {752 static inline void unlock_all( monitor_desc * locks [], __lock_size_t count ) { 753 for( __lock_size_t i = 0; i < count; i++ ) { 767 754 unlock( &locks[i]->lock ); 768 755 } 769 756 } 770 757 771 static inline void save( monitor_desc ** ctx, short count, __attribute((unused)) spinlock ** locks, unsigned int * /*out*/ recursions, __waitfor_mask_t * /*out*/ masks ) { 772 for( int i = 0; i < count; i++ ) { 758 static inline void save( 759 monitor_desc * ctx [], 760 __lock_size_t count, 761 __attribute((unused)) spinlock * locks [], 762 unsigned int /*out*/ recursions [], 763 __waitfor_mask_t /*out*/ masks [] 764 ) { 765 for( __lock_size_t i = 0; i < count; i++ ) { 773 766 recursions[i] = ctx[i]->recursion; 774 767 masks[i] = ctx[i]->mask; … … 776 769 } 777 770 778 static inline void restore( monitor_desc ** ctx, short count, spinlock ** locks, unsigned int * /*out*/ recursions, __waitfor_mask_t * /*out*/ masks ) { 771 static inline void restore( 772 
monitor_desc * ctx [], 773 __lock_size_t count, 774 spinlock * locks [], 775 unsigned int /*out*/ recursions [], 776 __waitfor_mask_t /*out*/ masks [] 777 ) { 779 778 lock_all( locks, count ); 780 for( int i = 0; i < count; i++ ) {779 for( __lock_size_t i = 0; i < count; i++ ) { 781 780 ctx[i]->recursion = recursions[i]; 782 781 ctx[i]->mask = masks[i]; … … 811 810 } 812 811 813 static inline void brand_condition( condition *this ) {812 static inline void brand_condition( condition & this ) { 814 813 thread_desc * thrd = this_thread; 815 if( !this ->monitors ) {814 if( !this.monitors ) { 816 815 // LIB_DEBUG_PRINT_SAFE("Branding\n"); 817 816 assertf( thrd->monitors.list != NULL, "No current monitor to brand condition %p", thrd->monitors.list ); 818 this ->monitor_count = thrd->monitors.size;819 820 this ->monitors = malloc( this->monitor_count * sizeof( *this->monitors ) );821 for( int i = 0; i < this ->monitor_count; i++ ) {822 this ->monitors[i] = thrd->monitors.list[i];823 } 824 } 825 } 826 827 static inline [thread_desc *, int] search_entry_queue( const __waitfor_mask_t & mask, monitor_desc * * monitors, int count ) {828 829 __thread_queue_t * entry_queue = &monitors[0]->entry_queue;817 this.monitor_count = thrd->monitors.size; 818 819 this.monitors = malloc( this.monitor_count * sizeof( *this.monitors ) ); 820 for( int i = 0; i < this.monitor_count; i++ ) { 821 this.monitors[i] = thrd->monitors.list[i]; 822 } 823 } 824 } 825 826 static inline [thread_desc *, int] search_entry_queue( const __waitfor_mask_t & mask, monitor_desc * monitors [], __lock_size_t count ) { 827 828 __thread_queue_t & entry_queue = monitors[0]->entry_queue; 830 829 831 830 // For each thread in the entry-queue 832 for( thread_desc ** thrd_it = &entry_queue ->head;831 for( thread_desc ** thrd_it = &entry_queue.head; 833 832 *thrd_it; 834 833 thrd_it = &(*thrd_it)->next … … 852 851 853 852 forall(dtype T | sized( T )) 854 static inline short insert_unique( T ** array, short & size, T * val ) {853 static inline __lock_size_t insert_unique( T * array [], __lock_size_t & size, T * val ) { 855 854 if( !val ) return size; 856 855 857 for( int i = 0; i <= size; i++) {856 for( __lock_size_t i = 0; i <= size; i++) { 858 857 if( array[i] == val ) return size; 859 858 } … … 864 863 } 865 864 866 static inline short count_max( const __waitfor_mask_t & mask ) {867 short max = 0;868 for( int i = 0; i < mask.size; i++ ) {865 static inline __lock_size_t count_max( const __waitfor_mask_t & mask ) { 866 __lock_size_t max = 0; 867 for( __lock_size_t i = 0; i < mask.size; i++ ) { 869 868 max += mask.clauses[i].size; 870 869 } … … 872 871 } 873 872 874 static inline short aggregate( monitor_desc ** storage, const __waitfor_mask_t & mask ) {875 short size = 0;876 for( int i = 0; i < mask.size; i++ ) {873 static inline __lock_size_t aggregate( monitor_desc * storage [], const __waitfor_mask_t & mask ) { 874 __lock_size_t size = 0; 875 for( __lock_size_t i = 0; i < mask.size; i++ ) { 877 876 __libcfa_small_sort( mask.clauses[i].list, mask.clauses[i].size ); 878 for( int j = 0; j < mask.clauses[i].size; j++) {877 for( __lock_size_t j = 0; j < mask.clauses[i].size; j++) { 879 878 insert_unique( storage, size, mask.clauses[i].list[j] ); 880 879 } … … 890 889 } 891 890 892 void append( __condition_blocked_queue_t *this, __condition_node_t * c ) {893 verify(this ->tail != NULL);894 *this ->tail = c;895 this ->tail = &c->next;896 } 897 898 __condition_node_t * pop_head( __condition_blocked_queue_t *this ) {899 __condition_node_t * head = this 
->head;891 void append( __condition_blocked_queue_t & this, __condition_node_t * c ) { 892 verify(this.tail != NULL); 893 *this.tail = c; 894 this.tail = &c->next; 895 } 896 897 __condition_node_t * pop_head( __condition_blocked_queue_t & this ) { 898 __condition_node_t * head = this.head; 900 899 if( head ) { 901 this ->head = head->next;900 this.head = head->next; 902 901 if( !head->next ) { 903 this ->tail = &this->head;902 this.tail = &this.head; 904 903 } 905 904 head->next = NULL; -
src/tests/.expect/32/literals.txt
r136ccd7 r4ee36bf0 5 5 __attribute__ ((__nothrow__,__leaf__,__noreturn__)) extern void exit(signed int __status); 6 6 extern signed int printf(const char *__restrict __format, ...); 7 void __for_each__A 2_0_0_0____operator_assign__PFt0_Rt0t0____constructor__PF_Rt0____constructor__PF_Rt0t0____destructor__PF_Rt0____operator_assign__PFt1_Rt1t1____constructor__PF_Rt1____constructor__PF_Rt1t1____destructor__PF_Rt1____operator_preincr__PFt0_Rt0____operator_predecr__PFt0_Rt0____operator_equal__PFi_t0t0____operator_notequal__PFi_t0t0____operator_deref__PFRt1_t0__F_t0t0PF_t1___1(__attribute__ ((unused)) void (*_adapterF_9telt_type__P)(void (*__anonymous_object0)(), void *__anonymous_object1), __attribute__ ((unused)) void *(*_adapterFP9telt_type_14titerator_type_M_P)(void (*__anonymous_object2)(), void *__anonymous_object3), __attribute__ ((unused)) signed int (*_adapterFi_14titerator_type14titerator_type_M_PP)(void (*__anonymous_object4)(), void *__anonymous_object5, void *__anonymous_object6), __attribute__ ((unused)) void (*_adapterF14titerator_type_P14titerator_type_P_M)(void (*__anonymous_object7)(), __attribute__ ((unused)) void *___retval__operator_preincr__14titerator_type_1, void *__anonymous_object8), __attribute__ ((unused)) void (*_adapterF_P9telt_type9telt_type__MP)(void (*__anonymous_object9)(), void *__anonymous_object10, void *__anonymous_object11), __attribute__ ((unused)) void (*_adapterF9telt_type_P9telt_type9telt_type_P_MP)(void (*__anonymous_object12)(), __attribute__ ((unused)) void *___retval__operator_assign__9telt_type_1, void *__anonymous_object13, void *__anonymous_object14), __attribute__ ((unused)) void (*_adapterF_P14titerator_type14titerator_type__MP)(void (*__anonymous_object15)(), void *__anonymous_object16, void *__anonymous_object17), __attribute__ ((unused)) void (*_adapterF14titerator_type_P14titerator_type14titerator_type_P_MP)(void (*__anonymous_object18)(), __attribute__ ((unused)) void *___retval__operator_assign__14titerator_type_1, void *__anonymous_object19, void *__anonymous_object20), __attribute__ ((unused)) unsigned long int _sizeof_14titerator_type, __attribute__ ((unused)) unsigned long int _alignof_14titerator_type, __attribute__ ((unused)) unsigned long int _sizeof_9telt_type, __attribute__ ((unused)) unsigned long int _alignof_9telt_type, __attribute__ ((unused)) void *(*___operator_assign__PF14titerator_type_R14titerator_type14titerator_type__1)(void *__anonymous_object21, void *__anonymous_object22), __attribute__ ((unused)) void (*___constructor__PF_R14titerator_type__1)(void *__anonymous_object23), __attribute__ ((unused)) void (*___constructor__PF_R14titerator_type14titerator_type__1)(void *__anonymous_object24, void *__anonymous_object25), __attribute__ ((unused)) void (*___destructor__PF_R14titerator_type__1)(void *__anonymous_object26), __attribute__ ((unused)) void *(*___operator_assign__PF9telt_type_R9telt_type9telt_type__1)(void *__anonymous_object27, void *__anonymous_object28), __attribute__ ((unused)) void (*___constructor__PF_R9telt_type__1)(void *__anonymous_object29), __attribute__ ((unused)) void (*___constructor__PF_R9telt_type9telt_type__1)(void *__anonymous_object30, void *__anonymous_object31), __attribute__ ((unused)) void (*___destructor__PF_R9telt_type__1)(void *__anonymous_object32), __attribute__ ((unused)) void *(*___operator_preincr__PF14titerator_type_R14titerator_type__1)(void *__anonymous_object33), __attribute__ ((unused)) void *(*___operator_predecr__PF14titerator_type_R14titerator_type__1)(void 
*__anonymous_object34), __attribute__ ((unused)) signed int (*___operator_equal__PFi_14titerator_type14titerator_type__1)(void *__anonymous_object35, void *__anonymous_object36), __attribute__ ((unused)) signed int (*___operator_notequal__PFi_14titerator_type14titerator_type__1)(void *__anonymous_object37, void *__anonymous_object38), __attribute__ ((unused)) void *(*___operator_deref__PFR9telt_type_14titerator_type__1)(void *__anonymous_object39), void *__begin__14titerator_type_1, void *__end__14titerator_type_1, void (*__func__PF_9telt_type__1)(void *__anonymous_object40));8 void __for_each_reverse__A 2_0_0_0____operator_assign__PFt0_Rt0t0____constructor__PF_Rt0____constructor__PF_Rt0t0____destructor__PF_Rt0____operator_assign__PFt1_Rt1t1____constructor__PF_Rt1____constructor__PF_Rt1t1____destructor__PF_Rt1____operator_preincr__PFt0_Rt0____operator_predecr__PFt0_Rt0____operator_equal__PFi_t0t0____operator_notequal__PFi_t0t0____operator_deref__PFRt1_t0__F_t0t0PF_t1___1(__attribute__ ((unused)) void (*_adapterF_9telt_type__P)(void (*__anonymous_object41)(), void *__anonymous_object42), __attribute__ ((unused)) void *(*_adapterFP9telt_type_14titerator_type_M_P)(void (*__anonymous_object43)(), void *__anonymous_object44), __attribute__ ((unused)) signed int (*_adapterFi_14titerator_type14titerator_type_M_PP)(void (*__anonymous_object45)(), void *__anonymous_object46, void *__anonymous_object47), __attribute__ ((unused)) void (*_adapterF14titerator_type_P14titerator_type_P_M)(void (*__anonymous_object48)(), __attribute__ ((unused)) void *___retval__operator_preincr__14titerator_type_1, void *__anonymous_object49), __attribute__ ((unused)) void (*_adapterF_P9telt_type9telt_type__MP)(void (*__anonymous_object50)(), void *__anonymous_object51, void *__anonymous_object52), __attribute__ ((unused)) void (*_adapterF9telt_type_P9telt_type9telt_type_P_MP)(void (*__anonymous_object53)(), __attribute__ ((unused)) void *___retval__operator_assign__9telt_type_1, void *__anonymous_object54, void *__anonymous_object55), __attribute__ ((unused)) void (*_adapterF_P14titerator_type14titerator_type__MP)(void (*__anonymous_object56)(), void *__anonymous_object57, void *__anonymous_object58), __attribute__ ((unused)) void (*_adapterF14titerator_type_P14titerator_type14titerator_type_P_MP)(void (*__anonymous_object59)(), __attribute__ ((unused)) void *___retval__operator_assign__14titerator_type_1, void *__anonymous_object60, void *__anonymous_object61), __attribute__ ((unused)) unsigned long int _sizeof_14titerator_type, __attribute__ ((unused)) unsigned long int _alignof_14titerator_type, __attribute__ ((unused)) unsigned long int _sizeof_9telt_type, __attribute__ ((unused)) unsigned long int _alignof_9telt_type, __attribute__ ((unused)) void *(*___operator_assign__PF14titerator_type_R14titerator_type14titerator_type__1)(void *__anonymous_object62, void *__anonymous_object63), __attribute__ ((unused)) void (*___constructor__PF_R14titerator_type__1)(void *__anonymous_object64), __attribute__ ((unused)) void (*___constructor__PF_R14titerator_type14titerator_type__1)(void *__anonymous_object65, void *__anonymous_object66), __attribute__ ((unused)) void (*___destructor__PF_R14titerator_type__1)(void *__anonymous_object67), __attribute__ ((unused)) void *(*___operator_assign__PF9telt_type_R9telt_type9telt_type__1)(void *__anonymous_object68, void *__anonymous_object69), __attribute__ ((unused)) void (*___constructor__PF_R9telt_type__1)(void *__anonymous_object70), __attribute__ ((unused)) void 
(*___constructor__PF_R9telt_type9telt_type__1)(void *__anonymous_object71, void *__anonymous_object72), __attribute__ ((unused)) void (*___destructor__PF_R9telt_type__1)(void *__anonymous_object73), __attribute__ ((unused)) void *(*___operator_preincr__PF14titerator_type_R14titerator_type__1)(void *__anonymous_object74), __attribute__ ((unused)) void *(*___operator_predecr__PF14titerator_type_R14titerator_type__1)(void *__anonymous_object75), __attribute__ ((unused)) signed int (*___operator_equal__PFi_14titerator_type14titerator_type__1)(void *__anonymous_object76, void *__anonymous_object77), __attribute__ ((unused)) signed int (*___operator_notequal__PFi_14titerator_type14titerator_type__1)(void *__anonymous_object78, void *__anonymous_object79), __attribute__ ((unused)) void *(*___operator_deref__PFR9telt_type_14titerator_type__1)(void *__anonymous_object80), void *__begin__14titerator_type_1, void *__end__14titerator_type_1, void (*__func__PF_9telt_type__1)(void *__anonymous_object81));7 void __for_each__A0_2_0_0____operator_assign__PFd0_Rd0d0____constructor__PF_Rd0____constructor__PF_Rd0d0____destructor__PF_Rd0____operator_assign__PFd1_Rd1d1____constructor__PF_Rd1____constructor__PF_Rd1d1____destructor__PF_Rd1____operator_preincr__PFd0_Rd0____operator_predecr__PFd0_Rd0____operator_equal__PFi_d0d0____operator_notequal__PFi_d0d0____operator_deref__PFRd1_d0__F_d0d0PF_d1___1(__attribute__ ((unused)) void (*_adapterF_9telt_type__P)(void (*__anonymous_object0)(), void *__anonymous_object1), __attribute__ ((unused)) void *(*_adapterFP9telt_type_14titerator_type_M_P)(void (*__anonymous_object2)(), void *__anonymous_object3), __attribute__ ((unused)) signed int (*_adapterFi_14titerator_type14titerator_type_M_PP)(void (*__anonymous_object4)(), void *__anonymous_object5, void *__anonymous_object6), __attribute__ ((unused)) void (*_adapterF14titerator_type_P14titerator_type_P_M)(void (*__anonymous_object7)(), __attribute__ ((unused)) void *___retval__operator_preincr__14titerator_type_1, void *__anonymous_object8), __attribute__ ((unused)) void (*_adapterF_P9telt_type9telt_type__MP)(void (*__anonymous_object9)(), void *__anonymous_object10, void *__anonymous_object11), __attribute__ ((unused)) void (*_adapterF9telt_type_P9telt_type9telt_type_P_MP)(void (*__anonymous_object12)(), __attribute__ ((unused)) void *___retval__operator_assign__9telt_type_1, void *__anonymous_object13, void *__anonymous_object14), __attribute__ ((unused)) void (*_adapterF_P14titerator_type14titerator_type__MP)(void (*__anonymous_object15)(), void *__anonymous_object16, void *__anonymous_object17), __attribute__ ((unused)) void (*_adapterF14titerator_type_P14titerator_type14titerator_type_P_MP)(void (*__anonymous_object18)(), __attribute__ ((unused)) void *___retval__operator_assign__14titerator_type_1, void *__anonymous_object19, void *__anonymous_object20), __attribute__ ((unused)) unsigned long int _sizeof_14titerator_type, __attribute__ ((unused)) unsigned long int _alignof_14titerator_type, __attribute__ ((unused)) unsigned long int _sizeof_9telt_type, __attribute__ ((unused)) unsigned long int _alignof_9telt_type, __attribute__ ((unused)) void *(*___operator_assign__PF14titerator_type_R14titerator_type14titerator_type__1)(void *__anonymous_object21, void *__anonymous_object22), __attribute__ ((unused)) void (*___constructor__PF_R14titerator_type__1)(void *__anonymous_object23), __attribute__ ((unused)) void (*___constructor__PF_R14titerator_type14titerator_type__1)(void *__anonymous_object24, void 
*__anonymous_object25), __attribute__ ((unused)) void (*___destructor__PF_R14titerator_type__1)(void *__anonymous_object26), __attribute__ ((unused)) void *(*___operator_assign__PF9telt_type_R9telt_type9telt_type__1)(void *__anonymous_object27, void *__anonymous_object28), __attribute__ ((unused)) void (*___constructor__PF_R9telt_type__1)(void *__anonymous_object29), __attribute__ ((unused)) void (*___constructor__PF_R9telt_type9telt_type__1)(void *__anonymous_object30, void *__anonymous_object31), __attribute__ ((unused)) void (*___destructor__PF_R9telt_type__1)(void *__anonymous_object32), __attribute__ ((unused)) void *(*___operator_preincr__PF14titerator_type_R14titerator_type__1)(void *__anonymous_object33), __attribute__ ((unused)) void *(*___operator_predecr__PF14titerator_type_R14titerator_type__1)(void *__anonymous_object34), __attribute__ ((unused)) signed int (*___operator_equal__PFi_14titerator_type14titerator_type__1)(void *__anonymous_object35, void *__anonymous_object36), __attribute__ ((unused)) signed int (*___operator_notequal__PFi_14titerator_type14titerator_type__1)(void *__anonymous_object37, void *__anonymous_object38), __attribute__ ((unused)) void *(*___operator_deref__PFR9telt_type_14titerator_type__1)(void *__anonymous_object39), void *__begin__14titerator_type_1, void *__end__14titerator_type_1, void (*__func__PF_9telt_type__1)(void *__anonymous_object40)); 8 void __for_each_reverse__A0_2_0_0____operator_assign__PFd0_Rd0d0____constructor__PF_Rd0____constructor__PF_Rd0d0____destructor__PF_Rd0____operator_assign__PFd1_Rd1d1____constructor__PF_Rd1____constructor__PF_Rd1d1____destructor__PF_Rd1____operator_preincr__PFd0_Rd0____operator_predecr__PFd0_Rd0____operator_equal__PFi_d0d0____operator_notequal__PFi_d0d0____operator_deref__PFRd1_d0__F_d0d0PF_d1___1(__attribute__ ((unused)) void (*_adapterF_9telt_type__P)(void (*__anonymous_object41)(), void *__anonymous_object42), __attribute__ ((unused)) void *(*_adapterFP9telt_type_14titerator_type_M_P)(void (*__anonymous_object43)(), void *__anonymous_object44), __attribute__ ((unused)) signed int (*_adapterFi_14titerator_type14titerator_type_M_PP)(void (*__anonymous_object45)(), void *__anonymous_object46, void *__anonymous_object47), __attribute__ ((unused)) void (*_adapterF14titerator_type_P14titerator_type_P_M)(void (*__anonymous_object48)(), __attribute__ ((unused)) void *___retval__operator_preincr__14titerator_type_1, void *__anonymous_object49), __attribute__ ((unused)) void (*_adapterF_P9telt_type9telt_type__MP)(void (*__anonymous_object50)(), void *__anonymous_object51, void *__anonymous_object52), __attribute__ ((unused)) void (*_adapterF9telt_type_P9telt_type9telt_type_P_MP)(void (*__anonymous_object53)(), __attribute__ ((unused)) void *___retval__operator_assign__9telt_type_1, void *__anonymous_object54, void *__anonymous_object55), __attribute__ ((unused)) void (*_adapterF_P14titerator_type14titerator_type__MP)(void (*__anonymous_object56)(), void *__anonymous_object57, void *__anonymous_object58), __attribute__ ((unused)) void (*_adapterF14titerator_type_P14titerator_type14titerator_type_P_MP)(void (*__anonymous_object59)(), __attribute__ ((unused)) void *___retval__operator_assign__14titerator_type_1, void *__anonymous_object60, void *__anonymous_object61), __attribute__ ((unused)) unsigned long int _sizeof_14titerator_type, __attribute__ ((unused)) unsigned long int _alignof_14titerator_type, __attribute__ ((unused)) unsigned long int _sizeof_9telt_type, __attribute__ ((unused)) unsigned long int 
_alignof_9telt_type, __attribute__ ((unused)) void *(*___operator_assign__PF14titerator_type_R14titerator_type14titerator_type__1)(void *__anonymous_object62, void *__anonymous_object63), __attribute__ ((unused)) void (*___constructor__PF_R14titerator_type__1)(void *__anonymous_object64), __attribute__ ((unused)) void (*___constructor__PF_R14titerator_type14titerator_type__1)(void *__anonymous_object65, void *__anonymous_object66), __attribute__ ((unused)) void (*___destructor__PF_R14titerator_type__1)(void *__anonymous_object67), __attribute__ ((unused)) void *(*___operator_assign__PF9telt_type_R9telt_type9telt_type__1)(void *__anonymous_object68, void *__anonymous_object69), __attribute__ ((unused)) void (*___constructor__PF_R9telt_type__1)(void *__anonymous_object70), __attribute__ ((unused)) void (*___constructor__PF_R9telt_type9telt_type__1)(void *__anonymous_object71, void *__anonymous_object72), __attribute__ ((unused)) void (*___destructor__PF_R9telt_type__1)(void *__anonymous_object73), __attribute__ ((unused)) void *(*___operator_preincr__PF14titerator_type_R14titerator_type__1)(void *__anonymous_object74), __attribute__ ((unused)) void *(*___operator_predecr__PF14titerator_type_R14titerator_type__1)(void *__anonymous_object75), __attribute__ ((unused)) signed int (*___operator_equal__PFi_14titerator_type14titerator_type__1)(void *__anonymous_object76, void *__anonymous_object77), __attribute__ ((unused)) signed int (*___operator_notequal__PFi_14titerator_type14titerator_type__1)(void *__anonymous_object78, void *__anonymous_object79), __attribute__ ((unused)) void *(*___operator_deref__PFR9telt_type_14titerator_type__1)(void *__anonymous_object80), void *__begin__14titerator_type_1, void *__end__14titerator_type_1, void (*__func__PF_9telt_type__1)(void *__anonymous_object81)); 9 9 void *___operator_bitor__A0_1_0_0___sepPrt__PFb_Pd0___sepReset__PF_Pd0___sepReset__PF_Pd0b___sepGetCur__PFPCc_Pd0___sepSetCur__PF_Pd0PCc___getNL__PFb_Pd0___setNL__PF_Pd0b___sepOn__PF_Pd0___sepOff__PF_Pd0___sepDisable__PFb_Pd0___sepEnable__PFb_Pd0___sepGet__PFPCc_Pd0___sepSet__PF_Pd0PCc___sepGetTuple__PFPCc_Pd0___sepSetTuple__PF_Pd0PCc___fail__PFi_Pd0___flush__PFi_Pd0___open__PF_Pd0PCcPCc___close__PF_Pd0___write__PFPd0_Pd0PCcUl___fmt__PFi_Pd0PCc__FPd0_Pd0c__1(__attribute__ ((unused)) _Bool (*__sepPrt__PFb_P7tostype__1)(void *__anonymous_object82), __attribute__ ((unused)) void (*__sepReset__PF_P7tostype__1)(void *__anonymous_object83), __attribute__ ((unused)) void (*__sepReset__PF_P7tostypeb__1)(void *__anonymous_object84, _Bool __anonymous_object85), __attribute__ ((unused)) const char *(*__sepGetCur__PFPCc_P7tostype__1)(void *__anonymous_object86), __attribute__ ((unused)) void (*__sepSetCur__PF_P7tostypePCc__1)(void *__anonymous_object87, const char *__anonymous_object88), __attribute__ ((unused)) _Bool (*__getNL__PFb_P7tostype__1)(void *__anonymous_object89), __attribute__ ((unused)) void (*__setNL__PF_P7tostypeb__1)(void *__anonymous_object90, _Bool __anonymous_object91), __attribute__ ((unused)) void (*__sepOn__PF_P7tostype__1)(void *__anonymous_object92), __attribute__ ((unused)) void (*__sepOff__PF_P7tostype__1)(void *__anonymous_object93), __attribute__ ((unused)) _Bool (*__sepDisable__PFb_P7tostype__1)(void *__anonymous_object94), __attribute__ ((unused)) _Bool (*__sepEnable__PFb_P7tostype__1)(void *__anonymous_object95), __attribute__ ((unused)) const char *(*__sepGet__PFPCc_P7tostype__1)(void *__anonymous_object96), __attribute__ ((unused)) void (*__sepSet__PF_P7tostypePCc__1)(void 
(Expected translator output, lines 10–43: the generated declarations of the polymorphic output operator ?|? for char, signed char, pointer, tuple, and manipulator arguments, plus endl, sepDisable, sepEnable, write, write_reverse, and the input operator ?|? for char and signed char references. Every declaration carries the stream assertions — sepPrt, sepReset, sepGetCur/sepSetCur, getNL/setNL, sepOn/sepOff, sepDisable/sepEnable, sepGet/sepSet, sepGetTuple/sepSetTuple, fail, flush, open, close, write, fmt — as function-pointer parameters, and the polymorphic cases additionally take adapter thunks and _sizeof_/_alignof_ arguments.)
(Changed lines 31, 40, and 41: the mangled names switch from the old encoding A1_1_0_1 / A2_1_0_0 with type parameters t1 and t2 to the new encoding A0_2_0_1 / A0_3_0_0 with d1 and d2; the parameter lists are otherwise unchanged.)
(Expected translator output, lines 466–470: the expanded statement that prints "char ", the variable __v__c_1, and endl, built from the temporaries _tmp_cp_ret0–_tmp_cp_ret2 and the generated _thunk0 wrapper around endl; line 468 appears on both sides of the diff with the updated mangled call.)
*__anonymous_object1326))__sepGetCur__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1327, const char *__anonymous_object1328))__sepSetCur__F_P9sofstreamPCc__1), ((_Bool (*)(void *__anonymous_object1329))__getNL__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1330, _Bool __anonymous_object1331))__setNL__F_P9sofstreamb__1), ((void (*)(void *__anonymous_object1332))__sepOn__F_P9sofstream__1), ((void (*)(void *__anonymous_object1333))__sepOff__F_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1334))__sepDisable__Fb_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1335))__sepEnable__Fb_P9sofstream__1), ((const char *(*)(void *__anonymous_object1336))__sepGet__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1337, const char *__anonymous_object1338))__sepSet__F_P9sofstreamPCc__1), ((const char *(*)(void *__anonymous_object1339))__sepGetTuple__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1340, const char *__anonymous_object1341))__sepSetTuple__F_P9sofstreamPCc__1), ((signed int (*)(void *__anonymous_object1342))__fail__Fi_P9sofstream__1), ((signed int (*)(void *__anonymous_object1343))__flush__Fi_P9sofstream__1), ((void (*)(void *__os__P7tostype_1, const char *__name__PCc_1, const char *__mode__PCc_1))__open__F_P9sofstreamPCcPCc__1), ((void (*)(void *__os__P7tostype_1))__close__F_P9sofstream__1), ((void *(*)(void *__anonymous_object1344, const char *__anonymous_object1345, unsigned long int __anonymous_object1346))__write__FP9sofstream_P9sofstreamPCcUl__1), ((signed int (*)(void *__anonymous_object1347, const char *__fmt__PCc_1, ...))__fmt__Fi_P9sofstreamPCc__1), ((void *)_p0)); 469 } 470 ((void)(((void)(_tmp_cp_ret2=___operator_bitor__A0_1_0_0___sepPrt__PFb_Pd0___sepReset__PF_Pd0___sepReset__PF_Pd0b___sepGetCur__PFPCc_Pd0___sepSetCur__PF_Pd0PCc___getNL__PFb_Pd0___setNL__PF_Pd0b___sepOn__PF_Pd0___sepOff__PF_Pd0___sepDisable__PFb_Pd0___sepEnable__PFb_Pd0___sepGet__PFPCc_Pd0___sepSet__PF_Pd0PCc___sepGetTuple__PFPCc_Pd0___sepSetTuple__PF_Pd0PCc___fail__PFi_Pd0___flush__PFi_Pd0___open__PF_Pd0PCcPCc___close__PF_Pd0___write__PFPd0_Pd0PCcUl___fmt__PFi_Pd0PCc__FPd0_Pd0PFPd0_Pd0___1(((_Bool (*)(void *__anonymous_object1348))__sepPrt__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1349))__sepReset__F_P9sofstream__1), ((void (*)(void *__anonymous_object1350, _Bool __anonymous_object1351))__sepReset__F_P9sofstreamb__1), ((const char *(*)(void *__anonymous_object1352))__sepGetCur__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1353, const char *__anonymous_object1354))__sepSetCur__F_P9sofstreamPCc__1), ((_Bool (*)(void *__anonymous_object1355))__getNL__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1356, _Bool __anonymous_object1357))__setNL__F_P9sofstreamb__1), ((void (*)(void *__anonymous_object1358))__sepOn__F_P9sofstream__1), ((void (*)(void *__anonymous_object1359))__sepOff__F_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1360))__sepDisable__Fb_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1361))__sepEnable__Fb_P9sofstream__1), ((const char *(*)(void *__anonymous_object1362))__sepGet__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1363, const char *__anonymous_object1364))__sepSet__F_P9sofstreamPCc__1), ((const char *(*)(void *__anonymous_object1365))__sepGetTuple__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1366, const char *__anonymous_object1367))__sepSetTuple__F_P9sofstreamPCc__1), ((signed int (*)(void *__anonymous_object1368))__fail__Fi_P9sofstream__1), ((signed int (*)(void 
*__anonymous_object1369))__flush__Fi_P9sofstream__1), ((void (*)(void *__os__P7tostype_1, const char *__name__PCc_1, const char *__mode__PCc_1))__open__F_P9sofstreamPCcPCc__1), ((void (*)(void *__os__P7tostype_1))__close__F_P9sofstream__1), ((void *(*)(void *__anonymous_object1370, const char *__anonymous_object1371, unsigned long int __anonymous_object1372))__write__FP9sofstream_P9sofstreamPCcUl__1), ((signed int (*)(void *__anonymous_object1373, const char *__fmt__PCc_1, ...))__fmt__Fi_P9sofstreamPCc__1), ((void *)(((void)(_tmp_cp_ret1=___operator_bitor__A0_1_0_0___sepPrt__PFb_Pd0___sepReset__PF_Pd0___sepReset__PF_Pd0b___sepGetCur__PFPCc_Pd0___sepSetCur__PF_Pd0PCc___getNL__PFb_Pd0___setNL__PF_Pd0b___sepOn__PF_Pd0___sepOff__PF_Pd0___sepDisable__PFb_Pd0___sepEnable__PFb_Pd0___sepGet__PFPCc_Pd0___sepSet__PF_Pd0PCc___sepGetTuple__PFPCc_Pd0___sepSetTuple__PF_Pd0PCc___fail__PFi_Pd0___flush__PFi_Pd0___open__PF_Pd0PCcPCc___close__PF_Pd0___write__PFPd0_Pd0PCcUl___fmt__PFi_Pd0PCc__FPd0_Pd0c__1(((_Bool (*)(void *__anonymous_object1374))__sepPrt__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1375))__sepReset__F_P9sofstream__1), ((void (*)(void *__anonymous_object1376, _Bool __anonymous_object1377))__sepReset__F_P9sofstreamb__1), ((const char *(*)(void *__anonymous_object1378))__sepGetCur__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1379, const char *__anonymous_object1380))__sepSetCur__F_P9sofstreamPCc__1), ((_Bool (*)(void *__anonymous_object1381))__getNL__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1382, _Bool __anonymous_object1383))__setNL__F_P9sofstreamb__1), ((void (*)(void *__anonymous_object1384))__sepOn__F_P9sofstream__1), ((void (*)(void *__anonymous_object1385))__sepOff__F_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1386))__sepDisable__Fb_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1387))__sepEnable__Fb_P9sofstream__1), ((const char *(*)(void *__anonymous_object1388))__sepGet__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1389, const char *__anonymous_object1390))__sepSet__F_P9sofstreamPCc__1), ((const char *(*)(void *__anonymous_object1391))__sepGetTuple__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1392, const char *__anonymous_object1393))__sepSetTuple__F_P9sofstreamPCc__1), ((signed int (*)(void *__anonymous_object1394))__fail__Fi_P9sofstream__1), ((signed int (*)(void *__anonymous_object1395))__flush__Fi_P9sofstream__1), ((void (*)(void *__os__P7tostype_1, const char *__name__PCc_1, const char *__mode__PCc_1))__open__F_P9sofstreamPCcPCc__1), ((void (*)(void *__os__P7tostype_1))__close__F_P9sofstream__1), ((void *(*)(void *__anonymous_object1396, const char *__anonymous_object1397, unsigned long int __anonymous_object1398))__write__FP9sofstream_P9sofstreamPCcUl__1), ((signed int (*)(void *__anonymous_object1399, const char *__fmt__PCc_1, ...))__fmt__Fi_P9sofstreamPCc__1), ((void *)(((void)(_tmp_cp_ret0=___operator_bitor__A0_1_0_0___sepPrt__PFb_Pd0___sepReset__PF_Pd0___sepReset__PF_Pd0b___sepGetCur__PFPCc_Pd0___sepSetCur__PF_Pd0PCc___getNL__PFb_Pd0___setNL__PF_Pd0b___sepOn__PF_Pd0___sepOff__PF_Pd0___sepDisable__PFb_Pd0___sepEnable__PFb_Pd0___sepGet__PFPCc_Pd0___sepSet__PF_Pd0PCc___sepGetTuple__PFPCc_Pd0___sepSetTuple__PF_Pd0PCc___fail__PFi_Pd0___flush__PFi_Pd0___open__PF_Pd0PCcPCc___close__PF_Pd0___write__PFPd0_Pd0PCcUl___fmt__PFi_Pd0PCc__FPd0_Pd0PCc__1(((_Bool (*)(void *__anonymous_object1400))__sepPrt__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1401))__sepReset__F_P9sofstream__1), ((void (*)(void 
*__anonymous_object1402, _Bool __anonymous_object1403))__sepReset__F_P9sofstreamb__1), ((const char *(*)(void *__anonymous_object1404))__sepGetCur__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1405, const char *__anonymous_object1406))__sepSetCur__F_P9sofstreamPCc__1), ((_Bool (*)(void *__anonymous_object1407))__getNL__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1408, _Bool __anonymous_object1409))__setNL__F_P9sofstreamb__1), ((void (*)(void *__anonymous_object1410))__sepOn__F_P9sofstream__1), ((void (*)(void *__anonymous_object1411))__sepOff__F_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1412))__sepDisable__Fb_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1413))__sepEnable__Fb_P9sofstream__1), ((const char *(*)(void *__anonymous_object1414))__sepGet__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1415, const char *__anonymous_object1416))__sepSet__F_P9sofstreamPCc__1), ((const char *(*)(void *__anonymous_object1417))__sepGetTuple__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1418, const char *__anonymous_object1419))__sepSetTuple__F_P9sofstreamPCc__1), ((signed int (*)(void *__anonymous_object1420))__fail__Fi_P9sofstream__1), ((signed int (*)(void *__anonymous_object1421))__flush__Fi_P9sofstream__1), ((void (*)(void *__os__P7tostype_1, const char *__name__PCc_1, const char *__mode__PCc_1))__open__F_P9sofstreamPCcPCc__1), ((void (*)(void *__os__P7tostype_1))__close__F_P9sofstream__1), ((void *(*)(void *__anonymous_object1422, const char *__anonymous_object1423, unsigned long int __anonymous_object1424))__write__FP9sofstream_P9sofstreamPCcUl__1), ((signed int (*)(void *__anonymous_object1425, const char *__fmt__PCc_1, ...))__fmt__Fi_P9sofstreamPCc__1), ((void *)__sout__P9sofstream_1), "char "))) , _tmp_cp_ret0)), __v__c_1))) , _tmp_cp_ret1)), ((void *(*)(void *__anonymous_object1426))(&_thunk0))))) , _tmp_cp_ret2)); 471 471 ((void)(_tmp_cp_ret0) /* ^?{} */); 472 472 ((void)(_tmp_cp_ret1) /* ^?{} */); … … 478 478 struct ofstream *_tmp_cp_ret5; 479 479 __attribute__ ((unused)) struct ofstream *_thunk1(struct ofstream *_p0){ 480 return __endl__A0_1_0_0___sepPrt__PFb_Pd0___sepReset__PF_Pd0___sepReset__PF_Pd0b___sepGetCur__PFPCc_Pd0___sepSetCur__PF_Pd0PCc___getNL__PFb_Pd0___setNL__PF_Pd0b___sepOn__PF_Pd0___sepOff__PF_Pd0___sepDisable__PFb_Pd0___sepEnable__PFb_Pd0___sepGet__PFPCc_Pd0___sepSet__PF_Pd0PCc___sepGetTuple__PFPCc_Pd0___sepSetTuple__PF_Pd0PCc___fail__PFi_Pd0___flush__PFi_Pd0___open__PF_Pd0PCcPCc___close__PF_Pd0___write__PFPd0_Pd0PCcUl___fmt__PFi_Pd0PCc__FPd0_Pd0__1(((_Bool (*)(void *__anonymous_object1427))__sepPrt__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1428))__sepReset__F_P9sofstream__1), ((void (*)(void *__anonymous_object1429, _Bool __anonymous_object1430))__sepReset__F_P9sofstreamb__1), ((const char *(*)(void *__anonymous_object1431))__sepGetCur__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1432, const char *__anonymous_object1433))__sepSetCur__F_P9sofstreamPCc__1), ((_Bool (*)(void *__anonymous_object1434))__getNL__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1435, _Bool __anonymous_object1436))__setNL__F_P9sofstreamb__1), ((void (*)(void *__anonymous_object1437))__sepOn__F_P9sofstream__1), ((void (*)(void *__anonymous_object1438))__sepOff__F_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1439))__sepDisable__Fb_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1440))__sepEnable__Fb_P9sofstream__1), ((const char *(*)(void 
*__anonymous_object1441))__sepGet__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1442, const char *__anonymous_object1443))__sepSet__F_P9sofstreamPCc__1), ((const char *(*)(void *__anonymous_object1444))__sepGetTuple__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1445, const char *__anonymous_object1446))__sepSetTuple__F_P9sofstreamPCc__1), ((signed int (*)(void *__anonymous_object1447))__fail__Fi_P9sofstream__1), ((signed int (*)(void *__anonymous_object1448))__flush__Fi_P9sofstream__1), ((void (*)(void *__os__P7tostype_1, const char *__name__PCc_1, const char *__mode__PCc_1))__open__F_P9sofstreamPCcPCc__1), ((void (*)(void *__os__P7tostype_1))__close__F_P9sofstream__1), ((void *(*)(void *__anonymous_object1449, const char *__anonymous_object1450, unsigned long int __anonymous_object1451))__write__FP9sofstream_P9sofstreamPCcUl__1), ((signed int (*)(void *__anonymous_object1452, const char *__fmt__PCc_1, ...))__fmt__Fi_P9sofstreamPCc__1), _p0);481 } 482 ((void)(((void)(_tmp_cp_ret5=___operator_bitor__A0_1_0_0___sepPrt__PFb_Pd0___sepReset__PF_Pd0___sepReset__PF_Pd0b___sepGetCur__PFPCc_Pd0___sepSetCur__PF_Pd0PCc___getNL__PFb_Pd0___setNL__PF_Pd0b___sepOn__PF_Pd0___sepOff__PF_Pd0___sepDisable__PFb_Pd0___sepEnable__PFb_Pd0___sepGet__PFPCc_Pd0___sepSet__PF_Pd0PCc___sepGetTuple__PFPCc_Pd0___sepSetTuple__PF_Pd0PCc___fail__PFi_Pd0___flush__PFi_Pd0___open__PF_Pd0PCcPCc___close__PF_Pd0___write__PFPd0_Pd0PCcUl___fmt__PFi_Pd0PCc__FPd0_Pd0PFPd0_Pd0___1(((_Bool (*)(void *__anonymous_object1453))__sepPrt__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1454))__sepReset__F_P9sofstream__1), ((void (*)(void *__anonymous_object1455, _Bool __anonymous_object1456))__sepReset__F_P9sofstreamb__1), ((const char *(*)(void *__anonymous_object1457))__sepGetCur__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1458, const char *__anonymous_object1459))__sepSetCur__F_P9sofstreamPCc__1), ((_Bool (*)(void *__anonymous_object1460))__getNL__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1461, _Bool __anonymous_object1462))__setNL__F_P9sofstreamb__1), ((void (*)(void *__anonymous_object1463))__sepOn__F_P9sofstream__1), ((void (*)(void *__anonymous_object1464))__sepOff__F_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1465))__sepDisable__Fb_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1466))__sepEnable__Fb_P9sofstream__1), ((const char *(*)(void *__anonymous_object1467))__sepGet__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1468, const char *__anonymous_object1469))__sepSet__F_P9sofstreamPCc__1), ((const char *(*)(void *__anonymous_object1470))__sepGetTuple__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1471, const char *__anonymous_object1472))__sepSetTuple__F_P9sofstreamPCc__1), ((signed int (*)(void *__anonymous_object1473))__fail__Fi_P9sofstream__1), ((signed int (*)(void *__anonymous_object1474))__flush__Fi_P9sofstream__1), ((void (*)(void *__os__P7tostype_1, const char *__name__PCc_1, const char *__mode__PCc_1))__open__F_P9sofstreamPCcPCc__1), ((void (*)(void *__os__P7tostype_1))__close__F_P9sofstream__1), ((void *(*)(void *__anonymous_object1475, const char *__anonymous_object1476, unsigned long int __anonymous_object1477))__write__FP9sofstream_P9sofstreamPCcUl__1), ((signed int (*)(void *__anonymous_object1478, const char *__fmt__PCc_1, ...))__fmt__Fi_P9sofstreamPCc__1), (( 
(void)(_tmp_cp_ret4=___operator_bitor__A0_1_0_0___sepPrt__PFb_Pd0___sepReset__PF_Pd0___sepReset__PF_Pd0b___sepGetCur__PFPCc_Pd0___sepSetCur__PF_Pd0PCc___getNL__PFb_Pd0___setNL__PF_Pd0b___sepOn__PF_Pd0___sepOff__PF_Pd0___sepDisable__PFb_Pd0___sepEnable__PFb_Pd0___sepGet__PFPCc_Pd0___sepSet__PF_Pd0PCc___sepGetTuple__PFPCc_Pd0___sepSetTuple__PF_Pd0PCc___fail__PFi_Pd0___flush__PFi_Pd0___open__PF_Pd0PCcPCc___close__PF_Pd0___write__PFPd0_Pd0PCcUl___fmt__PFi_Pd0PCc__FPd0_Pd0Sc__1(((_Bool (*)(void *__anonymous_object1479))__sepPrt__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1480))__sepReset__F_P9sofstream__1), ((void (*)(void *__anonymous_object1481, _Bool __anonymous_object1482))__sepReset__F_P9sofstreamb__1), ((const char *(*)(void *__anonymous_object1483))__sepGetCur__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1484, const char *__anonymous_object1485))__sepSetCur__F_P9sofstreamPCc__1), ((_Bool (*)(void *__anonymous_object1486))__getNL__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1487, _Bool __anonymous_object1488))__setNL__F_P9sofstreamb__1), ((void (*)(void *__anonymous_object1489))__sepOn__F_P9sofstream__1), ((void (*)(void *__anonymous_object1490))__sepOff__F_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1491))__sepDisable__Fb_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1492))__sepEnable__Fb_P9sofstream__1), ((const char *(*)(void *__anonymous_object1493))__sepGet__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1494, const char *__anonymous_object1495))__sepSet__F_P9sofstreamPCc__1), ((const char *(*)(void *__anonymous_object1496))__sepGetTuple__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1497, const char *__anonymous_object1498))__sepSetTuple__F_P9sofstreamPCc__1), ((signed int (*)(void *__anonymous_object1499))__fail__Fi_P9sofstream__1), ((signed int (*)(void *__anonymous_object1500))__flush__Fi_P9sofstream__1), ((void (*)(void *__os__P7tostype_1, const char *__name__PCc_1, const char *__mode__PCc_1))__open__F_P9sofstreamPCcPCc__1), ((void (*)(void *__os__P7tostype_1))__close__F_P9sofstream__1), ((void *(*)(void *__anonymous_object1501, const char *__anonymous_object1502, unsigned long int __anonymous_object1503))__write__FP9sofstream_P9sofstreamPCcUl__1), ((signed int (*)(void *__anonymous_object1504, const char *__fmt__PCc_1, ...))__fmt__Fi_P9sofstreamPCc__1), (((void)(_tmp_cp_ret3=___operator_bitor__A0_1_0_0___sepPrt__PFb_Pd0___sepReset__PF_Pd0___sepReset__PF_Pd0b___sepGetCur__PFPCc_Pd0___sepSetCur__PF_Pd0PCc___getNL__PFb_Pd0___setNL__PF_Pd0b___sepOn__PF_Pd0___sepOff__PF_Pd0___sepDisable__PFb_Pd0___sepEnable__PFb_Pd0___sepGet__PFPCc_Pd0___sepSet__PF_Pd0PCc___sepGetTuple__PFPCc_Pd0___sepSetTuple__PF_Pd0PCc___fail__PFi_Pd0___flush__PFi_Pd0___open__PF_Pd0PCcPCc___close__PF_Pd0___write__PFPd0_Pd0PCcUl___fmt__PFi_Pd0PCc__FPd0_Pd0PCc__1(((_Bool (*)(void *__anonymous_object1505))__sepPrt__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1506))__sepReset__F_P9sofstream__1), ((void (*)(void *__anonymous_object1507, _Bool __anonymous_object1508))__sepReset__F_P9sofstreamb__1), ((const char *(*)(void *__anonymous_object1509))__sepGetCur__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1510, const char *__anonymous_object1511))__sepSetCur__F_P9sofstreamPCc__1), ((_Bool (*)(void *__anonymous_object1512))__getNL__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1513, _Bool __anonymous_object1514))__setNL__F_P9sofstreamb__1), ((void (*)(void *__anonymous_object1515))__sepOn__F_P9sofstream__1), ((void 
(*)(void *__anonymous_object1516))__sepOff__F_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1517))__sepDisable__Fb_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1518))__sepEnable__Fb_P9sofstream__1), ((const char *(*)(void *__anonymous_object1519))__sepGet__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1520, const char *__anonymous_object1521))__sepSet__F_P9sofstreamPCc__1), ((const char *(*)(void *__anonymous_object1522))__sepGetTuple__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1523, const char *__anonymous_object1524))__sepSetTuple__F_P9sofstreamPCc__1), ((signed int (*)(void *__anonymous_object1525))__fail__Fi_P9sofstream__1), ((signed int (*)(void *__anonymous_object1526))__flush__Fi_P9sofstream__1), ((void (*)(void *__os__P7tostype_1, const char *__name__PCc_1, const char *__mode__PCc_1))__open__F_P9sofstreamPCcPCc__1), ((void (*)(void *__os__P7tostype_1))__close__F_P9sofstream__1), ((void *(*)(void *__anonymous_object1527, const char *__anonymous_object1528, unsigned long int __anonymous_object1529))__write__FP9sofstream_P9sofstreamPCcUl__1), ((signed int (*)(void *__anonymous_object1530, const char *__fmt__PCc_1, ...))__fmt__Fi_P9sofstreamPCc__1), __sout__P9sofstream_1, "signed char "))) , _tmp_cp_ret3), __v__Sc_1))) , _tmp_cp_ret4), ((void *(*)(void *__anonymous_object1531))(&_thunk1))))) , _tmp_cp_ret5));480 return __endl__A0_1_0_0___sepPrt__PFb_Pd0___sepReset__PF_Pd0___sepReset__PF_Pd0b___sepGetCur__PFPCc_Pd0___sepSetCur__PF_Pd0PCc___getNL__PFb_Pd0___setNL__PF_Pd0b___sepOn__PF_Pd0___sepOff__PF_Pd0___sepDisable__PFb_Pd0___sepEnable__PFb_Pd0___sepGet__PFPCc_Pd0___sepSet__PF_Pd0PCc___sepGetTuple__PFPCc_Pd0___sepSetTuple__PF_Pd0PCc___fail__PFi_Pd0___flush__PFi_Pd0___open__PF_Pd0PCcPCc___close__PF_Pd0___write__PFPd0_Pd0PCcUl___fmt__PFi_Pd0PCc__FPd0_Pd0__1(((_Bool (*)(void *__anonymous_object1427))__sepPrt__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1428))__sepReset__F_P9sofstream__1), ((void (*)(void *__anonymous_object1429, _Bool __anonymous_object1430))__sepReset__F_P9sofstreamb__1), ((const char *(*)(void *__anonymous_object1431))__sepGetCur__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1432, const char *__anonymous_object1433))__sepSetCur__F_P9sofstreamPCc__1), ((_Bool (*)(void *__anonymous_object1434))__getNL__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1435, _Bool __anonymous_object1436))__setNL__F_P9sofstreamb__1), ((void (*)(void *__anonymous_object1437))__sepOn__F_P9sofstream__1), ((void (*)(void *__anonymous_object1438))__sepOff__F_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1439))__sepDisable__Fb_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1440))__sepEnable__Fb_P9sofstream__1), ((const char *(*)(void *__anonymous_object1441))__sepGet__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1442, const char *__anonymous_object1443))__sepSet__F_P9sofstreamPCc__1), ((const char *(*)(void *__anonymous_object1444))__sepGetTuple__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1445, const char *__anonymous_object1446))__sepSetTuple__F_P9sofstreamPCc__1), ((signed int (*)(void *__anonymous_object1447))__fail__Fi_P9sofstream__1), ((signed int (*)(void *__anonymous_object1448))__flush__Fi_P9sofstream__1), ((void (*)(void *__os__P7tostype_1, const char *__name__PCc_1, const char *__mode__PCc_1))__open__F_P9sofstreamPCcPCc__1), ((void (*)(void *__os__P7tostype_1))__close__F_P9sofstream__1), ((void *(*)(void *__anonymous_object1449, const char *__anonymous_object1450, unsigned long int 
__anonymous_object1451))__write__FP9sofstream_P9sofstreamPCcUl__1), ((signed int (*)(void *__anonymous_object1452, const char *__fmt__PCc_1, ...))__fmt__Fi_P9sofstreamPCc__1), ((void *)_p0)); 481 } 482 ((void)(((void)(_tmp_cp_ret5=___operator_bitor__A0_1_0_0___sepPrt__PFb_Pd0___sepReset__PF_Pd0___sepReset__PF_Pd0b___sepGetCur__PFPCc_Pd0___sepSetCur__PF_Pd0PCc___getNL__PFb_Pd0___setNL__PF_Pd0b___sepOn__PF_Pd0___sepOff__PF_Pd0___sepDisable__PFb_Pd0___sepEnable__PFb_Pd0___sepGet__PFPCc_Pd0___sepSet__PF_Pd0PCc___sepGetTuple__PFPCc_Pd0___sepSetTuple__PF_Pd0PCc___fail__PFi_Pd0___flush__PFi_Pd0___open__PF_Pd0PCcPCc___close__PF_Pd0___write__PFPd0_Pd0PCcUl___fmt__PFi_Pd0PCc__FPd0_Pd0PFPd0_Pd0___1(((_Bool (*)(void *__anonymous_object1453))__sepPrt__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1454))__sepReset__F_P9sofstream__1), ((void (*)(void *__anonymous_object1455, _Bool __anonymous_object1456))__sepReset__F_P9sofstreamb__1), ((const char *(*)(void *__anonymous_object1457))__sepGetCur__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1458, const char *__anonymous_object1459))__sepSetCur__F_P9sofstreamPCc__1), ((_Bool (*)(void *__anonymous_object1460))__getNL__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1461, _Bool __anonymous_object1462))__setNL__F_P9sofstreamb__1), ((void (*)(void *__anonymous_object1463))__sepOn__F_P9sofstream__1), ((void (*)(void *__anonymous_object1464))__sepOff__F_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1465))__sepDisable__Fb_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1466))__sepEnable__Fb_P9sofstream__1), ((const char *(*)(void *__anonymous_object1467))__sepGet__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1468, const char *__anonymous_object1469))__sepSet__F_P9sofstreamPCc__1), ((const char *(*)(void *__anonymous_object1470))__sepGetTuple__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1471, const char *__anonymous_object1472))__sepSetTuple__F_P9sofstreamPCc__1), ((signed int (*)(void *__anonymous_object1473))__fail__Fi_P9sofstream__1), ((signed int (*)(void *__anonymous_object1474))__flush__Fi_P9sofstream__1), ((void (*)(void *__os__P7tostype_1, const char *__name__PCc_1, const char *__mode__PCc_1))__open__F_P9sofstreamPCcPCc__1), ((void (*)(void *__os__P7tostype_1))__close__F_P9sofstream__1), ((void *(*)(void *__anonymous_object1475, const char *__anonymous_object1476, unsigned long int __anonymous_object1477))__write__FP9sofstream_P9sofstreamPCcUl__1), ((signed int (*)(void *__anonymous_object1478, const char *__fmt__PCc_1, ...))__fmt__Fi_P9sofstreamPCc__1), ((void *)(((void)(_tmp_cp_ret4=___operator_bitor__A0_1_0_0___sepPrt__PFb_Pd0___sepReset__PF_Pd0___sepReset__PF_Pd0b___sepGetCur__PFPCc_Pd0___sepSetCur__PF_Pd0PCc___getNL__PFb_Pd0___setNL__PF_Pd0b___sepOn__PF_Pd0___sepOff__PF_Pd0___sepDisable__PFb_Pd0___sepEnable__PFb_Pd0___sepGet__PFPCc_Pd0___sepSet__PF_Pd0PCc___sepGetTuple__PFPCc_Pd0___sepSetTuple__PF_Pd0PCc___fail__PFi_Pd0___flush__PFi_Pd0___open__PF_Pd0PCcPCc___close__PF_Pd0___write__PFPd0_Pd0PCcUl___fmt__PFi_Pd0PCc__FPd0_Pd0Sc__1(((_Bool (*)(void *__anonymous_object1479))__sepPrt__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1480))__sepReset__F_P9sofstream__1), ((void (*)(void *__anonymous_object1481, _Bool __anonymous_object1482))__sepReset__F_P9sofstreamb__1), ((const char *(*)(void *__anonymous_object1483))__sepGetCur__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1484, const char *__anonymous_object1485))__sepSetCur__F_P9sofstreamPCc__1), ((_Bool (*)(void 
*__anonymous_object1486))__getNL__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1487, _Bool __anonymous_object1488))__setNL__F_P9sofstreamb__1), ((void (*)(void *__anonymous_object1489))__sepOn__F_P9sofstream__1), ((void (*)(void *__anonymous_object1490))__sepOff__F_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1491))__sepDisable__Fb_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1492))__sepEnable__Fb_P9sofstream__1), ((const char *(*)(void *__anonymous_object1493))__sepGet__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1494, const char *__anonymous_object1495))__sepSet__F_P9sofstreamPCc__1), ((const char *(*)(void *__anonymous_object1496))__sepGetTuple__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1497, const char *__anonymous_object1498))__sepSetTuple__F_P9sofstreamPCc__1), ((signed int (*)(void *__anonymous_object1499))__fail__Fi_P9sofstream__1), ((signed int (*)(void *__anonymous_object1500))__flush__Fi_P9sofstream__1), ((void (*)(void *__os__P7tostype_1, const char *__name__PCc_1, const char *__mode__PCc_1))__open__F_P9sofstreamPCcPCc__1), ((void (*)(void *__os__P7tostype_1))__close__F_P9sofstream__1), ((void *(*)(void *__anonymous_object1501, const char *__anonymous_object1502, unsigned long int __anonymous_object1503))__write__FP9sofstream_P9sofstreamPCcUl__1), ((signed int (*)(void *__anonymous_object1504, const char *__fmt__PCc_1, ...))__fmt__Fi_P9sofstreamPCc__1), ((void *)(((void)(_tmp_cp_ret3=___operator_bitor__A0_1_0_0___sepPrt__PFb_Pd0___sepReset__PF_Pd0___sepReset__PF_Pd0b___sepGetCur__PFPCc_Pd0___sepSetCur__PF_Pd0PCc___getNL__PFb_Pd0___setNL__PF_Pd0b___sepOn__PF_Pd0___sepOff__PF_Pd0___sepDisable__PFb_Pd0___sepEnable__PFb_Pd0___sepGet__PFPCc_Pd0___sepSet__PF_Pd0PCc___sepGetTuple__PFPCc_Pd0___sepSetTuple__PF_Pd0PCc___fail__PFi_Pd0___flush__PFi_Pd0___open__PF_Pd0PCcPCc___close__PF_Pd0___write__PFPd0_Pd0PCcUl___fmt__PFi_Pd0PCc__FPd0_Pd0PCc__1(((_Bool (*)(void *__anonymous_object1505))__sepPrt__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1506))__sepReset__F_P9sofstream__1), ((void (*)(void *__anonymous_object1507, _Bool __anonymous_object1508))__sepReset__F_P9sofstreamb__1), ((const char *(*)(void *__anonymous_object1509))__sepGetCur__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1510, const char *__anonymous_object1511))__sepSetCur__F_P9sofstreamPCc__1), ((_Bool (*)(void *__anonymous_object1512))__getNL__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1513, _Bool __anonymous_object1514))__setNL__F_P9sofstreamb__1), ((void (*)(void *__anonymous_object1515))__sepOn__F_P9sofstream__1), ((void (*)(void *__anonymous_object1516))__sepOff__F_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1517))__sepDisable__Fb_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1518))__sepEnable__Fb_P9sofstream__1), ((const char *(*)(void *__anonymous_object1519))__sepGet__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1520, const char *__anonymous_object1521))__sepSet__F_P9sofstreamPCc__1), ((const char *(*)(void *__anonymous_object1522))__sepGetTuple__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1523, const char *__anonymous_object1524))__sepSetTuple__F_P9sofstreamPCc__1), ((signed int (*)(void *__anonymous_object1525))__fail__Fi_P9sofstream__1), ((signed int (*)(void *__anonymous_object1526))__flush__Fi_P9sofstream__1), ((void (*)(void *__os__P7tostype_1, const char *__name__PCc_1, const char *__mode__PCc_1))__open__F_P9sofstreamPCcPCc__1), ((void (*)(void 
*__os__P7tostype_1))__close__F_P9sofstream__1), ((void *(*)(void *__anonymous_object1527, const char *__anonymous_object1528, unsigned long int __anonymous_object1529))__write__FP9sofstream_P9sofstreamPCcUl__1), ((signed int (*)(void *__anonymous_object1530, const char *__fmt__PCc_1, ...))__fmt__Fi_P9sofstreamPCc__1), ((void *)__sout__P9sofstream_1), "signed char "))) , _tmp_cp_ret3)), __v__Sc_1))) , _tmp_cp_ret4)), ((void *(*)(void *__anonymous_object1531))(&_thunk1))))) , _tmp_cp_ret5)); 483 483 ((void)(_tmp_cp_ret3) /* ^?{} */); 484 484 ((void)(_tmp_cp_ret4) /* ^?{} */); … … 490 490 struct ofstream *_tmp_cp_ret8; 491 491 __attribute__ ((unused)) struct ofstream *_thunk2(struct ofstream *_p0){ 492 return __endl__A0_1_0_0___sepPrt__PFb_Pd0___sepReset__PF_Pd0___sepReset__PF_Pd0b___sepGetCur__PFPCc_Pd0___sepSetCur__PF_Pd0PCc___getNL__PFb_Pd0___setNL__PF_Pd0b___sepOn__PF_Pd0___sepOff__PF_Pd0___sepDisable__PFb_Pd0___sepEnable__PFb_Pd0___sepGet__PFPCc_Pd0___sepSet__PF_Pd0PCc___sepGetTuple__PFPCc_Pd0___sepSetTuple__PF_Pd0PCc___fail__PFi_Pd0___flush__PFi_Pd0___open__PF_Pd0PCcPCc___close__PF_Pd0___write__PFPd0_Pd0PCcUl___fmt__PFi_Pd0PCc__FPd0_Pd0__1(((_Bool (*)(void *__anonymous_object1532))__sepPrt__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1533))__sepReset__F_P9sofstream__1), ((void (*)(void *__anonymous_object1534, _Bool __anonymous_object1535))__sepReset__F_P9sofstreamb__1), ((const char *(*)(void *__anonymous_object1536))__sepGetCur__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1537, const char *__anonymous_object1538))__sepSetCur__F_P9sofstreamPCc__1), ((_Bool (*)(void *__anonymous_object1539))__getNL__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1540, _Bool __anonymous_object1541))__setNL__F_P9sofstreamb__1), ((void (*)(void *__anonymous_object1542))__sepOn__F_P9sofstream__1), ((void (*)(void *__anonymous_object1543))__sepOff__F_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1544))__sepDisable__Fb_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1545))__sepEnable__Fb_P9sofstream__1), ((const char *(*)(void *__anonymous_object1546))__sepGet__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1547, const char *__anonymous_object1548))__sepSet__F_P9sofstreamPCc__1), ((const char *(*)(void *__anonymous_object1549))__sepGetTuple__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1550, const char *__anonymous_object1551))__sepSetTuple__F_P9sofstreamPCc__1), ((signed int (*)(void *__anonymous_object1552))__fail__Fi_P9sofstream__1), ((signed int (*)(void *__anonymous_object1553))__flush__Fi_P9sofstream__1), ((void (*)(void *__os__P7tostype_1, const char *__name__PCc_1, const char *__mode__PCc_1))__open__F_P9sofstreamPCcPCc__1), ((void (*)(void *__os__P7tostype_1))__close__F_P9sofstream__1), ((void *(*)(void *__anonymous_object1554, const char *__anonymous_object1555, unsigned long int __anonymous_object1556))__write__FP9sofstream_P9sofstreamPCcUl__1), ((signed int (*)(void *__anonymous_object1557, const char *__fmt__PCc_1, ...))__fmt__Fi_P9sofstreamPCc__1), _p0);493 } 494 
((void)(((void)(_tmp_cp_ret8=___operator_bitor__A0_1_0_0___sepPrt__PFb_Pd0___sepReset__PF_Pd0___sepReset__PF_Pd0b___sepGetCur__PFPCc_Pd0___sepSetCur__PF_Pd0PCc___getNL__PFb_Pd0___setNL__PF_Pd0b___sepOn__PF_Pd0___sepOff__PF_Pd0___sepDisable__PFb_Pd0___sepEnable__PFb_Pd0___sepGet__PFPCc_Pd0___sepSet__PF_Pd0PCc___sepGetTuple__PFPCc_Pd0___sepSetTuple__PF_Pd0PCc___fail__PFi_Pd0___flush__PFi_Pd0___open__PF_Pd0PCcPCc___close__PF_Pd0___write__PFPd0_Pd0PCcUl___fmt__PFi_Pd0PCc__FPd0_Pd0PFPd0_Pd0___1(((_Bool (*)(void *__anonymous_object1558))__sepPrt__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1559))__sepReset__F_P9sofstream__1), ((void (*)(void *__anonymous_object1560, _Bool __anonymous_object1561))__sepReset__F_P9sofstreamb__1), ((const char *(*)(void *__anonymous_object1562))__sepGetCur__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1563, const char *__anonymous_object1564))__sepSetCur__F_P9sofstreamPCc__1), ((_Bool (*)(void *__anonymous_object1565))__getNL__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1566, _Bool __anonymous_object1567))__setNL__F_P9sofstreamb__1), ((void (*)(void *__anonymous_object1568))__sepOn__F_P9sofstream__1), ((void (*)(void *__anonymous_object1569))__sepOff__F_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1570))__sepDisable__Fb_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1571))__sepEnable__Fb_P9sofstream__1), ((const char *(*)(void *__anonymous_object1572))__sepGet__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1573, const char *__anonymous_object1574))__sepSet__F_P9sofstreamPCc__1), ((const char *(*)(void *__anonymous_object1575))__sepGetTuple__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1576, const char *__anonymous_object1577))__sepSetTuple__F_P9sofstreamPCc__1), ((signed int (*)(void *__anonymous_object1578))__fail__Fi_P9sofstream__1), ((signed int (*)(void *__anonymous_object1579))__flush__Fi_P9sofstream__1), ((void (*)(void *__os__P7tostype_1, const char *__name__PCc_1, const char *__mode__PCc_1))__open__F_P9sofstreamPCcPCc__1), ((void (*)(void *__os__P7tostype_1))__close__F_P9sofstream__1), ((void *(*)(void *__anonymous_object1580, const char *__anonymous_object1581, unsigned long int __anonymous_object1582))__write__FP9sofstream_P9sofstreamPCcUl__1), ((signed int (*)(void *__anonymous_object1583, const char *__fmt__PCc_1, ...))__fmt__Fi_P9sofstreamPCc__1), (( (void)(_tmp_cp_ret7=___operator_bitor__A0_1_0_0___sepPrt__PFb_Pd0___sepReset__PF_Pd0___sepReset__PF_Pd0b___sepGetCur__PFPCc_Pd0___sepSetCur__PF_Pd0PCc___getNL__PFb_Pd0___setNL__PF_Pd0b___sepOn__PF_Pd0___sepOff__PF_Pd0___sepDisable__PFb_Pd0___sepEnable__PFb_Pd0___sepGet__PFPCc_Pd0___sepSet__PF_Pd0PCc___sepGetTuple__PFPCc_Pd0___sepSetTuple__PF_Pd0PCc___fail__PFi_Pd0___flush__PFi_Pd0___open__PF_Pd0PCcPCc___close__PF_Pd0___write__PFPd0_Pd0PCcUl___fmt__PFi_Pd0PCc__FPd0_Pd0Uc__1(((_Bool (*)(void *__anonymous_object1584))__sepPrt__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1585))__sepReset__F_P9sofstream__1), ((void (*)(void *__anonymous_object1586, _Bool __anonymous_object1587))__sepReset__F_P9sofstreamb__1), ((const char *(*)(void *__anonymous_object1588))__sepGetCur__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1589, const char *__anonymous_object1590))__sepSetCur__F_P9sofstreamPCc__1), ((_Bool (*)(void *__anonymous_object1591))__getNL__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1592, _Bool __anonymous_object1593))__setNL__F_P9sofstreamb__1), ((void (*)(void 
*__anonymous_object1594))__sepOn__F_P9sofstream__1), ((void (*)(void *__anonymous_object1595))__sepOff__F_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1596))__sepDisable__Fb_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1597))__sepEnable__Fb_P9sofstream__1), ((const char *(*)(void *__anonymous_object1598))__sepGet__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1599, const char *__anonymous_object1600))__sepSet__F_P9sofstreamPCc__1), ((const char *(*)(void *__anonymous_object1601))__sepGetTuple__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1602, const char *__anonymous_object1603))__sepSetTuple__F_P9sofstreamPCc__1), ((signed int (*)(void *__anonymous_object1604))__fail__Fi_P9sofstream__1), ((signed int (*)(void *__anonymous_object1605))__flush__Fi_P9sofstream__1), ((void (*)(void *__os__P7tostype_1, const char *__name__PCc_1, const char *__mode__PCc_1))__open__F_P9sofstreamPCcPCc__1), ((void (*)(void *__os__P7tostype_1))__close__F_P9sofstream__1), ((void *(*)(void *__anonymous_object1606, const char *__anonymous_object1607, unsigned long int __anonymous_object1608))__write__FP9sofstream_P9sofstreamPCcUl__1), ((signed int (*)(void *__anonymous_object1609, const char *__fmt__PCc_1, ...))__fmt__Fi_P9sofstreamPCc__1), (((void)(_tmp_cp_ret6=___operator_bitor__A0_1_0_0___sepPrt__PFb_Pd0___sepReset__PF_Pd0___sepReset__PF_Pd0b___sepGetCur__PFPCc_Pd0___sepSetCur__PF_Pd0PCc___getNL__PFb_Pd0___setNL__PF_Pd0b___sepOn__PF_Pd0___sepOff__PF_Pd0___sepDisable__PFb_Pd0___sepEnable__PFb_Pd0___sepGet__PFPCc_Pd0___sepSet__PF_Pd0PCc___sepGetTuple__PFPCc_Pd0___sepSetTuple__PF_Pd0PCc___fail__PFi_Pd0___flush__PFi_Pd0___open__PF_Pd0PCcPCc___close__PF_Pd0___write__PFPd0_Pd0PCcUl___fmt__PFi_Pd0PCc__FPd0_Pd0PCc__1(((_Bool (*)(void *__anonymous_object1610))__sepPrt__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1611))__sepReset__F_P9sofstream__1), ((void (*)(void *__anonymous_object1612, _Bool __anonymous_object1613))__sepReset__F_P9sofstreamb__1), ((const char *(*)(void *__anonymous_object1614))__sepGetCur__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1615, const char *__anonymous_object1616))__sepSetCur__F_P9sofstreamPCc__1), ((_Bool (*)(void *__anonymous_object1617))__getNL__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1618, _Bool __anonymous_object1619))__setNL__F_P9sofstreamb__1), ((void (*)(void *__anonymous_object1620))__sepOn__F_P9sofstream__1), ((void (*)(void *__anonymous_object1621))__sepOff__F_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1622))__sepDisable__Fb_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1623))__sepEnable__Fb_P9sofstream__1), ((const char *(*)(void *__anonymous_object1624))__sepGet__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1625, const char *__anonymous_object1626))__sepSet__F_P9sofstreamPCc__1), ((const char *(*)(void *__anonymous_object1627))__sepGetTuple__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1628, const char *__anonymous_object1629))__sepSetTuple__F_P9sofstreamPCc__1), ((signed int (*)(void *__anonymous_object1630))__fail__Fi_P9sofstream__1), ((signed int (*)(void *__anonymous_object1631))__flush__Fi_P9sofstream__1), ((void (*)(void *__os__P7tostype_1, const char *__name__PCc_1, const char *__mode__PCc_1))__open__F_P9sofstreamPCcPCc__1), ((void (*)(void *__os__P7tostype_1))__close__F_P9sofstream__1), ((void *(*)(void *__anonymous_object1632, const char *__anonymous_object1633, unsigned long int __anonymous_object1634))__write__FP9sofstream_P9sofstreamPCcUl__1), ((signed 
int (*)(void *__anonymous_object1635, const char *__fmt__PCc_1, ...))__fmt__Fi_P9sofstreamPCc__1), __sout__P9sofstream_1, "unsigned char "))) , _tmp_cp_ret6), __v__Uc_1))) , _tmp_cp_ret7), ((void *(*)(void *__anonymous_object1636))(&_thunk2))))) , _tmp_cp_ret8));492 return __endl__A0_1_0_0___sepPrt__PFb_Pd0___sepReset__PF_Pd0___sepReset__PF_Pd0b___sepGetCur__PFPCc_Pd0___sepSetCur__PF_Pd0PCc___getNL__PFb_Pd0___setNL__PF_Pd0b___sepOn__PF_Pd0___sepOff__PF_Pd0___sepDisable__PFb_Pd0___sepEnable__PFb_Pd0___sepGet__PFPCc_Pd0___sepSet__PF_Pd0PCc___sepGetTuple__PFPCc_Pd0___sepSetTuple__PF_Pd0PCc___fail__PFi_Pd0___flush__PFi_Pd0___open__PF_Pd0PCcPCc___close__PF_Pd0___write__PFPd0_Pd0PCcUl___fmt__PFi_Pd0PCc__FPd0_Pd0__1(((_Bool (*)(void *__anonymous_object1532))__sepPrt__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1533))__sepReset__F_P9sofstream__1), ((void (*)(void *__anonymous_object1534, _Bool __anonymous_object1535))__sepReset__F_P9sofstreamb__1), ((const char *(*)(void *__anonymous_object1536))__sepGetCur__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1537, const char *__anonymous_object1538))__sepSetCur__F_P9sofstreamPCc__1), ((_Bool (*)(void *__anonymous_object1539))__getNL__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1540, _Bool __anonymous_object1541))__setNL__F_P9sofstreamb__1), ((void (*)(void *__anonymous_object1542))__sepOn__F_P9sofstream__1), ((void (*)(void *__anonymous_object1543))__sepOff__F_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1544))__sepDisable__Fb_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1545))__sepEnable__Fb_P9sofstream__1), ((const char *(*)(void *__anonymous_object1546))__sepGet__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1547, const char *__anonymous_object1548))__sepSet__F_P9sofstreamPCc__1), ((const char *(*)(void *__anonymous_object1549))__sepGetTuple__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1550, const char *__anonymous_object1551))__sepSetTuple__F_P9sofstreamPCc__1), ((signed int (*)(void *__anonymous_object1552))__fail__Fi_P9sofstream__1), ((signed int (*)(void *__anonymous_object1553))__flush__Fi_P9sofstream__1), ((void (*)(void *__os__P7tostype_1, const char *__name__PCc_1, const char *__mode__PCc_1))__open__F_P9sofstreamPCcPCc__1), ((void (*)(void *__os__P7tostype_1))__close__F_P9sofstream__1), ((void *(*)(void *__anonymous_object1554, const char *__anonymous_object1555, unsigned long int __anonymous_object1556))__write__FP9sofstream_P9sofstreamPCcUl__1), ((signed int (*)(void *__anonymous_object1557, const char *__fmt__PCc_1, ...))__fmt__Fi_P9sofstreamPCc__1), ((void *)_p0)); 493 } 494 ((void)(((void)(_tmp_cp_ret8=___operator_bitor__A0_1_0_0___sepPrt__PFb_Pd0___sepReset__PF_Pd0___sepReset__PF_Pd0b___sepGetCur__PFPCc_Pd0___sepSetCur__PF_Pd0PCc___getNL__PFb_Pd0___setNL__PF_Pd0b___sepOn__PF_Pd0___sepOff__PF_Pd0___sepDisable__PFb_Pd0___sepEnable__PFb_Pd0___sepGet__PFPCc_Pd0___sepSet__PF_Pd0PCc___sepGetTuple__PFPCc_Pd0___sepSetTuple__PF_Pd0PCc___fail__PFi_Pd0___flush__PFi_Pd0___open__PF_Pd0PCcPCc___close__PF_Pd0___write__PFPd0_Pd0PCcUl___fmt__PFi_Pd0PCc__FPd0_Pd0PFPd0_Pd0___1(((_Bool (*)(void *__anonymous_object1558))__sepPrt__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1559))__sepReset__F_P9sofstream__1), ((void (*)(void *__anonymous_object1560, _Bool __anonymous_object1561))__sepReset__F_P9sofstreamb__1), ((const char *(*)(void *__anonymous_object1562))__sepGetCur__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1563, const char 
*__anonymous_object1564))__sepSetCur__F_P9sofstreamPCc__1), ((_Bool (*)(void *__anonymous_object1565))__getNL__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1566, _Bool __anonymous_object1567))__setNL__F_P9sofstreamb__1), ((void (*)(void *__anonymous_object1568))__sepOn__F_P9sofstream__1), ((void (*)(void *__anonymous_object1569))__sepOff__F_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1570))__sepDisable__Fb_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1571))__sepEnable__Fb_P9sofstream__1), ((const char *(*)(void *__anonymous_object1572))__sepGet__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1573, const char *__anonymous_object1574))__sepSet__F_P9sofstreamPCc__1), ((const char *(*)(void *__anonymous_object1575))__sepGetTuple__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1576, const char *__anonymous_object1577))__sepSetTuple__F_P9sofstreamPCc__1), ((signed int (*)(void *__anonymous_object1578))__fail__Fi_P9sofstream__1), ((signed int (*)(void *__anonymous_object1579))__flush__Fi_P9sofstream__1), ((void (*)(void *__os__P7tostype_1, const char *__name__PCc_1, const char *__mode__PCc_1))__open__F_P9sofstreamPCcPCc__1), ((void (*)(void *__os__P7tostype_1))__close__F_P9sofstream__1), ((void *(*)(void *__anonymous_object1580, const char *__anonymous_object1581, unsigned long int __anonymous_object1582))__write__FP9sofstream_P9sofstreamPCcUl__1), ((signed int (*)(void *__anonymous_object1583, const char *__fmt__PCc_1, ...))__fmt__Fi_P9sofstreamPCc__1), ((void *)(((void)(_tmp_cp_ret7=___operator_bitor__A0_1_0_0___sepPrt__PFb_Pd0___sepReset__PF_Pd0___sepReset__PF_Pd0b___sepGetCur__PFPCc_Pd0___sepSetCur__PF_Pd0PCc___getNL__PFb_Pd0___setNL__PF_Pd0b___sepOn__PF_Pd0___sepOff__PF_Pd0___sepDisable__PFb_Pd0___sepEnable__PFb_Pd0___sepGet__PFPCc_Pd0___sepSet__PF_Pd0PCc___sepGetTuple__PFPCc_Pd0___sepSetTuple__PF_Pd0PCc___fail__PFi_Pd0___flush__PFi_Pd0___open__PF_Pd0PCcPCc___close__PF_Pd0___write__PFPd0_Pd0PCcUl___fmt__PFi_Pd0PCc__FPd0_Pd0Uc__1(((_Bool (*)(void *__anonymous_object1584))__sepPrt__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1585))__sepReset__F_P9sofstream__1), ((void (*)(void *__anonymous_object1586, _Bool __anonymous_object1587))__sepReset__F_P9sofstreamb__1), ((const char *(*)(void *__anonymous_object1588))__sepGetCur__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1589, const char *__anonymous_object1590))__sepSetCur__F_P9sofstreamPCc__1), ((_Bool (*)(void *__anonymous_object1591))__getNL__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1592, _Bool __anonymous_object1593))__setNL__F_P9sofstreamb__1), ((void (*)(void *__anonymous_object1594))__sepOn__F_P9sofstream__1), ((void (*)(void *__anonymous_object1595))__sepOff__F_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1596))__sepDisable__Fb_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1597))__sepEnable__Fb_P9sofstream__1), ((const char *(*)(void *__anonymous_object1598))__sepGet__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1599, const char *__anonymous_object1600))__sepSet__F_P9sofstreamPCc__1), ((const char *(*)(void *__anonymous_object1601))__sepGetTuple__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1602, const char *__anonymous_object1603))__sepSetTuple__F_P9sofstreamPCc__1), ((signed int (*)(void *__anonymous_object1604))__fail__Fi_P9sofstream__1), ((signed int (*)(void *__anonymous_object1605))__flush__Fi_P9sofstream__1), ((void (*)(void *__os__P7tostype_1, const char *__name__PCc_1, const char 
*__mode__PCc_1))__open__F_P9sofstreamPCcPCc__1), ((void (*)(void *__os__P7tostype_1))__close__F_P9sofstream__1), ((void *(*)(void *__anonymous_object1606, const char *__anonymous_object1607, unsigned long int __anonymous_object1608))__write__FP9sofstream_P9sofstreamPCcUl__1), ((signed int (*)(void *__anonymous_object1609, const char *__fmt__PCc_1, ...))__fmt__Fi_P9sofstreamPCc__1), ((void *)(((void)(_tmp_cp_ret6=___operator_bitor__A0_1_0_0___sepPrt__PFb_Pd0___sepReset__PF_Pd0___sepReset__PF_Pd0b___sepGetCur__PFPCc_Pd0___sepSetCur__PF_Pd0PCc___getNL__PFb_Pd0___setNL__PF_Pd0b___sepOn__PF_Pd0___sepOff__PF_Pd0___sepDisable__PFb_Pd0___sepEnable__PFb_Pd0___sepGet__PFPCc_Pd0___sepSet__PF_Pd0PCc___sepGetTuple__PFPCc_Pd0___sepSetTuple__PF_Pd0PCc___fail__PFi_Pd0___flush__PFi_Pd0___open__PF_Pd0PCcPCc___close__PF_Pd0___write__PFPd0_Pd0PCcUl___fmt__PFi_Pd0PCc__FPd0_Pd0PCc__1(((_Bool (*)(void *__anonymous_object1610))__sepPrt__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1611))__sepReset__F_P9sofstream__1), ((void (*)(void *__anonymous_object1612, _Bool __anonymous_object1613))__sepReset__F_P9sofstreamb__1), ((const char *(*)(void *__anonymous_object1614))__sepGetCur__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1615, const char *__anonymous_object1616))__sepSetCur__F_P9sofstreamPCc__1), ((_Bool (*)(void *__anonymous_object1617))__getNL__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1618, _Bool __anonymous_object1619))__setNL__F_P9sofstreamb__1), ((void (*)(void *__anonymous_object1620))__sepOn__F_P9sofstream__1), ((void (*)(void *__anonymous_object1621))__sepOff__F_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1622))__sepDisable__Fb_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1623))__sepEnable__Fb_P9sofstream__1), ((const char *(*)(void *__anonymous_object1624))__sepGet__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1625, const char *__anonymous_object1626))__sepSet__F_P9sofstreamPCc__1), ((const char *(*)(void *__anonymous_object1627))__sepGetTuple__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1628, const char *__anonymous_object1629))__sepSetTuple__F_P9sofstreamPCc__1), ((signed int (*)(void *__anonymous_object1630))__fail__Fi_P9sofstream__1), ((signed int (*)(void *__anonymous_object1631))__flush__Fi_P9sofstream__1), ((void (*)(void *__os__P7tostype_1, const char *__name__PCc_1, const char *__mode__PCc_1))__open__F_P9sofstreamPCcPCc__1), ((void (*)(void *__os__P7tostype_1))__close__F_P9sofstream__1), ((void *(*)(void *__anonymous_object1632, const char *__anonymous_object1633, unsigned long int __anonymous_object1634))__write__FP9sofstream_P9sofstreamPCcUl__1), ((signed int (*)(void *__anonymous_object1635, const char *__fmt__PCc_1, ...))__fmt__Fi_P9sofstreamPCc__1), ((void *)__sout__P9sofstream_1), "unsigned char "))) , _tmp_cp_ret6)), __v__Uc_1))) , _tmp_cp_ret7)), ((void *(*)(void *__anonymous_object1636))(&_thunk2))))) , _tmp_cp_ret8)); 495 495 ((void)(_tmp_cp_ret6) /* ^?{} */); 496 496 ((void)(_tmp_cp_ret7) /* ^?{} */); … … 502 502 struct ofstream *_tmp_cp_ret11; 503 503 __attribute__ ((unused)) struct ofstream *_thunk3(struct ofstream *_p0){ 504 return 
[Machine-generated C hunks, roughly lines 504–532 of the generated output: three parallel blocks covering the "signed short int", "unsigned short int", and "size_t" cases. Each block defines a helper thunk (_thunk3, _thunk4, _thunk5) that forwards to the specialised __endl__… routine, chains the mangled ___operator_bitor__… calls through the temporaries _tmp_cp_ret9 – _tmp_cp_ret17, and ends with the usual ((void)(_tmp_cp_retN) /* ^?{} */) destructor markers. The only difference between r136ccd7 and r4ee36bf0 is that polymorphic arguments now carry explicit (void *) casts — e.g. _p0 becomes ((void *)_p0) and __sout__P9sofstream_1 becomes ((void *)__sout__P9sofstream_1) — while the surrounding declarations, the thunks, and the destructor markers are otherwise unchanged.]
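The pattern visible in these hunks is the lowering of a polymorphic stream expression: polymorphic operands travel through the generic routines as void *, and a locally defined thunk adapts a specialised routine (here endl) to the generic function-pointer type that operator| expects; this revision simply makes the void * conversions explicit casts. The sketch below is a hand-written illustration of that shape, not the actual generated code; the names ostream_t, write_int, endl_spec, pipe_apply, and example are hypothetical stand-ins for the mangled identifiers above.

    #include <stdio.h>

    // Illustrative sketch only (assumed names, simplified shape).
    typedef struct { int fd; } ostream_t;                     // stand-in stream type

    static ostream_t * write_int( ostream_t * os, int v ) {   // specialised "os | v"
        printf( "%d", v );  return os;
    }
    static ostream_t * endl_spec( ostream_t * os ) {          // specialised "endl"
        printf( "\n" );     return os;
    }

    // generic "os | manip": the polymorphic operand and result travel as void *
    static void * pipe_apply( void * os, void * (*manip)( void * ) ) {
        return manip( os );
    }

    static ostream_t * example( ostream_t * sout, int v ) {
        // thunk (a nested function, as in the generated code): adapts the typed
        // endl_spec to the generic void *(*)(void *) signature
        void * endl_thunk( void * os ) { return (void *)endl_spec( (ostream_t *)os ); }
        // the new revision makes the (void *) casts on polymorphic arguments explicit
        return (ostream_t *)pipe_apply( (void *)write_int( sout, v ), endl_thunk );
    }

    int main( void ) {
        ostream_t out = { 1 };
        example( &out, 42 );                                   // prints "42" and a newline
        return 0;
    }

As in the generated output, the thunk is a nested function (a GNU C extension), which is why its address can be passed where a generic function pointer is expected.
-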
src/tests/boundedBuffer.c
r136ccd7 r4ee36bf0   (header-comment lines 1, 4–6, and 12 are marked changed but their visible text is identical — a whitespace-only cleanup)
 1   1   //
 2   2   // The contents of this file are covered under the licence agreement in the
 3   3   // file "LICENCE" distributed with Cforall.
 4   4   //
 5   5   // boundedBuffer.c --
 6   6   //
 7   7   // Author           : Peter A. Buhr
 8   8   // Created On       : Mon Oct 30 12:45:13 2017
 …
 10  10  // Last Modified On : Mon Oct 30 23:02:46 2017
 11  11  // Update Count     : 9
 12  12  //
 13  13
 14  14  #include <stdlib>
 …
 31  31
 32  32  void insert( Buffer & mutex buffer, int elem ) {
 33           if ( buffer.count == 20 ) wait( &buffer.empty );
     33       if ( buffer.count == 20 ) wait( buffer.empty );
 34  34       buffer.elements[buffer.back] = elem;
 35  35       buffer.back = ( buffer.back + 1 ) % 20;
 36  36       buffer.count += 1;
 37           signal( &buffer.full );
     37       signal( buffer.full );
 38  38  }
 39  39  int remove( Buffer & mutex buffer ) {
 40           if ( buffer.count == 0 ) wait( &buffer.full );
     40       if ( buffer.count == 0 ) wait( buffer.full );
 41  41       int elem = buffer.elements[buffer.front];
 42  42       buffer.front = ( buffer.front + 1 ) % 20;
 43  43       buffer.count -= 1;
 44           signal( &buffer.empty );
     44       signal( buffer.empty );
 45  45       return elem;
 46  46  }
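The substantive change in this test is the condition-variable interface: wait and signal now take the condition by reference instead of by address (wait( buffer.empty ) rather than wait( &buffer.empty )). The sketch below shows the updated call style; the Buffer declaration itself is elided from the hunk above, so the monitor, its fields (Box, ready, value, full), and the <monitor> include here are illustrative assumptions, not the test's actual declarations.

    #include <monitor>                      // assumed header for monitor/condition support

    monitor Box {                           // illustrative monitor, not the elided Buffer
        condition ready;                    // a consumer blocks here until a value is stored
        int value, full;
    };

    void put( Box & mutex b, int v ) {      // mutex parameter: monitor lock held for the call
        b.value = v;  b.full = 1;
        signal( b.ready );                  // by reference, as in r4ee36bf0 (was: signal( &b.ready ))
    }

    int get( Box & mutex b ) {
        if ( ! b.full ) wait( b.ready );    // by reference, as in r4ee36bf0 (was: wait( &b.ready ))
        b.full = 0;
        return b.value;
    }

A driver would simply declare Box b; and call put( b, 42 ) and get( b ) from different threads; only the argument form of the condition operations changes, not their semantics.
-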
src/tests/datingService.c
r136ccd7 r4ee36bf0   (header-comment lines 1–2, 5–7, and 13 are marked changed but their visible text is identical — a whitespace-only cleanup)
 1   1   // -*- Mode: C -*-
 2   2   //
 3   3   // The contents of this file are covered under the licence agreement in the
 4   4   // file "LICENCE" distributed with Cforall.
 5   5   //
 6   6   // datingService.c --
 7   7   //
 8   8   // Author           : Peter A. Buhr
 9   9   // Created On       : Mon Oct 30 12:56:20 2017
 …
 11  11  // Last Modified On : Mon Oct 30 23:02:11 2017
 12  12  // Update Count     : 15
 13  13  //
 14  14
 15  15  #include <stdlib>               // random
 …
 18  18  #include <thread>
 19  19  #include <unistd.h>             // getpid
 20