Changeset 4ee36bf0


Timestamp:
Nov 8, 2017, 2:50:35 PM (7 years ago)
Author:
Rob Schluntz <rschlunt@…>
Branches:
ADT, arm-eh, ast-experimental, cleanup-dtors, enum, forall-pointer-decay, jacob/cs343-translation, jenkins-sandbox, master, new-ast, new-ast-unique-expr, pthread-emulation, qualifiedEnum
Children:
049ead9
Parents:
136ccd7 (diff), e35f30a (diff)
Note: this is a merge changeset, the changes displayed below correspond to the merge itself.
Use the (diff) links above to see all the changes relative to each parent.
Message:

Merge branch 'master' into cleanup-dtors

Files:
2 added
1 deleted
40 edited

  • doc/proposals/concurrency/.gitignore

    r136ccd7 r4ee36bf0  
    1616build/*.out
    1717build/*.ps
     18build/*.pstex
     19build/*.pstex_t
    1820build/*.tex
    1921build/*.toc
  • doc/proposals/concurrency/Makefile

    r136ccd7 r4ee36bf0  
    1313annex/glossary \
    1414text/intro \
     15text/basics \
    1516text/cforall \
    16 text/basics \
    1717text/concurrency \
    1818text/internals \
    1919text/parallelism \
     20text/results \
    2021text/together \
    2122text/future \
     
    2930}}
    3031
    31 PICTURES = ${addsuffix .pstex, \
    32 }
     32PICTURES = ${addprefix build/, ${addsuffix .pstex, \
     33        system \
     34}}
    3335
    3436PROGRAMS = ${addsuffix .tex, \
     
    6769        build/*.out     \
    6870        build/*.ps      \
     71        build/*.pstex   \
    6972        build/*.pstex_t \
    7073        build/*.tex     \
  • doc/proposals/concurrency/style/cfa-format.tex

    r136ccd7 r4ee36bf0  
    254254}{}
    255255
     256\lstnewenvironment{gocode}[1][]{
     257  \lstset{
     258    language = Golang,
     259    style=defaultStyle,
     260    #1
     261  }
     262}{}
     263
    256264\newcommand{\zero}{\lstinline{zero_t}\xspace}
    257265\newcommand{\one}{\lstinline{one_t}\xspace}
  • doc/proposals/concurrency/text/basics.tex

    r136ccd7 r4ee36bf0  
    99At its core, concurrency is based on having multiple call-stacks and scheduling among threads of execution executing on these stacks. Concurrency without parallelism only requires having multiple call stacks (or contexts) for a single thread of execution.
    1010
    11 Indeed, while execution with a single thread and multiple stacks where the thread is self-scheduling deterministically across the stacks is called coroutining, execution with a single and multiple stacks but where the thread is scheduled by an oracle (non-deterministic from the thread perspective) across the stacks is called concurrency.
    12 
    13 Therefore, a minimal concurrency system can be achieved by creating coroutines, which instead of context switching among each other, always ask an oracle where to context switch next. While coroutines can execute on the caller's stack-frame, stackfull coroutines allow full generality and are sufficient as the basis for concurrency. The aforementioned oracle is a scheduler and the whole system now follows a cooperative threading-model \cit. The oracle/scheduler can either be a stackless or stackfull entity and correspondingly require one or two context switches to run a different coroutine. In any case, a subset of concurrency related challenges start to appear. For the complete set of concurrency challenges to occur, the only feature missing is preemption. Indeed, concurrency challenges appear with non-determinism. Using mutual-exclusion or synchronisation are ways of limiting the lack of determinism in a system. A scheduler introduces order of execution uncertainty, while preemption introduces uncertainty about where context-switches occur. Now it is important to understand that uncertainty is not undesireable; uncertainty can often be used by systems to significantly increase performance and is often the basis of giving a user the illusion that tasks are running in parallel. Optimal performance in concurrent applications is often obtained by having as much non-determinism as correctness allows\cit.
     11Execution with a single thread and multiple stacks, where the thread is self-scheduling deterministically across the stacks, is called coroutining. Execution with a single thread and multiple stacks, but where the thread is scheduled by an oracle (non-deterministic from the thread's perspective) across the stacks, is called concurrency.
     12
     13Therefore, a minimal concurrency system can be achieved by creating coroutines, which, instead of context switching among each other, always ask an oracle where to context switch next. While coroutines can execute on the caller's stack-frame, stackful coroutines allow full generality and are sufficient as the basis for concurrency. The aforementioned oracle is a scheduler and the whole system now follows a cooperative threading-model \cit. The oracle/scheduler can either be a stackless or stackful entity and correspondingly require one or two context switches to run a different coroutine. In any case, a subset of concurrency-related challenges start to appear. For the complete set of concurrency challenges to occur, the only feature missing is preemption.
     14
     15A scheduler introduces order-of-execution uncertainty, while preemption introduces uncertainty about where context-switches occur. Mutual-exclusion and synchronisation are ways of limiting non-determinism in a concurrent system. It is important to understand that uncertainty is desirable; uncertainty can be used by runtime systems to significantly increase performance and is often the basis of giving a user the illusion that tasks are running in parallel. Optimal performance in concurrent applications is often obtained by having as much non-determinism as correctness allows\cit.
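For readers more familiar with C, the following sketch (using pthreads; it is purely illustrative and not part of the proposal) shows the distinction concretely: the interleaving of the two threads is left entirely to the scheduler, yet mutual-exclusion bounds the non-determinism enough to guarantee a deterministic final count.
\begin{ccode}[tabsize=2]
#include <pthread.h>
#include <stdio.h>

static long counter = 0;
static pthread_mutex_t counter_lock = PTHREAD_MUTEX_INITIALIZER;

//each thread increments the shared counter; the schedule is
//non-deterministic, but the lock makes each increment atomic
static void * work(void * arg) {
	(void)arg;
	for (int i = 0; i < 100000; i++) {
		pthread_mutex_lock(&counter_lock);
		counter++;
		pthread_mutex_unlock(&counter_lock);
	}
	return NULL;
}

int main() {
	pthread_t t1, t2;
	pthread_create(&t1, NULL, work, NULL);
	pthread_create(&t2, NULL, work, NULL);
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);
	printf("%ld\n", counter); //always 200000, despite scheduling uncertainty
	return 0;
}
\end{ccode}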
    1416
    1517\section{\protect\CFA 's Thread Building Blocks}
    16 One of the important features that is missing in C is threading. On modern architectures, a lack of threading is unacceptable\cite{Sutter05, Sutter05b}, and therefore modern programming languages must have the proper tools to allow users to write performant concurrent and/or parallel programs. As an extension of C, \CFA needs to express these concepts in a way that is as natural as possible to programmers familiar with imperative languages. And being a system-level language means programmers expect to choose precisely which features they need and which cost they are willing to pay.
     18One of the important features that is missing in C is threading. On modern architectures, a lack of threading is unacceptable\cite{Sutter05, Sutter05b}, and therefore modern programming languages must have the proper tools to allow users to write performant concurrent programs that take advantage of parallelism. As an extension of C, \CFA needs to express these concepts in a way that is as natural as possible to programmers familiar with imperative languages. Being a system-level language also means programmers expect to choose precisely which features they need and which costs they are willing to pay.
    1719
    1820\section{Coroutines: A stepping stone}\label{coroutine}
    19 While the main focus of this proposal is concurrency and parallelism, it is important to address coroutines, which are actually a significant building block of a concurrency system. Coroutines need to deal with context-switchs and other context-management operations. Therefore, this proposal includes coroutines both as an intermediate step for the implementation of threads, and a first class feature of \CFA. Furthermore, many design challenges of threads are at least partially present in designing coroutines, which makes the design effort that much more relevant. The core \acrshort{api} of coroutines revolve around two features: independent call stacks and \code{suspend}/\code{resume}.
    20 
    21 A good example of a problem made easier with coroutines is genereting the fibonacci sequence. This problem comes with the challenge of decoupling how a sequence is generated and how it is used. Figure \ref{fig:fibonacci-c} shows conventional approaches to writing generators in C. All three of these approach suffer from strong coupling. The left and center approaches require that the generator have knowledge of how the sequence will be used, while the rightmost approach requires to user to hold internal state between calls on behalf of th sequence generator and makes it much harder to handle corner cases like the Fibonacci seed.
     21While the main focus of this proposal is concurrency and parallelism, it is important to address coroutines, which are actually a significant building block of a concurrency system. Coroutines need to deal with context-switches and other context-management operations. Therefore, this proposal includes coroutines both as an intermediate step for the implementation of threads, and as a first-class feature of \CFA. Furthermore, many design challenges of threads are at least partially present in designing coroutines, which makes the design effort that much more relevant. The core \acrshort{api} of coroutines revolves around two features: independent call stacks and \code{suspend}/\code{resume}.
     22
    2223\begin{figure}
    23 \label{fig:fibonacci-c}
    2424\begin{center}
    2525\begin{tabular}{c @{\hskip 0.025in}|@{\hskip 0.025in} c @{\hskip 0.025in}|@{\hskip 0.025in} c}
     
    4545        }
    4646}
     47
     48int main() {
     49        void print_fib(int n) {
     50                printf("%d\n", n);
     51        }
     52
     53        fibonacci_func(
     54                10, print_fib
     55        );
     56
     57
     58
     59}
    4760\end{ccode}&\begin{ccode}[tabsize=2]
    4861//Using output array
     
    6275                        f2 = next;
    6376                }
    64                 *array = next;
    65                 array++;
    66         }
     77                array[i] = next;
     78        }
     79}
     80
     81
     82int main() {
     83        int a[10];
     84
     85        fibonacci_func(
     86                10, a
     87        );
     88
     89        for(int i=0;i<10;i++){
     90                printf("%d\n", a[i]);
     91        }
     92
    6793}
    6894\end{ccode}&\begin{ccode}[tabsize=2]
     
    7096typedef struct {
    7197        int f1, f2;
    72 } iterator_t;
     98} Iterator_t;
    7399
    74100int fibonacci_state(
    75         iterator_t * it
     101        Iterator_t * it
    76102) {
    77103        int f;
    78104        f = it->f1 + it->f2;
    79105        it->f2 = it->f1;
    80         it->f1 = f;
     106        it->f1 = max(f,1);
    81107        return f;
    82108}
     
    87113
    88114
     115
     116int main() {
     117        Iterator_t it={0,0};
     118
     119        for(int i=0;i<10;i++){
     120                printf("%d\n",
     121                        fibonacci_state(
     122                                &it
     123                        )
     124                );
     125        }
     126
     127}
    89128\end{ccode}
    90129\end{tabular}
    91130\end{center}
    92131\caption{Different implementations of a fibonacci sequence generator in C.}
     132\label{lst:fibonacci-c}
    93133\end{figure}
    94134
    95 
    96 Figure \ref{fig:fibonacci-cfa} is an example of a solution to the fibonnaci problem using \CFA coroutines, using the coroutine stack to hold sufficient state for the generation. This solution has the advantage of having very strong decoupling between how the sequence is generated and how it is used. Indeed, this version is a easy to use as the \code{fibonacci_state} solution, while the imlpementation is very similar to the \code{fibonacci_func} example.
     135A good example of a problem made easier with coroutines is generators, like the Fibonacci sequence. This problem comes with the challenge of decoupling how a sequence is generated and how it is used. Figure \ref{lst:fibonacci-c} shows conventional approaches to writing generators in C. All three of these approaches suffer from strong coupling. The left and center approaches require that the generator have knowledge of how the sequence is used, while the rightmost approach requires holding internal state between calls on behalf of the generator and makes it much harder to handle corner cases like the Fibonacci seed.
     136
     137Figure \ref{lst:fibonacci-cfa} is an example of a solution to the Fibonacci problem using \CFA coroutines, where the coroutine stack holds sufficient state for the generation. This solution has the advantage of having very strong decoupling between how the sequence is generated and how it is used. Indeed, this version is as easy to use as the \code{fibonacci_state} solution, while the implementation is very similar to the \code{fibonacci_func} example.
    97138
    98139\begin{figure}
    99 \label{fig:fibonacci-cfa}
    100140\begin{cfacode}
    101141coroutine Fibonacci {
     
    108148
    109149//main automatically called on first resume
    110 void main(Fibonacci & this) {
     150void main(Fibonacci & this) with (this) {
    111151        int fn1, fn2;           //retained between resumes
    112         this.fn = 0;
    113         fn1 = this.fn;
     152        fn = 0;
     153        fn1 = fn;
    114154        suspend(this);          //return to last resume
    115155
    116         this.fn = 1;
     156        fn = 1;
    117157        fn2 = fn1;
    118         fn1 = this.fn;
     158        fn1 = fn;
    119159        suspend(this);          //return to last resume
    120160
    121161        for ( ;; ) {
    122                 this.fn = fn1 + fn2;
     162                fn = fn1 + fn2;
    123163                fn2 = fn1;
    124                 fn1 = this.fn;
     164                fn1 = fn;
    125165                suspend(this);  //return to last resume
    126166        }
     
    140180\end{cfacode}
    141181\caption{Implementation of fibonacci using coroutines}
     182\label{lst:fibonacci-cfa}
    142183\end{figure}
    143184
    144 \subsection{Construction}
    145 One important design challenge for coroutines and threads (shown in section \ref{threads}) is that the runtime system needs to run code after the user-constructor runs to connect the object into the system. In the case of coroutines, this challenge is simpler since there is no non-determinism from preemption or scheduling. However, the underlying challenge remains the same for coroutines and threads.
    146 
    147 The runtime system needs to create the coroutine's stack and more importantly prepare it for the first resumption. The timing of the creation is non-trivial since users both expect to have fully constructed objects once execution enters the coroutine main and to be able to resume the coroutine from the constructor. As regular objects, constructors can leak coroutines before they are ready. There are several solutions to this problem but the chosen options effectively forces the design of the coroutine.
    148 
    149 Furthermore, \CFA faces an extra challenge as polymorphic routines create invisible thunks when casted to non-polymorphic routines and these thunks have function scope. For example, the following code, while looking benign, can run into undefined behaviour because of thunks:
    150 
    151 \begin{cfacode}
    152 //async: Runs function asynchronously on another thread
    153 forall(otype T)
    154 extern void async(void (*func)(T*), T* obj);
    155 
    156 forall(otype T)
    157 void noop(T *) {}
    158 
    159 void bar() {
    160         int a;
    161         async(noop, &a);
    162 }
    163 \end{cfacode}
    164 
    165 The generated C code\footnote{Code trimmed down for brevity} creates a local thunk to hold type information:
    166 
    167 \begin{ccode}
    168 extern void async(/* omitted */, void (*func)(void *), void *obj);
    169 
    170 void noop(/* omitted */, void *obj){}
    171 
    172 void bar(){
    173         int a;
    174         void _thunk0(int *_p0){
    175                 /* omitted */
    176                 noop(/* omitted */, _p0);
    177         }
    178         /* omitted */
    179         async(/* omitted */, ((void (*)(void *))(&_thunk0)), (&a));
    180 }
    181 \end{ccode}
    182 The problem in this example is a storage management issue, the function pointer \code{_thunk0} is only valid until the end of the block. This extra challenge limits which solutions are viable because storing the function pointer for too long causes undefined behavior; i.e. the stack based thunk being destroyed before it was used. This challenge is an extension of challenges that come with second-class routines. Indeed, GCC nested routines also have the limitation that the routines cannot be passed outside of the scope of the functions these were declared in. The case of coroutines and threads is simply an extension of this problem to multiple call-stacks.
    183 
    184 \subsection{Alternative: Composition}
    185 One solution to this challenge is to use composition/containement, where uses add insert a coroutine field which contains the necessary information to manage the coroutine.
    186 
    187 \begin{cfacode}
    188 struct Fibonacci {
    189         int fn; //used for communication
    190         coroutine c; //composition
    191 };
    192 
    193 void ?{}(Fibonacci & this) {
    194         this.fn = 0;
    195         (this.c){}; //Call constructor to initialize coroutine
    196 }
    197 \end{cfacode}
    198 There are two downsides to this approach. The first, which is relatively minor, made aware of the main routine pointer. This information must either be store in the coroutine runtime data or in its static type structure. When using composition, all coroutine handles have the same static type structure which means the pointer to the main needs to be part of the runtime data. This requirement means the coroutine data must be made larger to store a value that is actually a compile time constant (address of the main routine). The second problem, which is both subtle and significant, is that now users can get the initialisation order of coroutines wrong. Indeed, every field of a \CFA struct is constructed but in declaration order, unless users explicitly write otherwise. This semantics means that users who forget to initialize the coroutine handle may resume the coroutine with an uninitilized object. For coroutines, this is unlikely to be a problem, for threads however, this is a significant problem. Figure \ref{fig:fmt-line} shows the \code{Format} coroutine which rearranges text in order to group characters into blocks of fixed size. This is a good example where the control flow is made much simpler from being able to resume the coroutine from the constructor and highlights the idea that interesting control flow can occor in the constructor.
     185Figure \ref{lst:fmt-line} shows the \code{Format} coroutine which rearranges text in order to group characters into blocks of fixed size. The example takes advantage of resuming coroutines in the constructor to simplify the code and highlights the idea that interesting control flow can occur in the constructor.
     186
    199187\begin{figure}
    200 \label{fig:fmt-line}
    201188\begin{cfacode}[tabsize=3]
    202189//format characters into blocks of 4 and groups of 5 blocks per line
     
    244231\end{cfacode}
    245232\caption{Formatting text into lines of 5 blocks of 4 characters.}
     233\label{lst:fmt-line}
    246234\end{figure}
    247235
     236\subsection{Construction}
     237One important design challenge for coroutines and threads (shown in section \ref{threads}) is that the runtime system needs to run code after the user-constructor runs to connect the fully constructed object into the system. In the case of coroutines, this challenge is simpler since there is no non-determinism from preemption or scheduling. However, the underlying challenge remains the same for coroutines and threads.
     238
     239The runtime system needs to create the coroutine's stack and, more importantly, prepare it for the first resumption. The timing of the creation is non-trivial since users both expect to have fully constructed objects once execution enters the coroutine main and to be able to resume the coroutine from the constructor. As with regular objects, constructors can leak coroutines before they are ready. There are several solutions to this problem, but the chosen option effectively forces the design of the coroutine.
     240
     241Furthermore, \CFA faces an extra challenge as polymorphic routines create invisible thunks when cast to non-polymorphic routines, and these thunks have function scope. For example, the following code, while looking benign, can run into undefined behaviour because of thunks:
     242
     243\begin{cfacode}
     244//async: Runs function asynchronously on another thread
     245forall(otype T)
     246extern void async(void (*func)(T*), T* obj);
     247
     248forall(otype T)
     249void noop(T*) {}
     250
     251void bar() {
     252        int a;
     253        async(noop, &a); //start thread running noop with argument a
     254}
     255\end{cfacode}
     256
     257The generated C code\footnote{Code trimmed down for brevity} creates a local thunk to hold type information:
     258
     259\begin{ccode}
     260extern void async(/* omitted */, void (*func)(void *), void *obj);
     261
     262void noop(/* omitted */, void *obj){}
     263
     264void bar(){
     265        int a;
     266        void _thunk0(int *_p0){
     267                /* omitted */
     268                noop(/* omitted */, _p0);
     269        }
     270        /* omitted */
     271        async(/* omitted */, ((void (*)(void *))(&_thunk0)), (&a));
     272}
     273\end{ccode}
     274The problem in this example is a storage management issue: the function pointer \code{_thunk0} is only valid until the end of the block, which limits the viable solutions because storing the function pointer for too long causes undefined behavior; i.e., the stack-based thunk being destroyed before it can be used. This challenge is an extension of challenges that come with second-class routines. Indeed, GCC nested routines also have the limitation that a nested routine cannot be passed outside of the declaration scope. The case of coroutines and threads is simply an extension of this problem to multiple call-stacks.
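The same lifetime limitation can be reproduced directly with GCC nested routines. The following sketch (GNU C only; illustrative and not part of the proposal) lets the address of a nested routine escape its enclosing frame, which is undefined behaviour for the same reason as the generated thunk above:
\begin{ccode}[tabsize=2]
#include <stdio.h>

typedef void (*callback_t)(void);

static callback_t saved;

void make_callback(int x) {
	void print_x(void) {  //GCC nested routine, lives on make_callback's frame
		printf("%d\n", x);
	}
	saved = print_x;      //pointer escapes the enclosing scope
	saved();              //fine : the frame is still alive
}

int main(void) {
	make_callback(42);
	saved();              //undefined behaviour : the frame is gone
	return 0;
}
\end{ccode}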
     275
     276\subsection{Alternative: Composition}
     277One solution to this challenge is to use composition/containment, where a coroutine field is added to the type and contains the information needed to manage the coroutine.
     278
     279\begin{cfacode}
     280struct Fibonacci {
     281        int fn; //used for communication
     282        coroutine c; //composition
     283};
     284
     285void FibMain(void *) {
     286        //...
     287}
     288
     289void ?{}(Fibonacci & this) {
     290        this.fn = 0;
     291        //Call constructor to initialize coroutine
     292        (this.c){FibMain};
     293}
     294\end{cfacode}
     295The downside of this approach is that users need to correctly construct the coroutine handle before using it. As with any other object, doing so requires users to carefully choose the construction order to prevent usage of unconstructed objects. However, in the case of coroutines, users must also pass to the coroutine information about the coroutine main, as in the previous example. This opens the door to user errors and requires extra runtime storage to pass, at runtime, information that could be known statically.
    248296
    249297\subsection{Alternative: Reserved keyword}
     
    255303};
    256304\end{cfacode}
    257 This mean the compiler can solve problems by injecting code where needed. The downside of this approach is that it makes coroutine a special case in the language. Users who would want to extend coroutines or build their own for various reasons can only do so in ways offered by the language. Furthermore, implementing coroutines without language supports also displays the power of the programming language used. While this is ultimately the option used for idiomatic \CFA code, coroutines and threads can both be constructed by users without using the language support. The reserved keywords are only present to improve ease of use for the common cases.
     305The \code{coroutine} keyword means the compiler can find and inject code where needed. The downside of this approach is that it makes coroutines a special case in the language. Users wanting to extend coroutines or build their own for various reasons can only do so in ways offered by the language. Furthermore, implementing coroutines without language support also displays the power of the programming language used. While this is ultimately the option used for idiomatic \CFA code, coroutines and threads can still be constructed by users without using the language support. The reserved keywords are only present to improve ease of use for the common cases.
    258306
    259307\subsection{Alternative: Lambda Objects}
     
    268316Often, the canonical threading paradigm in languages is based on function pointers, pthread being one of the most well known examples. The main problem of this approach is that the thread usage is limited to a generic handle that must otherwise be wrapped in a custom type. Since the custom type is simple to write in \CFA and solves several issues, added support for routine/lambda based coroutines adds very little.
    269317
    270 A variation of this would be to use an simple function pointer in the same way pthread does for threads :
     318A variation of this would be to use a simple function pointer in the same way pthread does for threads :
    271319\begin{cfacode}
    272320void foo( coroutine_t cid, void * arg ) {
     
    281329}
    282330\end{cfacode}
    283 This semantic is more common for thread interfaces than coroutines but would work equally well. As discussed in section \ref{threads}, this approach is superseeded by static approaches in terms of expressivity.
     331This semantics is more common for thread interfaces than for coroutines, but works equally well. As discussed in section \ref{threads}, this approach is superseded by static approaches in terms of expressivity.
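For comparison, the following sketch shows essentially the same interface in C using pthreads (illustrative only; error handling omitted): the thread body is a \code{void * (*)(void *)} and the untyped argument must be recovered by hand.
\begin{ccode}[tabsize=2]
#include <pthread.h>
#include <stdio.h>

void * foo(void * arg) {      //generic handle, untyped argument
	int value = *(int *)arg;  //user must recover the real type
	printf("%d\n", value);
	return NULL;
}

int main() {
	int a = 7;
	pthread_t tid;
	pthread_create(&tid, NULL, foo, &a);  //start a thread running foo(&a)
	pthread_join(tid, NULL);              //wait for it to finish
	return 0;
}
\end{ccode}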
    284332
    285333\subsection{Alternative: Trait-based coroutines}
     
    350398\end{cfacode}
    351399
    352 In this example, threads of type \code{foo} start execution in the \code{void main(foo &)} routine, which prints \code{"Hello World!"}. While this thesis encourages this approach to enforce strongly-typed programming, users may prefer to use the routine-based thread semantics for the sake of simplicity. With these semantics it is trivial to write a thread type that takes a function pointer as a parameter and executes it on its stack asynchronously
     400In this example, threads of type \code{foo} start execution in the \code{void main(foo &)} routine, which prints \code{"Hello World!"}. While this thesis encourages this approach to enforce strongly-typed programming, users may prefer to use the routine-based thread semantics for the sake of simplicity. With the static semantics it is trivial to write a thread type that takes a function pointer as a parameter and executes it on its stack asynchronously.
    353401\begin{cfacode}
    354402typedef void (*voidFunc)(int);
     
    361409void ?{}(FuncRunner & this, voidFunc inFunc, int arg) {
    362410        this.func = inFunc;
     411        this.arg  = arg;
    363412}
    364413
    365414void main(FuncRunner & this) {
     415        //thread starts here and runs the function
    366416        this.func( this.arg );
    367417}
    368418\end{cfacode}
    369419
    370 An consequence of the strongly typed approach to main is that memory layout of parameters and return values to/from a thread are now explicitly specified in the \acrshort{api}.
     420A consequence of the strongly-typed approach to main is that memory layout of parameters and return values to/from a thread are now explicitly specified in the \acrshort{api}.
    371421
    372422Of course for threads to be useful, it must be possible to start and stop threads and wait for them to complete execution. While using an \acrshort{api} such as \code{fork} and \code{join} is relatively common in the literature, such an interface is unnecessary. Indeed, the simplest approach is to use \acrshort{raii} principles and have threads \code{fork} after the constructor has completed and \code{join} before the destructor runs.
     
    389439\end{cfacode}
    390440
    391 This semantic has several advantages over explicit semantics: a thread is always started and stopped exaclty once and users cannot make any progamming errors and it naturally scales to multiple threads meaning basic synchronisation is very simple
     441This semantics has several advantages over explicit semantics: a thread is always started and stopped exactly once, users cannot make these programming errors, and it naturally scales to multiple threads, meaning basic synchronisation is very simple.
    392442
    393443\begin{cfacode}
     
    411461\end{cfacode}
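For contrast, the equivalent bookkeeping in plain C (a pthread sketch, illustrative only) forks and joins each thread explicitly; forgetting a join, or joining a thread twice, are exactly the errors the \acrshort{raii} semantics rule out by construction.
\begin{ccode}[tabsize=2]
#include <pthread.h>
#include <stdio.h>

void * worker(void * arg) {
	(void)arg;
	printf("World!\n");
	return NULL;
}

int main() {
	enum { N = 10 };
	pthread_t tid[N];
	for (int i = 0; i < N; i++)
		pthread_create(&tid[i], NULL, worker, NULL);  //explicit "fork"
	for (int i = 0; i < N; i++)
		pthread_join(tid[i], NULL);                   //explicit "join", exactly once each
	return 0;
}
\end{ccode}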
    412462
    413 However, one of the drawbacks of this approach is that threads now always form a lattice, that is they are always destroyed in opposite order of construction because of block structure. This restriction is relaxed by using dynamic allocation, so threads can outlive the scope in which they are created, much like dynamically allocating memory lets objects outlive the scope in which they are created
     463However, one of the drawbacks of this approach is that threads always form a lattice, that is, they are always destroyed in the opposite order of construction because of block structure. This restriction is relaxed by using dynamic allocation, so threads can outlive the scope in which they are created, much like dynamically allocating memory lets objects outlive the scope in which they are created.
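The pthread analogue of this relaxation (again only an illustrative sketch) heap-allocates the handle so the thread can outlive the routine that created it:
\begin{ccode}[tabsize=2]
#include <pthread.h>
#include <stdlib.h>

void * worker(void * arg) { (void)arg; return NULL; }

pthread_t * start_worker(void) {
	pthread_t * tid = malloc(sizeof(pthread_t)); //handle outlives this scope
	pthread_create(tid, NULL, worker, NULL);
	return tid;                                  //thread keeps running
}

int main() {
	pthread_t * tid = start_worker();
	//... the thread runs beyond the scope that created it ...
	pthread_join(*tid, NULL);                    //joined wherever convenient
	free(tid);
	return 0;
}
\end{ccode}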
    414464
    415465\begin{cfacode}
  • doc/proposals/concurrency/text/cforall.tex

    r136ccd7 r4ee36bf0  
    11% ======================================================================
    22% ======================================================================
    3 \chapter{Cforall crash course}
     3\chapter{Cforall Overview}
    44% ======================================================================
    55% ======================================================================
    66
    7 This thesis presents the design for a set of concurrency features in \CFA. Since it is a new dialect of C, the following is a quick introduction to the language, specifically tailored to the features needed to support concurrency.
     7The following is a quick introduction to the \CFA language, specifically tailored to the features needed to support concurrency.
    88
    9 \CFA is a extension of ISO-C and therefore supports all of the same paradigms as C. It is a non-object oriented system language, meaning most of the major abstractions have either no runtime overhead or can be opt-out easily. Like C, the basics of \CFA revolve around structures and routines, which are thin abstractions over machine code. The vast majority of the code produced by the \CFA translator respects memory-layouts and calling-conventions laid out by C. Interestingly, while \CFA is not an object-oriented language, lacking the concept of a received (e.g.: this), it does have some notion of objects\footnote{C defines the term objects as : [Where to I get the C11 reference manual?]}, most importantly construction and destruction of objects. Most of the following pieces of code can be found on the \CFA website \cite{www-cfa}
     9\CFA is an extension of ISO-C and therefore supports all of the same paradigms as C. It is a non-object-oriented system language, meaning most of the major abstractions have either no runtime overhead or can easily be opted out of. Like C, the basics of \CFA revolve around structures and routines, which are thin abstractions over machine code. The vast majority of the code produced by the \CFA translator respects memory-layouts and calling-conventions laid out by C. Interestingly, while \CFA is not an object-oriented language, lacking the concept of a receiver (e.g., this), it does have some notion of objects\footnote{C defines the term object as: ``region of data storage in the execution environment, the contents of which can represent
     10values''\cite[3.15]{C11}}, most importantly construction and destruction of objects. Most of the following code examples can be found on the \CFA website \cite{www-cfa}.
    1011
    1112\section{References}
    1213
    13 Like \CC, \CFA introduces references as an alternative to pointers. In regards to concurrency, the semantics difference between pointers and references are not particularly relevant but since this document uses mostly references here is a quick overview of the semantics :
     14Like \CC, \CFA introduces rebindable references providing multiple dereferencing as an alternative to pointers. With regards to concurrency, the semantic differences between pointers and references are not particularly relevant, but since this document mostly uses references, here is a quick overview of the semantics:
    1415\begin{cfacode}
    1516int x, *p1 = &x, **p2 = &p1, ***p3 = &p2,
    16 &r1 = x,    &&r2 = r1,   &&&r3 = r2;
     17        &r1 = x,    &&r2 = r1,   &&&r3 = r2;
    1718***p3 = 3;                                                      //change x
    1819r3    = 3;                                                      //change x, ***r3
     
    2526sizeof(&ar[1]) == sizeof(int *);        //is true, i.e., the size of a reference
    2627\end{cfacode}
    27 The important thing to take away from this code snippet is that references offer a handle to an object much like pointers but which is automatically derefferenced when convinient.
     28The important takeaway from this code example is that references offer a handle to an object, much like pointers, but one that is automatically dereferenced for convenience.
    2829
    2930\section{Overloading}
    3031
    31 Another important feature of \CFA is function overloading as in Java and \CC, where routine with the same name are selected based on the numbers and type of the arguments. As well, \CFA uses the return type as part of the selection criteria, as in Ada\cite{Ada}. For routines with multiple parameters and returns, the selection is complex.
     32Another important feature of \CFA is function overloading as in Java and \CC, where routines with the same name are selected based on the number and type of the arguments. As well, \CFA uses the return type as part of the selection criteria, as in Ada\cite{Ada}. For routines with multiple parameters and returns, the selection is complex.
    3233\begin{cfacode}
    3334//selection based on type and number of parameters
     
    4546double d = f(4);                //select (2)
    4647\end{cfacode}
    47 This feature is particularly important for concurrency since the runtime system relies on creating different types to represent concurrency objects. Therefore, overloading is necessary to prevent the need for long prefixes and other naming conventions that prevent name clashes. As seen in chapter \ref{basics}, routines main is an example that benefits from overloading.
     48This feature is particularly important for concurrency since the runtime system relies on creating different types to represent concurrency objects. Therefore, overloading is necessary to prevent the need for long prefixes and other naming conventions that prevent name clashes. As seen in chapter \ref{basics}, routine \code{main} is an example that benefits from overloading.
    4849
    4950\section{Operators}
    50 Overloading also extends to operators. The syntax for denoting operator-overloading is to name a routine with the symbol of the operator and question marks where the arguments of the operation would be, like so :
     51Overloading also extends to operators. The syntax for denoting operator-overloading is to name a routine with the symbol of the operator and question marks where the arguments of the operation occur, e.g.:
    5152\begin{cfacode}
    5253int ++? (int op);                       //unary prefix increment
     
    101102
    102103\section{Parametric Polymorphism}
    103 Routines in \CFA can also be reused for multiple types. This is done using the \code{forall} clause which gives \CFA it's name. \code{forall} clauses allow seperatly compiled routines to support generic usage over multiple types. For example, the following sum function will work for any type which support construction from 0 and addition :
     104Routines in \CFA can also be reused for multiple types. This capability is provided by the \code{forall} clause, which gives \CFA its name. \code{forall} clauses allow separately compiled routines to support generic usage over multiple types. For example, the following sum function works for any type that supports construction from 0 and addition:
    104105\begin{cfacode}
    105106//constraint type, 0 and +
     
    116117\end{cfacode}
    117118
    118 Since writing constraints on types can become cumbersome for more constrained functions, \CFA also has the concept of traits. Traits are named collection of constraints which can be used both instead and in addition to regular constraints:
     119Since writing constraints on types can become cumbersome for more constrained functions, \CFA also has the concept of traits. Traits are named collections of constraints that can be used both instead of and in addition to regular constraints:
    119120\begin{cfacode}
    120121trait sumable( otype T ) {
     
    130131
    131132\section{with Clause/Statement}
    132 Since \CFA lacks the concept of a receiver, certain functions end-up needing to repeat variable names often, to solve this \CFA offers the \code{with} statement which opens an aggregate scope making its fields directly accessible (like Pascal).
     133Since \CFA lacks the concept of a receiver, certain functions end up needing to repeat variable names often. To remove this inconvenience, \CFA provides the \code{with} statement, which opens an aggregate scope, making its fields directly accessible (like Pascal).
    133134\begin{cfacode}
    134135struct S { int i, j; };
    135 int mem(S & this) with this             //with clause
     136int mem(S & this) with (this)           //with clause
    136137        i = 1;                                          //this->i
    137138        j = 2;                                          //this->j
     
    140141        struct S1 { ... } s1;
    141142        struct S2 { ... } s2;
    142         with s1                                         //with statement
     143        with (s1)                                       //with statement
    143144        {
    144145                //access fields of s1
    145146                //without qualification
    146                 with s2                                 //nesting
     147                with (s2)                                       //nesting
    147148                {
    148149                        //access fields of s1 and s2
     
    150151                }
    151152        }
    152         with s1, s2                             //scopes open in parallel
     153        with (s1, s2)                           //scopes open in parallel
    153154        {
    154155                //access fields of s1 and s2
  • doc/proposals/concurrency/text/concurrency.tex

    r136ccd7 r4ee36bf0  
    44% ======================================================================
    55% ======================================================================
    6 Several tool can be used to solve concurrency challenges. Since many of these challenges appear with the use of mutable shared-state, some languages and libraries simply disallow mutable shared-state (Erlang~\cite{Erlang}, Haskell~\cite{Haskell}, Akka (Scala)~\cite{Akka}). In these paradigms, interaction among concurrent objects relies on message passing~\cite{Thoth,Harmony,V-Kernel} or other paradigms closely relate to networking concepts (channels\cit for example). However, in languages that use routine calls as their core abstraction-mechanism, these approaches force a clear distinction between concurrent and non-concurrent paradigms (i.e., message passing versus routine call). This distinction in turn means that, in order to be effective, programmers need to learn two sets of designs patterns. While this distinction can be hidden away in library code, effective use of the librairy still has to take both paradigms into account.
     6Several tools can be used to solve concurrency challenges. Since many of these challenges appear with the use of mutable shared-state, some languages and libraries simply disallow mutable shared-state (Erlang~\cite{Erlang}, Haskell~\cite{Haskell}, Akka (Scala)~\cite{Akka}). In these paradigms, interaction among concurrent objects relies on message passing~\cite{Thoth,Harmony,V-Kernel} or other paradigms closely related to networking concepts (channels\cite{CSP,Go} for example). However, in languages that use routine calls as their core abstraction-mechanism, these approaches force a clear distinction between concurrent and non-concurrent paradigms (i.e., message passing versus routine call). This distinction in turn means that, in order to be effective, programmers need to learn two sets of design patterns. While this distinction can be hidden away in library code, effective use of the library still has to take both paradigms into account.
    77
    88Approaches based on shared memory are more closely related to non-concurrent paradigms since they often rely on basic constructs like routine calls and shared objects. At the lowest level, concurrent paradigms are implemented as atomic operations and locks. Many such mechanisms have been proposed, including semaphores~\cite{Dijkstra68b} and path expressions~\cite{Campbell74}. However, for productivity reasons it is desirable to have a higher-level construct be the core concurrency paradigm~\cite{HPP:Study}.
    99
    10 An approach that is worth mentionning because it is gaining in popularity is transactionnal memory~\cite{Dice10}[Check citation]. While this approach is even pursued by system languages like \CC\cit, the performance and feature set is currently too restrictive to be the main concurrency paradigm for systems language, which is why it was rejected as the core paradigm for concurrency in \CFA.
     10An approach that is worth mentioning because it is gaining in popularity is transactional memory~\cite{Dice10}[Check citation]. While this approach is even pursued by system languages like \CC\cit, the performance and feature set is currently too restrictive to be the main concurrency paradigm for a systems language, which is why it was rejected as the core paradigm for concurrency in \CFA.
    1111
    1212One of the most natural, elegant, and efficient mechanisms for synchronization and communication, especially for shared-memory systems, is the \emph{monitor}. Monitors were first proposed by Brinch Hansen~\cite{Hansen73} and later described and extended by C.A.R.~Hoare~\cite{Hoare74}. Many programming languages---e.g., Concurrent Pascal~\cite{ConcurrentPascal}, Mesa~\cite{Mesa}, Modula~\cite{Modula-2}, Turing~\cite{Turing:old}, Modula-3~\cite{Modula-3}, NeWS~\cite{NeWS}, Emerald~\cite{Emerald}, \uC~\cite{Buhr92a} and Java~\cite{Java}---provide monitors as explicit language constructs. In addition, operating-system kernels and device drivers have a monitor-like structure, although they often use lower-level primitives such as semaphores or locks to simulate monitors. For these reasons, this project proposes monitors as the core concurrency-construct.
     
    1919
    2020\subsection{Synchronization}
    21 As for mutual-exclusion, low-level synchronisation primitives often offer good performance and good flexibility at the cost of ease of use. Again, higher-level mechanism often simplify usage by adding better coupling between synchronization and data, e.g.: message passing, or offering simple solution to otherwise involved challenges. An example is barging. As mentioned above, synchronization can be expressed as guaranteeing that event \textit{X} always happens before \textit{Y}. Most of the time, synchronisation happens around a critical section, where threads must acquire critical sections in a certain order. However, it may also be desirable to guarantee that event \textit{Z} does not occur between \textit{X} and \textit{Y}. Not satisfying this property called barging. For example, where event \textit{X} tries to effect event \textit{Y} but another thread acquires the critical section and emits \textit{Z} before \textit{Y}. Preventing or detecting barging is an involved challenge with low-level locks, which can be made much easier by higher-level constructs. This challenge is often split into two different methods, barging avoidance and barging prevention. Algorithms that use status flags and other flag variables to detect barging threads are said to be using barging avoidance while algorithms that baton-passing locks between threads instead of releasing the locks are said to be using barging prevention.
     21As for mutual-exclusion, low-level synchronisation primitives often offer good performance and good flexibility at the cost of ease of use. Again, higher-level mechanisms often simplify usage by adding better coupling between synchronization and data, e.g.: message passing, or offering simpler solutions to otherwise involved challenges. As mentioned above, synchronization can be expressed as guaranteeing that event \textit{X} always happens before \textit{Y}. Most of the time, synchronisation happens within a critical section, where threads must acquire mutual-exclusion in a certain order. However, it may also be desirable to guarantee that event \textit{Z} does not occur between \textit{X} and \textit{Y}. Not satisfying this property is called barging. For example, event \textit{X} tries to effect event \textit{Y} but another thread acquires the critical section and emits \textit{Z} before \textit{Y}. The classic example is the thread that finishes using a resource and unblocks a thread waiting to use the resource, but the unblocked thread must compete again to acquire the resource. Preventing or detecting barging is an involved challenge with low-level locks, which can be made much easier by higher-level constructs. This challenge is often split into two different methods: barging avoidance and barging prevention. Algorithms that use status flags and other flag variables to detect barging threads are said to be using barging avoidance, while algorithms that baton-pass locks between threads instead of releasing the locks are said to be using barging prevention.
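Outside the monitor setting, the difference can be illustrated with a small pthread sketch (illustrative only): the status flag \code{ready} provides barging avoidance, because a waiter that is beaten to the resource by a barging thread observes the cleared flag and simply blocks again.
\begin{ccode}[tabsize=2]
#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
static bool ready = false;   //status flag used for barging avoidance

void wait_for_resource(void) {
	pthread_mutex_lock(&lock);
	while (!ready)                       //re-check : a barging thread may have
		pthread_cond_wait(&cond, &lock); //taken the resource before we ran
	ready = false;                       //claim the resource
	pthread_mutex_unlock(&lock);
}

void release_resource(void) {
	pthread_mutex_lock(&lock);
	ready = true;
	pthread_cond_signal(&cond);          //woken thread still competes for the lock
	pthread_mutex_unlock(&lock);
}
\end{ccode}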
    2222
    2323% ======================================================================
     
    7171\end{tabular}
    7272\end{center}
    73 Notice how the counter is used without any explicit synchronisation and yet supports thread-safe semantics for both reading and writting.
    74 
    75 Here, the constructor(\code{?\{\}}) uses the \code{nomutex} keyword to signify that it does not acquire the monitor mutual-exclusion when constructing. This semantics is because an object not yet constructed should never be shared and therefore does not require mutual exclusion. The prefix increment operator uses \code{mutex} to protect the incrementing process from race conditions. Finally, there is a conversion operator from \code{counter_t} to \code{size_t}. This conversion may or may not require the \code{mutex} keyword depending on whether or not reading a \code{size_t} is an atomic operation.
    76 
    77 For maximum usability, monitors use \gls{multi-acq} semantics, which means a single thread can acquire multiple times the same monitor without deadlock. For example, figure \ref{fig:search} uses recursion and \gls{multi-acq} to print values inside a binary tree.
     73Notice how the counter is used without any explicit synchronisation and yet supports thread-safe semantics for both reading and writing, which is similar in usage to the \CC \code{atomic} template.
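For comparison, a minimal C11 sketch (not part of the proposal) obtains a similarly convenient thread-safe counter with atomics, but only for operations the hardware supports directly, whereas the monitor generalizes the same usage to arbitrary member routines.
\begin{ccode}[tabsize=2]
#include <stdatomic.h>
#include <stddef.h>

static _Atomic size_t count = 0;

void tick(void)         { atomic_fetch_add(&count, 1); } //thread-safe increment
size_t read_count(void) { return atomic_load(&count); }  //thread-safe read
\end{ccode}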
     74
     75Here, the constructor(\code{?\{\}}) uses the \code{nomutex} keyword to signify that it does not acquire the monitor mutual-exclusion when constructing. This semantics is because an object not yet con\-structed should never be shared and therefore does not require mutual exclusion. The prefix increment operator uses \code{mutex} to protect the incrementing process from race conditions. Finally, there is a conversion operator from \code{counter_t} to \code{size_t}. This conversion may or may not require the \code{mutex} keyword depending on whether or not reading a \code{size_t} is an atomic operation.
     76
     77For maximum usability, monitors use \gls{multi-acq} semantics, which means a single thread can acquire the same monitor multiple times without deadlock. For example, figure \ref{fig:search} uses recursion and \gls{multi-acq} to print values inside a binary tree.
    7878\begin{figure}
    7979\label{fig:search}
     
    9595\end{figure}
    9696
    97 Having both \code{mutex} and \code{nomutex} keywords is redundant based on the meaning of a routine having neither of these keywords. For example, given a routine without qualifiers \code{void foo(counter_t & this)}, then it is reasonable that it should default to the safest option \code{mutex}, whereas assuming \code{nomutex} is unsafe and may cause subtle errors. In fact, \code{nomutex} is the "normal" parameter behaviour, with the \code{nomutex} keyword effectively stating explicitly that "this routine is not special". Another alternative is making exactly one of these keywords mandatory, which would provide the same semantics but without the ambiguity of supporting routines with neither keyword. Mandatory keywords would also have the added benefit of being self-documented but at the cost of extra typing. While there are several benefits to mandatory keywords, they do bring a few challenges. Mandatory keywords in \CFA would imply that the compiler must know without doubt whether or not a parameter is a monitor or not. Since \CFA relies heavily on traits as an abstraction mechanism, the distinction between a type that is a monitor and a type that looks like a monitor can become blurred. For this reason, \CFA only has the \code{mutex} keyword and uses no keyword to mean \code{nomutex}.
     97Having both \code{mutex} and \code{nomutex} keywords is redundant based on the meaning of a routine having neither of these keywords. For example, given a routine without qualifiers, \code{void foo(counter_t & this)}, it is reasonable that it should default to the safest option, \code{mutex}, whereas assuming \code{nomutex} is unsafe and may cause subtle errors. In fact, \code{nomutex} is the ``normal'' parameter behaviour, with the \code{nomutex} keyword effectively stating explicitly that ``this routine is not special''. Another alternative is making exactly one of these keywords mandatory, which provides the same semantics but without the ambiguity of supporting routines with neither keyword. Mandatory keywords would also have the added benefit of being self-documenting but at the cost of extra typing. While there are several benefits to mandatory keywords, they do bring a few challenges. Mandatory keywords in \CFA would imply that the compiler must know without doubt whether or not a parameter is a monitor. Since \CFA relies heavily on traits as an abstraction mechanism, the distinction between a type that is a monitor and a type that looks like a monitor can become blurred. For this reason, \CFA only has the \code{mutex} keyword and uses no keyword to mean \code{nomutex}.
    9898
    9999The next semantic decision is to establish when \code{mutex} may be used as a type qualifier. Consider the following declarations:
     
    113113int f5(monitor * mutex m []); //Not Okay : Array of unknown length
    114114\end{cfacode}
    115 Note that not all array functions are actually distinct in the type system sense. However, even the code generation could tell the difference, the extra information is still not sufficient to extend meaningfully the monitor call semantic.
    116 
    117 Unlike object-oriented monitors, where calling a mutex member \emph{implicitly} acquires mutual-exclusion often receives an object, \CFA uses an explicit mechanism to acquire mutual-exclusion. A consequence of this approach is that it extends naturally to multi-monitor calls.
     115Note that not all array functions are actually distinct in the type system. However, even if the code generation could tell the difference, the extra information is still not sufficient to meaningfully extend the monitor call semantics.
     116
     117Unlike object-oriented monitors, where calling a mutex member \emph{implicitly} acquires mutual-exclusion of the receiver object, \CFA uses an explicit mechanism to acquire mutual-exclusion. A consequence of this approach is that it extends naturally to multi-monitor calls.
    118118\begin{cfacode}
    119119int f(MonitorA & mutex a, MonitorB & mutex b);
     
    123123f(a,b);
    124124\end{cfacode}
    125 The capacity to acquire multiple locks before entering a critical section is called \emph{\gls{bulk-acq}}. In practice, writing multi-locking routines that do not lead to deadlocks is tricky. Having language support for such a feature is therefore a significant asset for \CFA. In the case presented above, \CFA guarantees that the order of aquisition is consistent across calls to routines using the same monitors as arguments. However, since \CFA monitors use \gls{multi-acq} locks, users can effectively force the acquiring order. For example, notice which routines use \code{mutex}/\code{nomutex} and how this affects aquiring order:
     125While OO monitors could be extended with a mutex qualifier for multiple-monitor calls, no example of this feature could be found. The capacity to acquire multiple locks before entering a critical section is called \emph{\gls{bulk-acq}}. In practice, writing multi-locking routines that do not lead to deadlocks is tricky. Having language support for such a feature is therefore a significant asset for \CFA. In the case presented above, \CFA guarantees that the order of acquisition is consistent across calls to different routines using the same monitors as arguments. This consistent ordering means acquiring multiple monitors in this way is safe from deadlock. However, users can still force the acquiring order. For example, notice which routines use \code{mutex}/\code{nomutex} and how this affects acquiring order:
    126126\begin{cfacode}
    127127void foo(A & mutex a, B & mutex b) { //acquire a & b
     
    139139The \gls{multi-acq} monitor lock allows a monitor lock to be acquired by both \code{bar} or \code{baz} and acquired again in \code{foo}. In the calls to \code{bar} and \code{baz} the monitors are acquired in opposite order.
    140140
    141 However, such use leads to the lock acquiring order problem. In the example above, the user uses implicit ordering in the case of function \code{foo} but explicit ordering in the case of \code{bar} and \code{baz}. This subtle mistake means that calling these routines concurrently may lead to deadlock and is therefore undefined behavior. As shown on several occasion\cit, solving this problem requires:
     141However, such use leads to the lock acquiring order problem. In the example above, the user uses implicit ordering in the case of function \code{foo} but explicit ordering in the case of \code{bar} and \code{baz}. This subtle mistake means that calling these routines concurrently may lead to deadlock and is therefore undefined behavior. As shown\cit, solving this problem requires:
    142142\begin{enumerate}
    143143        \item Dynamically tracking of the monitor-call order.
    144144        \item Implement rollback semantics.
    145145\end{enumerate}
    146 While the first requirement is already a significant constraint on the system, implementing a general rollback semantics in a C-like language is prohibitively complex \cit. In \CFA, users simply need to be carefull when acquiring multiple monitors at the same time or only use \gls{bulk-acq} of all the monitors.
    147 
    148 \Gls{multi-acq} and \gls{bulk-acq} can be used together in interesting ways, for example:
      146While the first requirement is already a significant constraint on the system, implementing a general rollback semantics in a C-like language is prohibitively complex \cit. In \CFA, users simply need to be careful when acquiring multiple monitors at the same time or only use \gls{bulk-acq} of all the monitors. While \CFA provides only a partial solution, many systems provide no solution and the \CFA partial solution handles many useful cases.
     147
     148For example, \gls{multi-acq} and \gls{bulk-acq} can be used together in interesting ways:
    149149\begin{cfacode}
    150150monitor bank { ... };
     
    157157}
    158158\end{cfacode}
    159 This example shows a trivial solution to the bank account transfer problem\cit. Without \gls{multi-acq} and \gls{bulk-acq}, the solution to this problem is much more involved and requires carefull engineering.
    160 
    161 \subsubsection{\code{mutex} statement} \label{mutex-stmt}
      159This example shows a trivial solution to the bank-account transfer problem\cit. Without \gls{multi-acq} and \gls{bulk-acq}, the solution to this problem is much more involved and requires careful engineering.
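Since the body of the bank example is elided above, the following minimal sketch shows one plausible shape of such a solution; the \code{balance} field and the \code{deposit} helper are assumptions used only for illustration.
\begin{cfacode}
monitor bank { int balance; };

void deposit( bank & mutex b, int amount ) {
	b.balance += amount;
}

//both accounts are acquired atomically by a single bulk acquisition,
//so no ordering discipline is needed at the call site
void transfer( bank & mutex mybank, bank & mutex yourbank, int me2you ) {
	deposit( mybank,  -me2you );  //multi-acq: mybank is safely re-acquired
	deposit( yourbank, me2you );
}
\end{cfacode}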
     160
     161\subsection{\code{mutex} statement} \label{mutex-stmt}
    162162
     163163The call semantics discussed above have one software engineering issue: only a named routine can acquire the mutual-exclusion of a set of monitors. \CFA offers the \code{mutex} statement to work around the need for unnecessary names, avoiding a major software engineering problem\cit. Listing \ref{lst:mutex-stmt} shows an example of the \code{mutex} statement, which introduces a new scope in which the mutual-exclusion of a set of monitors is acquired. Beyond naming, the \code{mutex} statement has no semantic difference from a routine call with \code{mutex} parameters.
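As a minimal sketch of the statement form (the full example is in listing \ref{lst:mutex-stmt}), the following code acquires two monitors for the duration of a block only; the routine and the \code{data} field are illustrative assumptions.
\begin{cfacode}
monitor M { int data; };

void f( M & m1, M & m2 ) {      //no mutex parameters: nothing acquired on entry
	//...code that does not need mutual exclusion...
	mutex( m1, m2 ) {           //acquire both monitors for this scope only
		m1.data += m2.data;     //critical section
	}                           //both monitors released at the end of the scope
}
\end{cfacode}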
     
    218218\end{cfacode}
    219219
    220 
    221 % ======================================================================
    222 % ======================================================================
    223 \section{Internal scheduling} \label{insched}
     220Like threads and coroutines, monitors are defined in terms of traits with some additional language support in the form of the \code{monitor} keyword. The monitor trait is :
     221\begin{cfacode}
     222trait is_monitor(dtype T) {
     223        monitor_desc * get_monitor( T & );
     224        void ^?{}( T & mutex );
     225};
     226\end{cfacode}
     227Note that the destructor of a monitor must be a \code{mutex} routine. This requirement ensures that the destructor has mutual-exclusion. As with any object, any call to a monitor, using \code{mutex} or otherwise, is Undefined Behaviour after the destructor has run.
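For illustration, the following minimal sketch shows a type declared with the \code{monitor} keyword together with the \code{mutex} routines a user might add; the \code{counter_t} name and its routines are assumptions, not part of the runtime.
\begin{cfacode}
monitor counter_t { int value; };

//the destructor must be a mutex routine,
//so it runs with mutual-exclusion over the monitor
void ^?{}( counter_t & mutex this ) {}

//a regular mutex routine acquires the monitor on entry and releases it on exit
void increment( counter_t & mutex this ) {
	this.value += 1;
}
\end{cfacode}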
     228
     229% ======================================================================
     230% ======================================================================
     231\section{Internal scheduling} \label{intsched}
    224232% ======================================================================
    225233% ======================================================================
     
    248256\end{cfacode}
    249257
    250 There are two details to note here. First, the \code{signal} is a delayed operation, it only unblocks the waiting thread when it reaches the end of the critical section. This semantic is needed to respect mutual-exclusion. Second, in \CFA, a \code{condition} variable can be stored/created independently of a monitor. Here routine \code{foo} waits for the \code{signal} from \code{bar} before making further progress, effectively ensuring a basic ordering.
    251 
    252 An important aspect of the implementation is that \CFA does not allow barging, which means that once function \code{bar} releases the monitor, foo is guaranteed to resume immediately after (unless some other thread waited on the same condition). This guarantees offers the benefit of not having to loop arount waits in order to guarantee that a condition is still met. The main reason \CFA offers this guarantee is that users can easily introduce barging if it becomes a necessity but adding barging prevention or barging avoidance is more involved without language support. Supporting barging prevention as well as extending internal scheduling to multiple monitors is the main source of complexity in the design of \CFA concurrency.
      258There are two details to note here. First, the \code{signal} is a delayed operation: it only unblocks the waiting thread when the signaller reaches the end of the critical section. This semantic is needed to respect mutual-exclusion. The alternative is to return immediately after the call to \code{signal}, which is significantly more restrictive. Second, in \CFA, while it is common to store a \code{condition} as a field of the monitor, a \code{condition} variable can be stored/created independently of a monitor. Here routine \code{foo} waits for the \code{signal} from \code{bar} before making further progress, effectively ensuring a basic ordering.
     259
      260An important aspect of the implementation is that \CFA does not allow barging, which means that once function \code{bar} releases the monitor, \code{foo} is guaranteed to resume immediately after (unless some other thread waited on the same condition). This guarantee offers the benefit of not having to loop around waits in order to check that a condition is still met. The main reason \CFA offers this guarantee is that users can easily introduce barging if it becomes a necessity, but adding barging prevention or barging avoidance is more involved without language support. Supporting barging prevention as well as extending internal scheduling to multiple monitors is the main source of complexity in the design of \CFA concurrency.
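The following sketch illustrates the practical consequence of this guarantee; the monitor, its fields and its routines are assumptions used only for illustration.
\begin{cfacode}
monitor M { condition c; bool ready; };

void give( M & mutex m ) {
	m.ready = true;
	signal( m.c );     //the waiter resumes right after this critical section
}

void take( M & mutex m ) {
	//no barging: a single check suffices; in a barging system this test
	//would have to be a loop, e.g., while( !m.ready ) wait( m.c );
	if( !m.ready ) wait( m.c );
	m.ready = false;
}
\end{cfacode}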
    253261
    254262% ======================================================================
     
    257265% ======================================================================
    258266% ======================================================================
    259 It is easier to understand the problem of multi-monitor scheduling using a series of pseudo-code. Note that for simplicity in the following snippets of pseudo-code, waiting and signalling is done using an implicit condition variable, like Java built-in monitors. Indeed, \code{wait} statements always use a single condition as paremeter and waits on the monitors associated with the condition.
      267It is easier to understand the problem of multi-monitor scheduling using a series of pseudo-code examples. Note that for simplicity in the following snippets of pseudo-code, waiting and signalling is done using an implicit condition variable, like Java built-in monitors. Indeed, \code{wait} statements always use the implicit condition as parameter and explicitly name the monitors (A and B) associated with the condition. Note that in \CFA, condition variables are tied to a set of monitors on first use (called branding), which means that using internal scheduling with distinct sets of monitors requires one condition variable per set of monitors.
    260268
    261269\begin{multicols}{2}
     
    295303\end{pseudo}
    296304\end{multicols}
    297 This version uses \gls{bulk-acq} (denoted using the \& symbol), but the presence of multiple monitors does not add a particularly new meaning. Synchronization happens between the two threads in exactly the same way and order. The only difference is that mutual exclusion covers more monitors. On the implementation side, handling multiple monitors does add a degree of complexity as the next few examples demonstrate.
    298 
    299 While deadlock issues can occur when nesting monitors, these issues are only a symptom of the fact that locks, and by extension monitors, are not perfectly composable. For monitors, a well known deadlock problem is the Nested Monitor Problem\cit, which occurs when a \code{wait} is made on a thread that holds more than one monitor. For example, the following pseudo-code will run into the nested monitor problem :
     305This version uses \gls{bulk-acq} (denoted using the {\sf\&} symbol), but the presence of multiple monitors does not add a particularly new meaning. Synchronization happens between the two threads in exactly the same way and order. The only difference is that mutual exclusion covers more monitors. On the implementation side, handling multiple monitors does add a degree of complexity as the next few examples demonstrate.
     306
      307While deadlock issues can occur when nesting monitors, these issues are only a symptom of the fact that locks, and by extension monitors, are not perfectly composable. For monitors, a well-known deadlock problem is the Nested Monitor Problem\cit, which occurs when a \code{wait} is made by a thread that holds more than one monitor. For example, the following pseudo-code runs into the nested-monitor problem:
    300308\begin{multicols}{2}
    301309\begin{pseudo}
     
    317325\end{pseudo}
    318326\end{multicols}
     327
      328The \code{wait} only releases monitor \code{B}, so the signalling thread cannot acquire monitor \code{A} to get to the \code{signal}. Attempting to release all acquired monitors at the \code{wait} results in another set of problems, such as releasing monitor \code{C}, which has nothing to do with the \code{signal}.
     329
    319330However, for monitors as for locks, it is possible to write a program using nesting without encountering any problems if nesting is done correctly. For example, the next pseudo-code snippet acquires monitors {\sf A} then {\sf B} before waiting, while only acquiring {\sf B} when signalling, effectively avoiding the nested monitor problem.
    320331
     
    339350\end{multicols}
    340351
    341 Listing \ref{lst:int-bulk-pseudo} shows an example where \gls{bulk-acq} adds a significant layer of complexity to the internal signalling semantics. Listing \ref{lst:int-bulk-cfa} shows the corresponding \CFA code which implements the pseudo-code in listing \ref{lst:int-bulk-pseudo}. Note that listing \ref{lst:int-bulk-cfa} uses non-\code{mutex} parameter to introduce monitor \code{b} into context. However, for the purpose of translating the given pseudo-code into \CFA-code any method of introducing new monitors into context, other than a \code{mutex} parameter, is acceptable, e.g. global variables, pointer parameters or using locals with the \code{mutex}-statement.
     352% ======================================================================
     353% ======================================================================
     354\subsection{Internal Scheduling - in depth}
     355% ======================================================================
     356% ======================================================================
     357
      358A larger example is presented to show complex issues for \gls{bulk-acq} and all the implementation options are analyzed. Listing \ref{lst:int-bulk-pseudo} shows an example where \gls{bulk-acq} adds a significant layer of complexity to the internal signalling semantics, and listing \ref{lst:int-bulk-cfa} shows the corresponding \CFA code which implements the pseudo-code in listing \ref{lst:int-bulk-pseudo}. For the purpose of translating the given pseudo-code into \CFA-code, any method of introducing a monitor into context, other than a \code{mutex} parameter, is acceptable, e.g., global variables, pointer parameters or using locals with the \code{mutex}-statement.
    342359
    343360\begin{figure}[!b]
     
    376393
    377394\begin{figure}[!b]
     395\begin{center}
     396\begin{cfacode}[xleftmargin=.4\textwidth]
     397monitor A a;
     398monitor B b;
     399condition c;
     400\end{cfacode}
     401\end{center}
    378402\begin{multicols}{2}
    379403Waiting thread
    380404\begin{cfacode}
    381 monitor A;
    382 monitor B;
    383 extern condition c;
    384 void foo(A & mutex a, B & b) {
     405mutex(a) {
    385406        //Code Section 1
    386407        mutex(a, b) {
     
    397418Signalling thread
    398419\begin{cfacode}
    399 monitor A;
    400 monitor B;
    401 extern condition c;
    402 void foo(A & mutex a, B & b) {
     420mutex(a) {
    403421        //Code Section 5
    404422        mutex(a, b) {
     
    415433\end{figure}
    416434
    417 It is particularly important to pay attention to code sections 4 and 8, which are where the existing semantics of internal scheduling need to be extended for multiple monitors. The root of the problem is that \gls{bulk-acq} is used in a context where one of the monitors is already acquired and is why it is important to define the behaviour of the previous pseudo-code. When the signaller thread reaches the location where it should "release A \& B" (line 16), it must actually transfer ownership of monitor B to the waiting thread. This ownership trasnfer is required in order to prevent barging. Since the signalling thread still needs monitor A, simply waking up the waiting thread is not an option because it would violate mutual exclusion. There are three options.
      435The complexity begins at code sections 4 and 8, which are where the existing semantics of internal scheduling need to be extended for multiple monitors. The root of the problem is that \gls{bulk-acq} is used in a context where one of the monitors is already acquired, which is why it is important to define the behaviour of the previous pseudo-code. When the signaller thread reaches the location where it should ``release \code{A & B}'' (line 16), it must actually transfer ownership of monitor \code{B} to the waiting thread. This ownership transfer is required in order to prevent barging. Since the signalling thread still needs monitor \code{A}, simply waking up the waiting thread is not an option because it violates mutual exclusion. There are three options.
    418436
    419437\subsubsection{Delaying signals}
    420 The first more obvious solution to solve the problem of multi-monitor scheduling is to keep ownership of all locks until the last lock is ready to be transferred. It can be argued that that moment is the correct time to transfer ownership when the last lock is no longer needed because this semantics fits most closely to the behaviour of single monitor scheduling. This solution has the main benefit of transferring ownership of groups of monitors, which simplifies the semantics from mutiple objects to a single group of objects, effectively making the existing single monitor semantic viable by simply changing monitors to monitor groups.
      438The obvious solution to the problem of multi-monitor scheduling is to keep ownership of all locks until the last lock is ready to be transferred. It can be argued that that moment is when the last lock is no longer needed, because this semantics fits most closely to the behaviour of single-monitor scheduling. This solution has the main benefit of transferring ownership of groups of monitors, which simplifies the semantics from multiple objects to a single group of objects, effectively making the existing single-monitor semantic viable by simply changing monitors to monitor groups.
    421439\begin{multicols}{2}
    422440Waiter
     
    443461\end{multicols}
    444462However, this solution can become much more complicated depending on what is executed while secretly holding B (at line 10). Indeed, nothing prevents signalling monitor A on a different condition variable:
    445 \begin{multicols}{2}
    446 Thread 1
     463\begin{figure}
     464\begin{multicols}{3}
     465Thread $\alpha$
    447466\begin{pseudo}[numbers=left, firstnumber=1]
    448467acquire A
     
    453472\end{pseudo}
    454473
    455 Thread 2
    456 \begin{pseudo}[numbers=left, firstnumber=6]
    457 acquire A
    458         wait A
    459 release A
    460 \end{pseudo}
    461 
    462474\columnbreak
    463475
    464 Thread 3
    465 \begin{pseudo}[numbers=left, firstnumber=9]
     476Thread $\gamma$
     477\begin{pseudo}[numbers=left, firstnumber=1]
    466478acquire A
    467479        acquire A & B
    468480                signal A & B
    469481        release A & B
    470         //Secretly keep B here
    471482        signal A
    472483release A
    473 //Wakeup thread 1 or 2?
    474 //Who wakes up the other thread?
    475 \end{pseudo}
     484\end{pseudo}
     485
     486\columnbreak
     487
     488Thread $\beta$
     489\begin{pseudo}[numbers=left, firstnumber=1]
     490acquire A
     491        wait A
     492release A
     493\end{pseudo}
     494
    476495\end{multicols}
     496\caption{Dependency graph}
     497\label{lst:dependency}
     498\end{figure}
    477499
     478500The goal in this solution is to avoid the need to transfer ownership of a subset of the condition monitors. However, this goal is unreachable in the previous example. Depending on the order of signals (lines 12 and 15), two cases can happen.
     
    484506Note that ordering is not determined by a race condition but by whether signalled threads are enqueued in FIFO or FILO order. However, regardless of the answer, users can move line 15 before line 11 and get the reverse effect.
    485507
    486 In both cases, the threads need to be able to distinguish, on a per monitor basis, which ones need to be released and which ones need to be transferred, which means monitors cannot be handled as a single homogenous group and therefore invalidates the main benefit of this approach.
      508In both cases, the threads need to be able to distinguish, on a per-monitor basis, which ones need to be released and which ones need to be transferred, which means monitors cannot be handled as a single homogeneous group, effectively precluding this approach.
    487509
    488510\subsubsection{Dependency graphs}
    489 In the Listing 1 pseudo-code, there is a solution which statisfies both barging prevention and mutual exclusion. If ownership of both monitors is transferred to the waiter when the signaller releases A and then the waiter transfers back ownership of A when it releases it, then the problem is solved. Dynamically finding the correct order is therefore the second possible solution. The problem it encounters is that it effectively boils down to resolving a dependency graph of ownership requirements. Here even the simplest of code snippets requires two transfers and it seems to increase in a manner closer to polynomial. For example, the following code, which is just a direct extension to three monitors, requires at least three ownership transfer and has multiple solutions:
      511In the pseudo-code of listing \ref{lst:int-bulk-pseudo}, there is a solution that satisfies both barging prevention and mutual exclusion. If ownership of both monitors is transferred to the waiter when the signaller releases \code{A & B}, and the waiter then transfers back ownership of \code{A} when it releases it, then the problem is solved (\code{B} is no longer in use at this point). Dynamically finding the correct order is therefore the second possible solution. The problem it encounters is that it effectively boils down to resolving a dependency graph of ownership requirements. Here even the simplest of code snippets requires two transfers, and the number of transfers seems to grow at a rate closer to polynomial. For example, the following code, which is just a direct extension to three monitors, requires at least three ownership transfers and has multiple solutions:
    490512
    491513\begin{multicols}{2}
     
    514536
    515537\begin{figure}
    516 \begin{multicols}{3}
    517 Thread $\alpha$
    518 \begin{pseudo}[numbers=left, firstnumber=1]
    519 acquire A
    520         acquire A & B
    521                 wait A & B
    522         release A & B
    523 release A
    524 \end{pseudo}
    525 
    526 \columnbreak
    527 
    528 Thread $\gamma$
    529 \begin{pseudo}[numbers=left, firstnumber=1]
    530 acquire A
    531         acquire A & B
    532                 signal A & B
    533         release A & B
    534         signal A
    535 release A
    536 \end{pseudo}
    537 
    538 \columnbreak
    539 
    540 Thread $\beta$
    541 \begin{pseudo}[numbers=left, firstnumber=1]
    542 acquire A
    543         wait A
    544 release A
    545 \end{pseudo}
    546 
    547 \end{multicols}
    548 \caption{Dependency graph}
    549 \label{lst:dependency}
    550 \end{figure}
    551 
    552 \begin{figure}
    553538\begin{center}
    554539\input{dependency}
    555540\end{center}
     541\caption{Dependency graph of the statements in listing \ref{lst:dependency}}
    556542\label{fig:dependency}
    557 \caption{Dependency graph of the statements in listing \ref{lst:dependency}}
    558543\end{figure}
    559544
    560 Listing \ref{lst:dependency} is the three thread example rewritten for dependency graphs as well as the corresponding dependency graph. Figure \ref{fig:dependency} shows the corresponding dependency graph that results, where every node is a statement of one of the three threads, and the arrows the dependency of that statement. The extra challenge is that this dependency graph is effectively post-mortem, but the run time system needs to be able to build and solve these graphs as the dependency unfolds. Resolving dependency graph being a complex and expensive endeavour, this solution is not the preffered one.
      545Listing \ref{lst:dependency} is the three-thread example rewritten for dependency graphs. Figure \ref{fig:dependency} shows the corresponding dependency graph that results, where every node is a statement of one of the three threads, and the arrows represent the dependencies of that statement (e.g., $\alpha1$ must happen before $\alpha2$). The extra challenge is that this dependency graph is effectively post-mortem, but the runtime system needs to be able to build and solve these graphs as the dependencies unfold. Since resolving dependency graphs is a complex and expensive endeavour, this solution is not the preferred one.
    561546
    562547\subsubsection{Partial signalling} \label{partial-sig}
    563 Finally, the solution that is chosen for \CFA is to use partial signalling. Consider the following case:
    564 
    565 \begin{multicols}{2}
    566 \begin{pseudo}[numbers=left]
    567 acquire A
    568         acquire A & B
    569                 wait A & B
    570         release A & B
    571 release A
    572 \end{pseudo}
    573 
    574 \columnbreak
    575 
    576 \begin{pseudo}[numbers=left, firstnumber=6]
    577 acquire A
    578         acquire A & B
    579                 signal A & B
    580         release A & B
    581         //... More code
    582 release A
    583 \end{pseudo}
    584 \end{multicols}
    585 The partial signalling solution transfers ownership of monitor B at lines 10 but does not wake the waiting thread since it is still using monitor A. Only when it reaches line 11 does it actually wakeup the waiting thread. This solution has the benefit that complexity is encapsulated into only two actions, passing monitors to the next owner when they should be release and conditionally waking threads if all conditions are met. This solution has a much simpler implementation than a dependency graph solving algorithm which is why it was chosen.
      548Finally, the solution that is chosen for \CFA is to use partial signalling. Again using listing \ref{lst:int-bulk-pseudo}, the partial signalling solution transfers ownership of monitor \code{B} at line 10 but does not wake the waiting thread since it is still using monitor \code{A}. Only when it reaches line 11 does it actually wake up the waiting thread. This solution has the benefit that complexity is encapsulated into only two actions: passing monitors to the next owner when they should be released, and conditionally waking threads if all conditions are met. This solution has a much simpler implementation than a dependency-graph solving algorithm, which is why it was chosen. Furthermore, after being fully implemented, this solution does not appear to have any downsides worth mentioning.
    586549
    587550% ======================================================================
     
    590553% ======================================================================
    591554% ======================================================================
    592 An important note is that, until now, signalling a monitor was a delayed operation. The ownership of the monitor is transferred only when the monitor would have otherwise been released, not at the point of the \code{signal} statement. However, in some cases, it may be more convenient for users to immediately transfer ownership to the thread that is waiting for cooperation, which is achieved using the \code{signal_block} routine\footnote{name to be discussed}.
    593 
    594 The example in listing \ref{lst:datingservice} highlights the difference in behaviour. As mentioned, \code{signal} only transfers ownership once the current critical section exits, this behaviour cause the need for additional synchronisation when a two-way handshake is needed. To avoid this extraneous synchronisation, the \code{condition} type offers the \code{signal_block} routine which handle two-way handshakes as shown in the example. This removes the need for a second condition variables and simplifies programming. Like every other monitor semantic, \code{signal_block} uses barging prevention which means mutual-exclusion is baton-passed both on the frond-end and the back-end of the call to \code{signal_block}, meaning no other thread can acquire the monitor neither before nor after the call.
    595555\begin{figure}
    596556\begin{tabular}{|c|c|}
     
    622582                girlPhoneNo = phoneNo;
    623583
    624                 //wake boy fron chair
     584                //wake boy from chair
    625585                signal(exchange);
    626586        }
     
    669629                girlPhoneNo = phoneNo;
    670630
    671                 //wake boy fron chair
     631                //wake boy from chair
    672632                signal(exchange);
    673633        }
     
    696656\label{lst:datingservice}
    697657\end{figure}
     658An important note is that, until now, signalling a monitor was a delayed operation. The ownership of the monitor is transferred only when the monitor would have otherwise been released, not at the point of the \code{signal} statement. However, in some cases, it may be more convenient for users to immediately transfer ownership to the thread that is waiting for cooperation, which is achieved using the \code{signal_block} routine\footnote{name to be discussed}.
     659
      660The example in listing \ref{lst:datingservice} highlights the difference in behaviour. As mentioned, \code{signal} only transfers ownership once the current critical section exits; this behaviour requires additional synchronisation when a two-way handshake is needed. To avoid this extraneous synchronisation, the \code{condition} type offers the \code{signal_block} routine, which handles the two-way handshake as shown in the example. This removes the need for a second condition variable and simplifies programming. Like every other monitor semantic, \code{signal_block} uses barging prevention, which means mutual-exclusion is baton-passed both on the front-end and the back-end of the call to \code{signal_block}, meaning no other thread can acquire the monitor either before or after the call.
    698661
    699662% ======================================================================
     
    702665% ======================================================================
    703666% ======================================================================
    704 An alternative to internal scheduling is to use external scheduling.
     667An alternative to internal scheduling is external scheduling, e.g., in \uC.
    705668\begin{center}
    706 \begin{tabular}{|c|c|}
    707 Internal Scheduling & External Scheduling \\
     669\begin{tabular}{|c|c|c|}
     670Internal Scheduling & External Scheduling & Go\\
    708671\hline
    709 \begin{ucppcode}
     672\begin{ucppcode}[tabsize=3]
    710673_Monitor Semaphore {
    711674        condition c;
     
    713676public:
    714677        void P() {
    715                 if(inUse) wait(c);
     678                if(inUse)
     679                        wait(c);
    716680                inUse = true;
    717681        }
     
    721685        }
    722686}
    723 \end{ucppcode}&\begin{ucppcode}
     687\end{ucppcode}&\begin{ucppcode}[tabsize=3]
    724688_Monitor Semaphore {
    725689
     
    727691public:
    728692        void P() {
    729                 if(inUse) _Accept(V);
     693                if(inUse)
     694                        _Accept(V);
    730695                inUse = true;
    731696        }
     
    735700        }
    736701}
    737 \end{ucppcode}
     702\end{ucppcode}&\begin{gocode}[tabsize=3]
     703type MySem struct {
     704        inUse bool
     705        c     chan bool
     706}
     707
     708// acquire
     709func (s MySem) P() {
     710        if s.inUse {
     711                select {
     712                case <-s.c:
     713                }
     714        }
     715        s.inUse = true
     716}
     717
     718// release
      719func (s *MySem) V() {
     720        s.inUse = false
     721
     722        //This actually deadlocks
     723        //when single thread
     724        s.c <- false
     725}
     726\end{gocode}
    738727\end{tabular}
    739728\end{center}
    740 This method is more constrained and explicit, which helps users tone down the undeterministic nature of concurrency. Indeed, as the following examples demonstrates, external scheduling allows users to wait for events from other threads without the concern of unrelated events occuring. External scheduling can generally be done either in terms of control flow (e.g., \uC with \code{_Accept}) or in terms of data (e.g. Go with channels). Of course, both of these paradigms have their own strenghts and weaknesses but for this project control-flow semantics were chosen to stay consistent with the rest of the languages semantics. Two challenges specific to \CFA arise when trying to add external scheduling with loose object definitions and multi-monitor routines. The previous example shows a simple use \code{_Accept} versus \code{wait}/\code{signal} and its advantages. Note that while other languages often use \code{accept}/\code{select} as the core external scheduling keyword, \CFA uses \code{waitfor} to prevent name collisions with existing socket \acrshort{api}s.
    741 
    742 In the case of internal scheduling, the call to \code{wait} only guarantees that \code{V} is the last routine to access the monitor. This entails that a third routine, say \code{isInUse()}, may have acquired mutual exclusion several times while routine \code{P} was waiting. On the other hand, external scheduling guarantees that while routine \code{P} was waiting, no routine other than \code{V} could acquire the monitor.
      729This method is more constrained and explicit, which helps users tone down the non-deterministic nature of concurrency. Indeed, as the following examples demonstrate, external scheduling allows users to wait for events from other threads without the concern of unrelated events occurring. External scheduling can generally be done either in terms of control flow (e.g., \uC with \code{_Accept}) or in terms of data (e.g., Go with channels). Of course, both of these paradigms have their own strengths and weaknesses, but for this project control-flow semantics were chosen to stay consistent with the rest of the language's semantics. Two challenges specific to \CFA arise when trying to add external scheduling with loose object definitions and multi-monitor routines. The previous example shows a simple use of \code{_Accept} versus \code{wait}/\code{signal} and its advantages. Note that while other languages often use \code{accept}/\code{select} as the core external scheduling keyword, \CFA uses \code{waitfor} to prevent name collisions with existing socket \acrshort{api}s.
     730
      731For the \code{P} member above using internal scheduling, the call to \code{wait} only guarantees that \code{V} is the last routine to access the monitor, allowing a third routine, say \code{isInUse()}, to acquire mutual exclusion several times while routine \code{P} is waiting. On the other hand, external scheduling guarantees that while routine \code{P} is waiting, no routine other than \code{V} can acquire the monitor.
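A \CFA rendering of the semaphore above makes this difference concrete; the following is only a sketch, and the \code{isInUse} helper is an assumption added for illustration.
\begin{cfacode}
monitor Semaphore { condition c; bool inUse; };

bool isInUse( Semaphore & mutex s ) { return s.inUse; }

void P( Semaphore & mutex s ) {
	//internal scheduling: while P is blocked on c, any mutex routine,
	//including isInUse, may acquire the monitor;
	//with external scheduling, if(s.inUse) waitfor(V, s) would instead bar
	//every mutex routine except V until the accept completes
	if( s.inUse ) wait( s.c );
	s.inUse = true;
}

void V( Semaphore & mutex s ) {
	s.inUse = false;
	signal( s.c );
}
\end{cfacode}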
    743732
    744733% ======================================================================
     
    747736% ======================================================================
    748737% ======================================================================
    749 In \uC, monitor declarations include an exhaustive list of monitor operations. Since \CFA is not object oriented it becomes both more difficult to implement but also less clear for the user:
     738In \uC, monitor declarations include an exhaustive list of monitor operations. Since \CFA is not object oriented, monitors become both more difficult to implement and less clear for a user:
    750739
    751740\begin{cfacode}
     
    786775\end{center}
    787776
    788 There are other alternatives to these pictures, but in the case of this picture, implementing a fast accept check is relatively easy. Indeed simply updating a bitmask when the acceptor queue changes is enough to have a check that executes in a single instruction, even with a fairly large number (e.g. 128) of mutex members. This technique cannot be used in \CFA because it relies on the fact that the monitor type declares all the acceptable routines. For OO languages this does not compromise much since monitors already have an exhaustive list of member routines. However, for \CFA this is not the case; routines can be added to a type anywhere after its declaration. Its important to note that the bitmask approach does not actually require an exhaustive list of routines, but it requires a dense unique ordering of routines with an upper-bound and that ordering must be consistent across translation units.
    789 The alternative is to have a picture like this one:
     777There are other alternatives to these pictures, but in the case of this picture, implementing a fast accept check is relatively easy. Restricted to a fixed number of mutex members, N, the accept check reduces to updating a bitmask when the acceptor queue changes, a check that executes in a single instruction even with a fairly large number (e.g., 128) of mutex members. This technique cannot be used in \CFA because it relies on the fact that the monitor type enumerates (declares) all the acceptable routines. For OO languages this does not compromise much since monitors already have an exhaustive list of member routines. However, for \CFA this is not the case; routines can be added to a type anywhere after its declaration. It is important to note that the bitmask approach does not actually require an exhaustive list of routines, but it requires a dense unique ordering of routines with an upper-bound and that ordering must be consistent across translation units.
      778The alternative is to alter the implementation as follows:
    790779
    791780\begin{center}
     
    793782\end{center}
    794783
    795 Not storing the mask inside the monitor means that the storage for the mask information can vary between calls to \code{waitfor}, allowing for more flexibility and extensions. Storing an array of function-pointers would solve the issue of uniquely identifying acceptable routines. However, the single instruction bitmask compare has been replaced by dereferencing a pointer followed by a linear search. Furthermore, supporting nested external scheduling may now require additionnal searches on calls to waitfor to check if a routine is already queued in.
    796 
    797 Note that in the second picture, tasks need to always keep track of through which routine they are attempting to acquire the monitor and the routine mask needs to have both a function pointer and a set of monitors, as will be discussed in the next section. These details where omitted from the picture for the sake of simplifying the representation.
    798 
    799 At this point we must make a decision between flexibility and performance. Many design decisions in \CFA achieve both flexibility and performance, for example polymorphic routines add significant flexibility but inlining them means the optimizer can easily remove any runtime cost. Here however, the cost of flexibility cannot be trivially removed. In the end, the most flexible approach has been chosen since it allows users to write programs that would otherwise be prohibitively hard to write. This decision is based on the assumption that writing fast but inflexible locks is closer to a solved problems than writing locks that are as flexible as external scheduling in \CFA.
      784Generating a mask dynamically means that the storage for the mask information can vary between calls to \code{waitfor}, allowing for more flexibility and extensions. Storing an array of accepted function-pointers replaces the single-instruction bitmask compare with dereferencing a pointer followed by a linear search. Furthermore, supporting nested external scheduling (e.g., listing \ref{lst:nest-ext}) may now require additional searches on each \code{waitfor} statement to check if a routine is already queued.
     785
     786\begin{figure}
     787\begin{cfacode}
     788monitor M {};
     789void foo( M & mutex a ) {}
     790void bar( M & mutex b ) {
     791        //Nested in the waitfor(bar, c) call
     792        waitfor(foo, b);
     793}
     794void baz( M & mutex c ) {
     795        waitfor(bar, c);
     796}
     797
     798\end{cfacode}
     799\caption{Example of nested external scheduling}
     800\label{lst:nest-ext}
     801\end{figure}
     802
      803Note that in the second picture, tasks always need to keep track of the routine through which they are attempting to acquire the monitor, and the routine mask needs to have both a function pointer and a set of monitors, as will be discussed in the next section. These details were omitted from the picture for the sake of simplifying the representation.
     804
      805At this point, a decision must be made between flexibility and performance. Many design decisions in \CFA achieve both flexibility and performance, for example polymorphic routines add significant flexibility but inlining them means the optimizer can easily remove any runtime cost. Here, however, the cost of flexibility cannot be trivially removed. In the end, the most flexible approach has been chosen since it allows users to write programs that would otherwise be prohibitively hard to write. This decision is based on the assumption that writing fast but inflexible locks is closer to a solved problem than writing locks that are as flexible as external scheduling in \CFA.
    800806
    801807% ======================================================================
     
    811817void f(M & mutex a);
    812818
    813 void g(M & mutex a, M & mutex b) {
    814         waitfor(f); //ambiguous, keep a pass b or other way around?
     819void g(M & mutex b, M & mutex c) {
      820        waitfor(f); //two monitors M => unknown which to pass to f(M & mutex)
    815821}
    816822\end{cfacode}
     
    828834\end{cfacode}
    829835
    830 This syntax is unambiguous. Both locks are acquired and kept. When routine \code{f} is called, the lock for monitor \code{b} is temporarily transferred from \code{g} to \code{f} (while \code{g} still holds lock \code{a}). This behavior can be extended to multi-monitor waitfor statement as follows.
      836This syntax is unambiguous. Both locks are acquired and kept by \code{g}. When routine \code{f} is called, the lock for monitor \code{b} is temporarily transferred from \code{g} to \code{f} (while \code{g} still holds lock \code{a}). This behavior can be extended to the multi-monitor \code{waitfor} statement as follows.
    831837
    832838\begin{cfacode}
     
    842848Note that the set of monitors passed to the \code{waitfor} statement must be entirely contained in the set of monitors already acquired in the routine. \code{waitfor} used in any other context is Undefined Behaviour.
    843849
      850An important behavior to note is when a set of monitors only matches partially:
     850An important behavior to note is when a set of monitors only match partially :
    845851
    846852\begin{cfacode}
     
    865871\end{cfacode}
    866872
    867 While the equivalent can happen when using internal scheduling, the fact that conditions are specific to a set of monitors means that users have to use two different condition variables. In both cases, partially matching monitor sets does not wake-up the waiting thread. It is also important to note that in the case of external scheduling, as for routine calls, the order of parameters is important; \code{waitfor(f,a,b)} and \code{waitfor(f,b,a)} are to distinct waiting condition.
      873While the equivalent can happen when using internal scheduling, the fact that conditions are specific to a set of monitors means that users have to use two different condition variables. In both cases, partially matching monitor sets does not wake up the waiting thread. It is also important to note that in the case of external scheduling, as for routine calls, the order of parameters is irrelevant; \code{waitfor(f,a,b)} and \code{waitfor(f,b,a)} are indistinguishable waiting conditions.
    868874
    869875% ======================================================================
     
    873879% ======================================================================
    874880
    875 Syntactically, the \code{waitfor} statement takes a function identifier and a set of monitors. While the set of monitors can be any list of expression, the function name is more restricted. This is because the compiler validates at compile time the validity of the waitfor statement. It checks that the set of monitor passed in matches the requirements for a function call. Listing \ref{lst:waitfor} shows various usage of the waitfor statement and which are acceptable. The choice of the function type is made ignoring any non-\code{mutex} parameter. One limitation of the current implementation is that it does not handle overloading.
      881Syntactically, the \code{waitfor} statement takes a function identifier and a set of monitors. While the set of monitors can be any list of expressions, the function name is more restricted because the compiler validates at compile time the validity of the function type and the parameters used with the \code{waitfor} statement. It checks that the set of monitors passed in matches the requirements for a function call. Listing \ref{lst:waitfor} shows various uses of the \code{waitfor} statement and which are acceptable. The choice of the function type is made ignoring any non-\code{mutex} parameter. One limitation of the current implementation is that it does not handle overloading.
    876882\begin{figure}
    877883\begin{cfacode}
     
    898904        waitfor(f2, a1, a2); //Incorrect : Mutex arguments don't match
    899905        waitfor(f1, 1);      //Incorrect : 1 not a mutex argument
    900         waitfor(f4, a1);     //Incorrect : f9 not a function
    901         waitfor(*fp, a1 );   //Incorrect : fp not a identifier
     906        waitfor(f9, a1);     //Incorrect : f9 function does not exist
     907        waitfor(*fp, a1 );   //Incorrect : fp not an identifier
    902908        waitfor(f4, a1);     //Incorrect : f4 ambiguous
    903909
     
    909915\end{figure}
    910916
    911 Finally, for added flexibility, \CFA supports constructing complex waitfor mask using the \code{or}, \code{timeout} and \code{else}. Indeed, multiple \code{waitfor} can be chained together using \code{or}; this chain will form a single statement which will baton-pass to any one function that fits one of the function+monitor set which was passed in. To eanble users to tell which was the accepted function, \code{waitfor}s are followed by a statement (including the null statement \code{;}) or a compound statement. When multiple \code{waitfor} are chained together, only the statement corresponding to the accepted function is executed. A \code{waitfor} chain can also be followed by a \code{timeout}, to signify an upper bound on the wait, or an \code{else}, to signify that the call should be non-blocking, that is only check of a matching function already arrived and return immediately otherwise. Any and all of these clauses can be preceded by a \code{when} condition to dynamically construct the mask based on some current state. Listing \ref{lst:waitfor2}, demonstrates several complex masks and some incorrect ones.
      917Finally, for added flexibility, \CFA supports constructing complex \code{waitfor} masks using the \code{or}, \code{timeout} and \code{else} clauses. Indeed, multiple \code{waitfor} clauses can be chained together using \code{or}; this chain forms a single statement that baton-passes to any one function matching one of the function/monitor sets passed in. To enable users to tell which function is accepted, \code{waitfor}s are followed by a statement (including the null statement \code{;}) or a compound statement. When multiple \code{waitfor} clauses are chained together, only the statement corresponding to the accepted function is executed. A \code{waitfor} chain can also be followed by a \code{timeout}, to signify an upper bound on the wait, or an \code{else}, to signify that the call should be non-blocking, that is, it only checks whether a matching function call has already arrived and otherwise returns immediately. Any and all of these clauses can be preceded by a \code{when} condition to dynamically construct the mask based on some current state. Listing \ref{lst:waitfor2} demonstrates several complex masks and some incorrect ones.
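To make the description concrete, the following schematic sketch (not one of the listing \ref{lst:waitfor2} examples) shows one plausible shape of such a chain; the \code{delay} parameter and its \code{Duration} type are assumptions used only for illustration.
\begin{cfacode}
monitor M {};
void f( M & mutex m );
void g( M & mutex m );

void dispatch( M & mutex m, bool acceptF, bool acceptG, Duration delay ) {
	   when( acceptF ) waitfor( f, m ) { /*statement run if f is accepted*/ }
	or when( acceptG ) waitfor( g, m ) { /*statement run if g is accepted*/ }
	or timeout( delay )                { /*no matching call before the delay*/ }
}
\end{cfacode}
Replacing the \code{timeout} clause with an \code{or else} clause would instead make the statement non-blocking, returning immediately when no matching call is pending.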
    912918
    913919\begin{figure}
     
    973979\label{lst:waitfor2}
    974980\end{figure}
     981
     982% ======================================================================
     983% ======================================================================
     984\subsection{Waiting for the destructor}
     985% ======================================================================
     986% ======================================================================
     987An interesting use for the \code{waitfor} statement is destructor semantics. Indeed, the \code{waitfor} statement can accept any \code{mutex} routine, which includes the destructor (see section \ref{data}). However, with the semantics discussed until now, waiting for the destructor does not make any sense since using an object after its destructor is called is undefined behaviour. The simplest approach is to disallow \code{waitfor} on a destructor. However, a more expressive approach is to flip execution ordering when waiting for the destructor, meaning that waiting for the destructor allows the destructor to run after the current \code{mutex} routine, similarly to how a condition is signalled.
     988\begin{figure}
     989\begin{cfacode}
     990monitor Executer {};
     991struct  Action;
     992
     993void ^?{}   (Executer & mutex this);
     994void execute(Executer & mutex this, const Action & );
     995void run    (Executer & mutex this) {
     996        while(true) {
     997                   waitfor(execute, this);
     998                or waitfor(^?{}   , this) {
     999                        break;
     1000                }
     1001        }
     1002}
     1003\end{cfacode}
      1004\caption{Example of an executor which executes actions in series until the destructor is called.}
     1005\label{lst:dtor-order}
     1006\end{figure}
     1007For example, listing \ref{lst:dtor-order} shows an example of an executor with an infinite loop, which waits for the destructor to break out of this loop. Switching the semantic meaning introduces an idiomatic way to terminate a task and/or wait for its termination via destruction.
  • doc/proposals/concurrency/text/future.tex

    r136ccd7 r4ee36bf0  
    55% ======================================================================
    66
    7 Concurrency and parallelism is still a very active field that strongly benefits from hardware advances. As such certain features that aren't necessarily mature enough in their current state could become relevant in the lifetime of \CFA.
    8 \section{Non-Blocking IO}
     7\section{Flexible Scheduling} \label{futur:sched}
    98
    109
    11 \section{Other concurrency tools}
     10\section{Non-Blocking IO} \label{futur:nbio}
     11While most of the parallelism tools
     12However, many modern workloads are not bound on computation but on IO operations, an common case being webservers and XaaS (anything as a service). These type of workloads often require significant engineering around amortising costs of blocking IO operations. While improving throughtput of these operations is outside what \CFA can do as a language, it can help users to make better use of the CPU time otherwise spent waiting on IO operations. The current trend is to use asynchronous programming using tools like callbacks and/or futurs and promises\cit. However, while these are valid solutions, they lead to code that is harder to read and maintain because it is much less linear
    1213
    1314
    14 \section{Implicit threading}
    15 % Finally, simpler applications can benefit greatly from having implicit parallelism. That is, parallelism that does not rely on the user to write concurrency. This type of parallelism can be achieved both at the language level and at the system level.
    16 %
    17 % \begin{center}
    18 % \begin{tabular}[t]{|c|c|c|}
    19 % Sequential & System Parallel & Language Parallel \\
    20 % \begin{lstlisting}
    21 % void big_sum(int* a, int* b,
    22 %                int* out,
    23 %                size_t length)
    24 % {
    25 %       for(int i = 0; i < length; ++i ) {
    26 %               out[i] = a[i] + b[i];
    27 %       }
    28 % }
    29 %
    30 %
    31 %
    32 %
    33 %
    34 % int* a[10000];
    35 % int* b[10000];
    36 % int* c[10000];
    37 % //... fill in a and b ...
    38 % big_sum(a, b, c, 10000);
    39 % \end{lstlisting} &\begin{lstlisting}
    40 % void big_sum(int* a, int* b,
    41 %                int* out,
    42 %                size_t length)
    43 % {
    44 %       range ar(a, a + length);
    45 %       range br(b, b + length);
    46 %       range or(out, out + length);
    47 %       parfor( ai, bi, oi,
    48 %       [](int* ai, int* bi, int* oi) {
    49 %               oi = ai + bi;
    50 %       });
    51 % }
    52 %
    53 % int* a[10000];
    54 % int* b[10000];
    55 % int* c[10000];
    56 % //... fill in a and b ...
    57 % big_sum(a, b, c, 10000);
    58 % \end{lstlisting}&\begin{lstlisting}
    59 % void big_sum(int* a, int* b,
    60 %                int* out,
    61 %                size_t length)
    62 % {
    63 %       for (ai, bi, oi) in (a, b, out) {
    64 %               oi = ai + bi;
    65 %       }
    66 % }
    67 %
    68 %
    69 %
    70 %
    71 %
    72 % int* a[10000];
    73 % int* b[10000];
    74 % int* c[10000];
    75 % //... fill in a and b ...
    76 % big_sum(a, b, c, 10000);
    77 % \end{lstlisting}
    78 % \end{tabular}
    79 % \end{center}
    80 %
     15
     16\section{Other concurrency tools} \label{futur:tools}
    8117
    8218
    83 \section{Multiple Paradigms}
     19\section{Implicit threading} \label{futur:implcit}
      20Simpler applications can benefit greatly from having implicit parallelism. That is, parallelism that does not rely on the user to write concurrency. This type of parallelism can be achieved both at the language level and at the library level. The canonical example of implicit parallelism is the parallel for loop, which is the simplest example of a divide-and-conquer algorithm\cit. Listing \ref{lst:parfor} shows three different code examples that accomplish pointwise sums of large arrays. Note that none of these examples explicitly declares any concurrency or parallelism objects.
     21
     22\begin{figure}
     23\begin{center}
     24\begin{tabular}[t]{|c|c|c|}
     25Sequential & Library Parallel & Language Parallel \\
     26\begin{cfacode}[tabsize=3]
     27void big_sum(
     28        int* a, int* b,
     29        int* o,
     30        size_t len)
     31{
     32        for(
     33                int i = 0;
     34                i < len;
     35                ++i )
     36        {
     37                o[i]=a[i]+b[i];
     38        }
     39}
    8440
    8541
    86 \section{Transactions}
     42
     43
     44
     45int* a[10000];
     46int* b[10000];
     47int* c[10000];
     48//... fill in a & b
     49big_sum(a,b,c,10000);
     50\end{cfacode} &\begin{cfacode}[tabsize=3]
     51void big_sum(
     52        int* a, int* b,
     53        int* o,
     54        size_t len)
     55{
     56        range ar(a, a+len);
     57        range br(b, b+len);
     58        range or(o, o+len);
     59        parfor( ai, bi, oi,
     60        [](     int* ai,
     61                int* bi,
     62                int* oi)
     63        {
     64                oi=ai+bi;
     65        });
     66}
     67
     68
     69int* a[10000];
     70int* b[10000];
     71int* c[10000];
     72//... fill in a & b
     73big_sum(a,b,c,10000);
     74\end{cfacode}&\begin{cfacode}[tabsize=3]
     75void big_sum(
     76        int* a, int* b,
     77        int* o,
     78        size_t len)
     79{
     80        parfor (ai,bi,oi)
     81            in (a, b, o )
     82        {
     83                oi = ai + bi;
     84        }
     85}
     86
     87
     88
     89
     90
     91
     92
     93int* a[10000];
     94int* b[10000];
     95int* c[10000];
     96//... fill in a & b
     97big_sum(a,b,c,10000);
     98\end{cfacode}
     99\end{tabular}
     100\end{center}
     101\caption{For loop to sum numbers: Sequential, using library parallelism and language parallelism.}
     102\label{lst:parfor}
     103\end{figure}
     104
     105Implicit parallelism is a general solution and therefore is
     106
     107\section{Multiple Paradigms} \label{futur:paradigms}
     108
     109
     110\section{Transactions} \label{futur:transaction}
     111Concurrency and parallelism is still a very active field that strongly benefits from hardware advances. As such certain features that aren't necessarily mature enough in their current state could become relevant in the lifetime of \CFA.
  • doc/proposals/concurrency/text/internals.tex

    r136ccd7 r4ee36bf0  
    11
    22\chapter{Behind the scene}
    3 
    4 
    5 % ======================================================================
    6 % ======================================================================
    7 \section{Implementation Details: Interaction with polymorphism}
    8 % ======================================================================
    9 % ======================================================================
    10 Depending on the choice of semantics for when monitor locks are acquired, interaction between monitors and \CFA's concept of polymorphism can be complex to support. However, it is shown that entry-point locking solves most of the issues.
    11 
    12 First of all, interaction between \code{otype} polymorphism and monitors is impossible since monitors do not support copying. Therefore, the main question is how to support \code{dtype} polymorphism. Since a monitor's main purpose is to ensure mutual exclusion when accessing shared data, this implies that mutual exclusion is only required for routines that do in fact access shared data. However, since \code{dtype} polymorphism always handles incomplete types (by definition), no \code{dtype} polymorphic routine can access shared data since the data requires knowledge about the type. Therefore, the only concern when combining \code{dtype} polymorphism and monitors is to protect access to routines.
    13 
    14 Before looking into complex control-flow, it is important to present the difference between the two acquiring options : callsite and entry-point locking, i.e. acquiring the monitors before making a mutex routine call or as the first operation of the mutex routine-call. For example:
      3There are several challenges specific to \CFA when implementing concurrency. These challenges are direct results of \gls{bulk-acq} and loose object definitions. These two constraints are the root cause of most design decisions in the implementation. Furthermore, to avoid the headaches of dynamically allocating memory in a concurrent environment, the internal-scheduling design is (almost) entirely free of mallocs and other dynamic memory allocation schemes. This is to avoid the chicken-and-egg problem \cite{Chicken} of having a memory allocator that relies on the threading system and a threading system that relies on the runtime. This extra goal means that memory management is a constant concern in the design of the system.
     4
      5The main memory concern for concurrency is queues. All blocking operations are made by parking threads onto queues. These queues need to be intrusive\cit to avoid the need for memory allocation, which entails that all the fields needed to keep track of a blocked thread must be embedded in the thread itself. Since many concurrency operations can use an unbound amount of memory (depending on \gls{bulk-acq}), statically defining information in the intrusive fields of threads is insufficient. The only variable-sized container that does not require memory allocation is the call stack, which is heavily used in the implementation of internal scheduling; in particular, the GCC variable-length array extension is used extensively.
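As a minimal sketch of what such an intrusive queue looks like (the names \code{thread_desc}, \code{thread_queue} and the helper routines are assumptions for illustration, not the actual \CFA runtime definitions), the link field lives inside the thread descriptor itself, so parking and unparking a thread never allocates:
\begin{cfacode}
//Sketch only: intrusive FIFO of threads, no dynamic allocation required.
#include <stddef.h>

struct thread_desc {
	struct thread_desc * next;	//intrusive link, owned by whichever queue currently holds the thread
	//... registers, stack pointer, state, etc.
};

struct thread_queue {
	struct thread_desc *  head;
	struct thread_desc ** tail;	//points at the last link field, for O(1) append
};

static void init( thread_queue * q ) {
	q->head = NULL;
	q->tail = &q->head;
}

static void push( thread_queue * q, thread_desc * t ) {
	t->next = NULL;
	*q->tail = t;
	q->tail  = &t->next;
}

static thread_desc * pop( thread_queue * q ) {
	thread_desc * t = q->head;
	if( t ) {
		q->head = t->next;
		if( !q->head ) q->tail = &q->head;	//queue became empty
	}
	return t;
}
\end{cfacode}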
     6
      7Since stack allocation is based around scope, the first step of the implementation is to identify the scopes that are available to store the information, and which of these can have a variable length. The threads and the conditions both allow a fixed amount of memory to be stored, while mutex routines and the actual blocking call allow for an unbound amount (though the latter is preferable in terms of performance).
     8
      9Note that since the major contributions of this thesis are extending monitor semantics to \gls{bulk-acq} and loose object definitions, any challenges that do not result from these characteristics of \CFA are considered problems that have already been solved and therefore are not discussed further.
     10
     11% ======================================================================
     12% ======================================================================
     13\section{Mutex routines}
     14% ======================================================================
     15% ======================================================================
     16
      17The first step towards the monitor implementation is simple mutex routines using monitors. In the single-monitor case, this is done using the entry/exit procedure highlighted in listing \ref{lst:entry1}. This entry/exit procedure does not actually have to be extended to support multiple monitors; indeed, it is sufficient to enter/leave monitors one-by-one as long as the order is correct to prevent deadlocks\cit. In \CFA, ordering of monitor acquisition relies on memory ordering. This is sufficient because all objects are guaranteed to have distinct, non-overlapping memory layouts, and mutual exclusion for a monitor is only defined for its lifetime, meaning that destroying a monitor while it is acquired is undefined behavior. When a mutex call is made, the concerned monitors are aggregated into a variable-length pointer array and sorted based on pointer values. This array is preserved for the entire duration of the mutual exclusion and its ordering is reused extensively.
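A minimal sketch of this aggregation step is shown below; the names \code{monitor_desc}, \code{enter} and \code{leave} are assumptions for illustration, but the essential point is that sorting by address gives every mutex call the same global acquisition order, which is what prevents deadlock:
\begin{cfacode}
//Sketch only: acquire a set of monitors in canonical (address-based) order.
void lock_all( monitor_desc * monitors[], unsigned short count ) {
	//insertion sort by pointer value; counts are small in practice
	for( unsigned short i = 1; i < count; i += 1 ) {
		monitor_desc * tmp = monitors[i];
		unsigned short j = i;
		for( ; j > 0 && monitors[j - 1] > tmp; j -= 1 ) {
			monitors[j] = monitors[j - 1];
		}
		monitors[j] = tmp;
	}
	for( unsigned short i = 0; i < count; i += 1 ) enter( monitors[i] );
}

void unlock_all( monitor_desc * monitors[], unsigned short count ) {
	//the sorted array is kept for the whole critical section and released in reverse order
	for( unsigned short i = count; i > 0; i -= 1 ) leave( monitors[i - 1] );
}
\end{cfacode}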
    1518\begin{figure}
    16 \label{fig:locking-site}
     19\begin{multicols}{2}
     20Entry
     21\begin{pseudo}
     22if monitor is free
     23        enter
     24elif already own the monitor
     25        continue
     26else
     27        block
     28increment recursions
     29\end{pseudo}
     30\columnbreak
     31Exit
     32\begin{pseudo}
     33decrement recursion
     34if recursion == 0
     35        if entry queue not empty
     36                wake-up thread
     37\end{pseudo}
     38\end{multicols}
     39\caption{Initial entry and exit routine for monitors}
     40\label{lst:entry1}
     41\end{figure}
     42
      43\subsection{Details: Interaction with polymorphism}
     44Depending on the choice of semantics for when monitor locks are acquired, interaction between monitors and \CFA's concept of polymorphism can be more complex to support. However, it is shown that entry-point locking solves most of the issues.
     45
      46First of all, interaction between \code{otype} polymorphism and monitors is impossible since monitors do not support copying. Therefore, the main question is how to support \code{dtype} polymorphism. It is important to present the difference between the two acquiring options: callsite and entry-point locking, i.e., acquiring the monitors before making a mutex routine call or as the first operation of the mutex routine call. For example:
     47\begin{figure}[H]
    1748\begin{center}
    18 \setlength\tabcolsep{1.5pt}
    1949\begin{tabular}{|c|c|c|}
    2050Mutex & \gls{callsite-locking} & \gls{entry-point-locking} \\
     
    6797\end{center}
    6898\caption{Callsite vs entry-point locking for mutex calls}
    69 \end{figure}
    70 
    71 
    72 Note the \code{mutex} keyword relies on the type system, which means that in cases where a generic monitor routine is actually desired, writing a mutex routine is possible with the proper trait, which is possible because monitors are designed in terms a trait. For example:
     99\label{fig:locking-site}
     100\end{figure}
     101
      102Note that the \code{mutex} keyword relies on the type system, which means that in cases where a generic monitor routine is desired, writing a mutex routine is possible with the proper trait, for example:
    73103\begin{cfacode}
    74104//Incorrect: T is not a monitor
     
    81111\end{cfacode}
    82112
    83 
    84 % ======================================================================
    85 % ======================================================================
    86 \section{Internal scheduling: Implementation} \label{inschedimpl}
    87 % ======================================================================
    88 % ======================================================================
    89 There are several challenges specific to \CFA when implementing internal scheduling. These challenges are direct results of \gls{bulk-acq} and loose object definitions. These two constraints are to root cause of most design decisions in the implementation of internal scheduling. Furthermore, to avoid the head-aches of dynamically allocating memory in a concurrent environment, the internal-scheduling design is entirely free of mallocs and other dynamic memory allocation scheme. This is to avoid the chicken and egg problem \cite{Chicken} of having a memory allocator that relies on the threading system and a threading system that relies on the runtime. This extra goal, means that memory management is a constant concern in the design of the system.
    90 
    91 The main memory concern for concurrency is queues. All blocking operations are made by parking threads onto queues. These queues need to be intrinsic\cit to avoid the need memory allocation. This entails that all the fields needed to keep track of all needed information. Since internal scheduling can use an unbound amount of memory (depending on \gls{bulk-acq}) statically defining information information in the intrusive fields of threads is insufficient. The only variable sized container that does not require memory allocation is the callstack, which is heavily used in the implementation of internal scheduling. Particularly the GCC extension variable length arrays which is used extensively.
    92 
    93 Since stack allocation is based around scope, the first step of the implementation is to identify the scopes that are available to store the information, and which of these can have a variable length. In the case of external scheduling, the threads and the condition both allow a fixed amount of memory to be stored, while mutex-routines and the actual blocking call allow for an unbound amount (though adding too much to the mutex routine stack size can become expansive faster).
    94 
    95 The following figure is the traditionnal illustration of a monitor :
      113Both entry-point and callsite locking are valid implementations. The current \CFA implementation uses entry-point locking because it seems to require less work if done using \gls{raii}, effectively transferring the burden of implementation to object construction/destruction. The same could be said of callsite locking, the difference being that the latter does not necessarily have an existing scope that matches exactly the scope of the mutual exclusion, i.e., the function body.
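The following sketch illustrates the idea; the guard type, the \code{lock_all}/\code{unlock_all} helpers, \code{get_monitor} and the monitor type \code{M} are hypothetical names used only for illustration. The generated prologue of a \code{mutex} routine constructs a guard whose constructor acquires the monitor set and whose destructor releases it on every exit path:
\begin{cfacode}
//Sketch only: entry-point locking expressed with RAII.
struct monitor_guard_t {
	monitor_desc ** m;	//sorted monitor set held for this call
	int count;
};

void ?{}( monitor_guard_t & this, monitor_desc * m[], int count ) {
	this.m = m;
	this.count = count;
	lock_all( m, count );	//acquire on routine entry
}

void ^?{}( monitor_guard_t & this ) {
	unlock_all( this.m, this.count );	//release on every exit path
}

void foo( M & mutex m ) {
	//conceptually generated prologue:
	//	monitor_desc * mons[] = { get_monitor( m ) };
	//	monitor_guard_t guard = { mons, 1 };
	//... body runs with mutual exclusion ...
}	//guard destructor implicitly releases the monitor here
\end{cfacode}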
     114
     115% ======================================================================
     116% ======================================================================
     117\section{Threading} \label{impl:thread}
     118% ======================================================================
     119% ======================================================================
     120
      121Figure \ref{fig:system1} shows a high-level picture of the \CFA runtime system with regard to concurrency.
     122
     123\begin{figure}
     124\begin{center}
     125{\resizebox{\textwidth}{!}{\input{system.pstex_t}}}
     126\end{center}
     127\caption{Overview of the entire system}
     128\label{fig:system1}
     129\end{figure}
     130
     131\subsection{Context Switching}
      132As mentioned in section \ref{coroutine}, coroutines are a stepping stone for implementing threading, because they share the same mechanism for context-switching between different stacks. To improve performance and simplicity, context-switching is implemented using the following assumption: all context-switches happen inside a specific function call. This assumption means that the basic recipe for a context-switch is simply to copy all callee-saved registers onto the stack and then switch the stack registers with those of the target coroutine/thread. Note that the instruction pointer can be left untouched since the context-switch always happens inside the same function. In the case of coroutines, that is the entire story. Threads, however, do not simply context-switch between each other directly; they context-switch to processors, which is where the scheduling happens. This method is called a 2-step context-switch and has the advantage of a clear distinction between user code and the ``kernel'' where scheduling and other system operations happen. Obviously, this doubles the context-switch cost because threads must context-switch to an intermediate stack. However, the performance of the 2-step context-switch is still superior to a \code{pthread_yield} (see section \ref{results}). Additionally, for users in need of optimal performance, it is important to note that having a 2-step context-switch as the default does not prevent \CFA from offering a 1-step context-switch to use manually (or as part of monitors). This option is not currently present in \CFA, but the changes required to add it are strictly additive.
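A sketch of this 2-step scheme is given below; \code{ctx_switch}, \code{this_thread}, \code{this_processor}, \code{halt} and the ready-queue helpers are assumed names, but the control flow matches the description above: the yielding thread first switches to the processor stack, where the scheduling decision is made, and the processor then switches to the chosen thread:
\begin{cfacode}
//Sketch only: a 2-step context switch between user threads via the processor stack.
void yield() {
	thread_desc * self = this_thread();
	push_ready( self );	//self can be picked up again by any processor
	//step 1: save callee-saved registers and switch onto the processor ("kernel") stack
	ctx_switch( &self->context, &this_processor()->context );
	//execution resumes here when some processor switches back to this thread
}

void processor_main( processor * proc ) {	//runs on the processor coroutine
	while( !proc->do_terminate ) {
		thread_desc * next = pop_ready();	//the scheduling decision happens here
		if( next ) {
			//step 2: switch onto the chosen user thread's stack
			ctx_switch( &proc->context, &next->context );
		} else {
			halt( proc );	//nothing to run: quiesce the kernel thread
		}
	}
}
\end{cfacode}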
     133
     134\subsection{Processors}
      135Parallelism in \CFA is built around using processors to specify how much parallelism is desired. \CFA processors are object wrappers around kernel threads, specifically pthreads in the current implementation of \CFA. Indeed, any parallelism must go through operating-system libraries. However, \glspl{cfathread} are still the main source of concurrency; processors are simply the underlying source of parallelism. Indeed, processor kernel threads simply fetch a user-level thread from the scheduler and run it; they are effectively executors for user threads. The main benefit of this approach is that it offers a well-defined boundary between kernel code and user code, for example kernel thread quiescing, scheduling and interrupt handling. Processors internally use coroutines to take advantage of the existing context-switching semantics.
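As a rough sketch (all names are again assumptions), creating a processor then amounts to creating a pthread whose entry function runs the fetch-and-run loop from the previous sketch:
\begin{cfacode}
//Sketch only: a processor is an object wrapper around a kernel thread (a pthread).
#include <pthread.h>

static void * processor_kernel_main( void * arg ) {
	processor * proc = (processor *)arg;
	processor_main( proc );	//fetch user-level threads from the scheduler and run them
	return NULL;
}

void start( processor * proc ) {
	pthread_create( &proc->kernel_thread, NULL, processor_kernel_main, proc );
}

void stop( processor * proc ) {
	proc->do_terminate = true;	//observed by the processor's run loop
	pthread_join( proc->kernel_thread, NULL );
}
\end{cfacode}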
     136
     137\subsection{Stack management}
      138One of the challenges of this system is to reduce the footprint as much as possible. Specifically, all pthreads created also have a stack created with them, which should be used as much as possible. Normally, coroutines also create their own stack to run on; however, in the case of the coroutines used for processors, these coroutines run directly on the kernel thread stack, effectively stealing the processor stack. The exception to this rule is the Main Processor, i.e., the initial kernel thread that is given to any program. In order to respect user expectations, the stack of the initial kernel thread, the main stack of the program, is used by the main user thread rather than the main processor.
     139
     140\subsection{Preemption}
      141Finally, an important aspect of any complete threading system is preemption. As mentioned in chapter \ref{basics}, preemption introduces an extra degree of uncertainty, which enables users to have multiple threads interleave transparently, rather than having to cooperate among threads for proper scheduling and CPU distribution. Indeed, preemption is desirable because it adds a degree of isolation between tasks. In a fully cooperative system, any thread that runs into a long loop can starve other threads, while in a preemptive system starvation can still occur but it does not rely on every thread having to yield or block on a regular basis, which significantly reduces the programmer burden. Obviously, preemption is not optimal for every workload; however, any preemptive system can become a cooperative system by making the time-slices extremely large, which is why \CFA uses a preemptive threading system.
     142
      143Preemption in \CFA is based on kernel timers, which are used to run a discrete-event simulation. Every processor keeps track of the current time and registers an expiration time with the preemption system. When the preemption system receives a change in preemption, it sorts these expiration times in a list and sets a kernel timer for the closest one, effectively stepping through preemption events on each signal sent by the timer. These timers use the Linux signal {\tt SIGALRM}, which is delivered to the process. This is important because, when delivering signals to a process, the kernel documentation states that the signal can be delivered to any kernel thread for which the signal isn't blocked, i.e.:
     144\begin{quote}
     145A process-directed signal may be delivered to any one of the threads that does not currently have the signal blocked. If more than one of the threads has the signal unblocked, then the kernel chooses an arbitrary thread to which to deliver the signal.
     146SIGNAL(7) - Linux Programmer's Manual
     147\end{quote}
      148For the sake of simplicity, and in order to prevent the case of two threads receiving alarms simultaneously, \CFA programs block the {\tt SIGALRM} signal on every thread except one. Now, because of how involuntary context-switches are handled, the kernel thread handling {\tt SIGALRM} cannot also be a processor thread.
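A sketch of how the discrete-event list drives the kernel timer is shown below; the alarm-list helpers and types are assumed names, but the kernel interaction is simply \code{setitimer} re-armed for the earliest pending expiration, which then delivers {\tt SIGALRM}:
\begin{cfacode}
//Sketch only: arm the kernel timer for the earliest pending preemption event.
#include <sys/time.h>

void arm_next_alarm( alarm_list * alarms ) {
	alarm_node * next = peek_earliest( alarms );	//list is kept sorted by expiration time
	struct itimerval val = { { 0, 0 }, { 0, 0 } };	//a zero value disarms the timer
	if( next ) {
		val.it_value = to_timeval( next->deadline - current_time() );
	}
	setitimer( ITIMER_REAL, &val, NULL );	//SIGALRM is delivered to the process on expiration
}
\end{cfacode}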
     149
      150Involuntary context-switching is done by sending {\tt SIGUSR1} to the corresponding processor and having the thread yield from inside the signal handler. Effectively, the thread context-switches away from the signal handler back to the kernel, and the signal-handler frame is unwound when the thread is scheduled again. This means that a signal handler can start on one kernel thread and terminate on a second kernel thread (but the same user thread). It is important to note that signal handlers save and restore signal masks because user-thread migration can cause a signal mask to migrate from one kernel thread to another. This is only a problem if the kernel threads among which a user thread can migrate differ in terms of signal masks. However, since the kernel thread handling preemption requires a different signal mask, executing user threads on the kernel alarm thread can cause deadlocks. For this reason, the alarm thread is in a tight loop around a system call to \code{sigwait}, or more specifically \code{sigwaitinfo}, requiring very little CPU time for preemption. One final detail about the alarm thread is how to wake it when additional communication is required (e.g., on thread termination). This is also done using {\tt SIGALRM}, but sent through \code{pthread_sigqueue}. Indeed, \code{sigwait} can differentiate signals sent from \code{pthread_sigqueue} from signals sent from alarms or the kernel.
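The following sketch shows the shape of that alarm thread (the helper routines, the global alarm list and the control-message handling are assumptions for illustration): it blocks in \code{sigwaitinfo}, forwards preemption requests to the appropriate processor with {\tt SIGUSR1}, and distinguishes messages queued with \code{pthread_sigqueue} by their \code{si_code}:
\begin{cfacode}
//Sketch only: the dedicated alarm thread; every other kernel thread blocks SIGALRM.
#include <signal.h>
#include <pthread.h>

static void * alarm_thread_main( void * arg ) {
	sigset_t mask;
	sigemptyset( &mask );
	sigaddset( &mask, SIGALRM );
	siginfo_t info;
	for( ;; ) {
		if( sigwaitinfo( &mask, &info ) < 0 ) continue;	//interrupted: retry
		if( info.si_code == SI_QUEUE ) {
			//queued with pthread_sigqueue: a control message (e.g. thread termination)
			if( handle_control_message( info.si_value ) ) break;
		} else {
			//the kernel timer expired: preempt the processor whose time-slice ended
			pthread_kill( expired_processor_kthread(), SIGUSR1 );
		}
		arm_next_alarm( &alarms );	//re-arm for the next expiration time
	}
	return NULL;
}
\end{cfacode}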
     151
      152\subsection{Scheduler}
      153Finally, an aspect that has not yet been mentioned is the scheduling algorithm. Currently, the \CFA scheduler uses a single ready queue for all processors. While this is not the highest-performance algorithm, it has the significant advantage of being robust to heterogeneous workloads; being general purpose means favouring robustness over raw speed. This simple scheduling approach is sufficient for the context of this thesis. As mentioned in section \ref{futur:sched}, among the most pressing updates to the \CFA scheduler, once clusters are fully supported, is moving from the single ready queue to multiple queues with work-stealing, bearing in mind that work sharing can lead to a higher standard deviation in performance.
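As a minimal sketch of the current approach (the thread-queue type is the intrusive queue sketched earlier; the pthread mutex is used only to keep the sketch self-contained, the real runtime uses its own internal lock), the ready queue is a single lock-protected FIFO shared by every processor:
\begin{cfacode}
//Sketch only: a single global ready queue shared by all processors.
#include <pthread.h>

static pthread_mutex_t ready_lock = PTHREAD_MUTEX_INITIALIZER;
static thread_queue ready_queue;	//intrusive FIFO of thread_desc

void push_ready( thread_desc * t ) {
	pthread_mutex_lock( &ready_lock );
	push( &ready_queue, t );
	pthread_mutex_unlock( &ready_lock );
}

thread_desc * pop_ready( void ) {
	pthread_mutex_lock( &ready_lock );
	thread_desc * t = pop( &ready_queue );
	pthread_mutex_unlock( &ready_lock );
	return t;
}
\end{cfacode}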
     169
     170
     171% ======================================================================
     172% ======================================================================
     173\section{Internal scheduling} \label{impl:intsched}
     174% ======================================================================
     175% ======================================================================
      176To ease understanding, monitors, like many other concepts, are generally represented graphically. While non-scheduled monitors are simple enough for a basic graphical representation to suffice, internal scheduling is complex enough to justify a more detailed visual representation. The following figure is the traditional illustration of a monitor:
    96177
    97178\begin{center}
     
    99180\end{center}
    100181
    101 For \CFA, the previous picture does not have support for blocking multiple monitors on a single condition. To support \gls{bulk-acq} two changes to this picture are required. First, it doesn't make sense to tie the condition to a single monitor since blocking two monitors as one would require arbitrarily picking a monitor to hold the condition. Secondly, the object waiting on the conditions and AS-stack cannot simply contain the waiting thread since a single thread can potentially wait on multiple monitors. As mentionned in section \ref{inschedimpl}, the handling in multiple monitors is done by partially passing, which entails that each concerned monitor needs to have a node object. However, for waiting on the condition, since all threads need to wait together, a single object needs to be queued in the condition. Moving out the condition and updating the node types yields :
      182This picture has several components, the two most important being the entry queue and the AS-stack. The entry queue is an (almost) FIFO list where threads waiting to enter are parked, while the AS-stack is a FILO list used for threads that have been signaled or otherwise marked as running next. For \CFA, the previous picture does not have support for blocking multiple monitors on a single condition. To support \gls{bulk-acq}, two changes to this picture are required. First, it does not make sense to tie the condition to a single monitor, since blocking two monitors as one would require arbitrarily picking a monitor to hold the condition. Secondly, the objects waiting in the condition and the AS-stack cannot simply be the waiting thread, since a single thread can potentially wait on multiple monitors. As mentioned in section \ref{intsched}, multiple monitors are handled by partially passing ownership, which entails that each concerned monitor needs to have a node object. However, for waiting on the condition, since all threads need to wait together, a single object needs to be queued in the condition. Moving out the condition and updating the node types yields:
    102183
    103184\begin{center}
     
    105186\end{center}
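The node types implied by this picture can be sketched as follows (type and field names are illustrative assumptions): a single node is queued in the condition on behalf of the waiting thread, and it points at one criterion per monitor, which is what actually sits in each monitor's AS-stack:
\begin{cfacode}
//Sketch only: the per-wait data structures implied by the picture above.
struct condition_criterion {
	bool ready;	//has ownership of this monitor been transferred yet?
	struct monitor_desc   * target;	//the monitor whose AS-stack holds this criterion
	struct condition_node * owner;	//back-pointer to the node waiting on the condition
};

struct condition_node {
	struct thread_desc * waiting_thread;	//the blocked thread
	struct condition_criterion * criteria;	//one criterion per monitor in the bulk acquire
	unsigned short count;	//number of monitors
};
\end{cfacode}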
    106187
    107 \newpage
    108 
    109 This picture and the proper entry and leave algorithms is the fundamental implementation of internal scheduling.
    110 
     188This picture and the proper entry and leave algorithms is the fundamental implementation of internal scheduling (see listing \ref{lst:entry2}).
     189
     190\begin{figure}[b]
    111191\begin{multicols}{2}
    112192Entry
    113 \begin{pseudo}[numbers=left]
     193\begin{pseudo}
    114194if monitor is free
    115195        enter
    116 elif I already own the monitor
     196elif already own the monitor
    117197        continue
    118198else
     
    123203\columnbreak
    124204Exit
    125 \begin{pseudo}[numbers=left, firstnumber=8]
     205\begin{pseudo}
    126206decrement recursion
    127207if recursion == 0
     
    135215\end{pseudo}
    136216\end{multicols}
    137 
    138 Some important things to notice about the exit routine. The solution discussed in \ref{inschedimpl} can be seen on line 11 of the previous pseudo code. Basically, the solution boils down to having a seperate data structure for the condition queue and the AS-stack, and unconditionally transferring ownership of the monitors but only unblocking the thread when the last monitor has trasnferred ownership. This solution is safe as well as preventing any potential barging.
    139 
    140 % ======================================================================
    141 % ======================================================================
    142 \section{Implementation Details: External scheduling queues}
    143 % ======================================================================
    144 % ======================================================================
    145 To support multi-monitor external scheduling means that some kind of entry-queues must be used that is aware of both monitors. However, acceptable routines must be aware of the entry queues which means they must be stored inside at least one of the monitors that will be acquired. This in turn adds the requirement a systematic algorithm of disambiguating which queue is relavant regardless of user ordering. The proposed algorithm is to fall back on monitors lock ordering and specify that the monitor that is acquired first is the lock with the relevant entry queue. This assumes that the lock acquiring order is static for the lifetime of all concerned objects but that is a reasonable constraint. This algorithm choice has two consequences, the entry queue of the highest priority monitor is no longer a true FIFO queue and the queue of the lowest priority monitor is both required and probably unused. The queue can no longer be a FIFO queue because instead of simply containing the waiting threads in order arrival, they also contain the second mutex. Therefore, another thread with the same highest priority monitor but a different lowest priority monitor may arrive first but enter the critical section after a thread with the correct pairing. Secondly, since it may not be known at compile time which monitor will be the lowest priority monitor, every monitor needs to have the correct queues even though it is probable that half the multi-monitor queues will go unused for the entire duration of the program.
    146 
    147 
    148 \section{Internals}
    149 The complete mask can be pushed to any one, we are in a context where we already have full ownership of (at least) every concerned monitor and therefore monitors will refuse all calls no matter what.
     217\caption{Entry and exit routine for monitors with internal scheduling}
     218\label{lst:entry2}
     219\end{figure}
     220
      221There are some important things to notice about the exit routine. The solution discussed in \ref{intsched} can be seen in the exit routine of listing \ref{lst:entry2}. Basically, the solution boils down to having a separate data structure for the condition queue and the AS-stack, and unconditionally transferring ownership of the monitors but only unblocking the thread when the last monitor has transferred ownership. This solution is deadlock-safe and prevents any potential barging.
     222
      223The data structures used for the AS-stack are reused extensively for external scheduling, but in the case of internal scheduling, the data is allocated using variable-length arrays on the call stack of the \code{wait} and \code{signal_block} routines.
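A sketch of how \code{wait} can stack-allocate that storage is shown below (the helper names are assumptions); the variable-length arrays live on the caller's frame for exactly the duration of the blocking call, so no dynamic allocation is needed:
\begin{cfacode}
//Sketch only: wait() keeps all of its bookkeeping on its own call stack.
void wait( condition & this ) {
	unsigned short count = this.monitor_count;

	//GCC variable-length arrays: no heap allocation, reclaimed automatically on return
	monitor_desc * monitors[ count ];
	condition_criterion criteria[ count ];

	condition_node waiter = { active_thread(), criteria, count };
	//... queue 'waiter' in the condition, push each criterion onto its monitor,
	//    baton-pass the monitors and block until every criterion is marked ready ...
}
\end{cfacode}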
     224
     225% ======================================================================
     226% ======================================================================
     227\section{External scheduling}
     228% ======================================================================
     229% ======================================================================
      230Similarly to internal scheduling, external scheduling for multiple monitors relies on the idea that entry queues are no longer specific to a single monitor, as mentioned in section \ref{extsched}. This means that some kind of entry queue must be used that is aware of both monitors and holds the threads currently waiting to enter the critical section. This challenge is solved for internal scheduling by having the entry queues in conditions no longer be tied to a monitor, effectively allowing conditions to be moved outside of monitors. However, in the case of external scheduling, acceptable routines must be aware of the entry queues, which means they must be stored inside at least one of the monitors that will be acquired. This in turn adds the requirement of a systematic algorithm for disambiguating which monitor holds the relevant queue, regardless of user ordering. The proposed algorithm is to fall back on monitor lock ordering and specify that the monitor that is acquired first is the one with the relevant entry queue. This assumes that the lock acquisition order is static for the lifetime of all concerned objects, but that is a reasonable constraint.
     231
      232This algorithm choice has two consequences: the entry queue of the highest-priority monitor is no longer a true FIFO queue, and the queue of the lowest-priority monitor is both required and probably unused. The queue can no longer be a FIFO queue because, instead of simply containing the waiting threads in order of arrival, the entries also contain a set of monitors. Therefore, another thread whose set contains the same highest-priority monitor but different lower-priority monitors may arrive first, yet enter the critical section after a thread with the correct pairing. Secondly, since it is not known at compile time which monitor will be the lowest-priority monitor, every monitor needs to have the correct queues, even though it is probable that some queues will go unused for the entire duration of the program, for example if a monitor is only ever used in one specific pairing.
     233
     234Therefore, the following modifications need to be made to support external scheduling :
     235\begin{itemize}
      236        \item The threads waiting on the entry queue need to keep track of which routine is trying to enter, and using which set of monitors. The \code{mutex} routine already has all the required information on its stack, so the thread only needs to keep a pointer to that information.
      237        \item The monitors need to keep a mask of acceptable routines. This mask contains, for each acceptable routine, a routine pointer and an array of monitors to go with it (a sketch of this structure is given after this list). It also needs storage to keep track of which routine was accepted. Since this information is not specific to any monitor, the monitors actually contain a pointer to an integer on the stack of the waiting thread. Note that the complete mask can be pushed to any owned monitor: regardless of \code{when} statements, the \code{waitfor} statement is used in a context where the thread already has full ownership of (at least) every concerned monitor, and therefore the monitors refuse all calls no matter what.
      238        \item The entry/exit routines need to be updated as shown in listing \ref{lst:entry3}.
     239\end{itemize}
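As a sketch of this mask (again with assumed names), each clause pairs a routine pointer with the monitor set it requires, and the monitors only ever hold a pointer to this storage, which lives on the waiting thread's stack:
\begin{cfacode}
//Sketch only: the acceptable-routine mask stored on the waiting thread's stack.
struct acceptable_t {
	void (*func)(void);	//the mutex routine this clause accepts
	struct monitor_desc * const * monitors;	//the monitor set that must match the caller's
	unsigned short count;	//number of monitors in the set
	bool is_dtor;	//true when the clause accepts the destructor
};

struct waitfor_mask_t {
	short * accepted;	//index of the matching clause, written by the accepted caller
	struct acceptable_t * clauses;	//array of clauses, one per waitfor clause
	unsigned short size;	//number of clauses
};
\end{cfacode}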
     240
      241Finally, to support the ordering inversion of destructors, the code generation needs to be modified to use a special entry routine. This routine is needed because of the storage requirements of the call-order inversion. Indeed, when waiting for the destructors, storage is needed for the waiting context, and the lifetime of said storage needs to outlive the waiting operation it is needed for. For regular \code{waitfor} statements, the call stack of the routine itself matches this requirement, but this is no longer the case when waiting for the destructor, since the waiting context is pushed onto the AS-stack for later. The waitfor semantics can then be adjusted correspondingly, as seen in listing \ref{lst:entry-dtor}.
     242
     243\begin{figure}
     244\begin{multicols}{2}
     245Entry
     246\begin{pseudo}
     247if monitor is free
     248        enter
     249elif already own the monitor
     250        continue
     251elif matches waitfor mask
     252        push waiter to AS-stack
     253        continue
     254else
     255        block
     256increment recursion
     257\end{pseudo}
     258\columnbreak
     259Exit
     260\begin{pseudo}
     261decrement recursion
     262if recursion == 0
     263        if signal_stack not empty
     264                set_owner to thread
     265                if all monitors ready
     266                        wake-up thread
     267
     268        if entry queue not empty
     269                wake-up thread
     270\end{pseudo}
     271\end{multicols}
     272\caption{Entry and exit routine for monitors with internal scheduling and external scheduling}
     273\label{lst:entry3}
     274\end{figure}
     275
     276\begin{figure}
     277\begin{multicols}{2}
     278Destructor Entry
     279\begin{pseudo}
     280if monitor is free
     281        enter
     282elif already own the monitor
     283        increment recursion
     284        return
     285create wait context
     286if matches waitfor mask
     287        reset mask
     288        push self to AS-stack
     289        baton pass
     290else
     291        wait
     292increment recursion
     293\end{pseudo}
     294\columnbreak
     295Waitfor
     296\begin{pseudo}
     297lock all monitors
     298if matching thread is already there
     299        if found destructor
     300                push destructor to AS-stack
     301                unlock all monitors
     302        else
     303                push self to AS-stack
     304                baton pass
     305        return
     306
     307if non-blocking
     308        Unlock all monitors
     309        Return
     310
     311push self to AS-stack
     312set waitfor mask
     313block
     314return
     315\end{pseudo}
     316\end{multicols}
     317\caption{Pseudo code for the \code{waitfor} routine and the \code{mutex} entry routine for destructors}
     318\label{lst:entry-dtor}
     319\end{figure}
  • doc/proposals/concurrency/text/intro.tex

    r136ccd7 r4ee36bf0  
    33% ======================================================================
    44
    5 This thesis provides a minimal concurrency \acrshort{api} that is simple, efficient and can be reused to build higher-level features. The simplest possible concurrency system is a thread and a lock but this low-level approach is hard to master. An easier approach for users is to support higher-level constructs as the basis of concurrency. Indeed, for highly productive concurrent programming, high-level approaches are much more popular~\cite{HPP:Study}. Examples are task based, message passing and implicit threading. The high-level approach and its minimal \acrshort{api} are tested in a dialect of C, call \CFA. [Is there value to say that this thesis is also an early definition of the \CFA language and library in regards to concurrency?]
      5This thesis provides a minimal concurrency \acrshort{api} that is simple, efficient and can be reused to build higher-level features. The simplest possible concurrency system is a thread and a lock, but this low-level approach is hard to master. An easier approach for users is to support higher-level constructs as the basis of concurrency. Indeed, for highly productive concurrent programming, high-level approaches are much more popular~\cite{HPP:Study}. Examples are task-based, message-passing and implicit threading. The high-level approach and its minimal \acrshort{api} are tested in a dialect of C, called \CFA. Furthermore, the proposed \acrshort{api} doubles as an early definition of the \CFA language and library. This thesis also comes with an implementation of the concurrency library for \CFA, as well as all the required language features added to the source-to-source translator.
    66
    77There are actually two problems that need to be solved in the design of concurrency for a programming language: which concurrency and which parallelism tools are available to the programmer. While these two concepts are often combined, they are in fact distinct, requiring different tools~\cite{Buhr05a}. Concurrency tools need to handle mutual exclusion and synchronization, while parallelism tools are about performance, cost and resource utilization.
  • doc/proposals/concurrency/text/parallelism.tex

    r136ccd7 r4ee36bf0  
    1616
    1717\subsection{Fibers : user-level threads without preemption}
    18 A popular varient of \glspl{uthread} is what is often refered to as \glspl{fiber}. However, \glspl{fiber} do not present meaningful semantical differences with \glspl{uthread}. Advocates of \glspl{fiber} list their high performance and ease of implementation as majors strenghts of \glspl{fiber} but the performance difference between \glspl{uthread} and \glspl{fiber} is controversial, and the ease of implementation, while true, is a weak argument in the context of language design. Therefore this proposal largely ignore fibers.
      18A popular variant of \glspl{uthread} is what is often referred to as \glspl{fiber}. However, \glspl{fiber} do not present meaningful semantic differences with \glspl{uthread}. The significant difference between \glspl{uthread} and \glspl{fiber} is the lack of \gls{preemption} in the latter. Advocates of \glspl{fiber} list their high performance and ease of implementation as major strengths, but the performance difference between \glspl{uthread} and \glspl{fiber} is controversial, and the ease of implementation, while true, is a weak argument in the context of language design. Therefore this proposal largely ignores fibers.
    1919
    2020An example of a language that uses fibers is Go~\cite{Go}
     
    2626
    2727\subsection{Paradigm performance}
    28 While the choice between the three paradigms listed above may have significant performance implication, it is difficult to pindown the performance implications of chosing a model at the language level. Indeed, in many situations one of these paradigms may show better performance but it all strongly depends on the workload. Having a large amount of mostly independent units of work to execute almost guarantess that the \gls{pool} based system has the best performance thanks to the lower memory overhead (i.e., not thread stack per job). However, interactions among jobs can easily exacerbate contention. User-level threads allow fine-grain context switching, which results in better resource utilisation, but a context switch is more expensive and the extra control means users need to tweak more variables to get the desired performance. Finally, if the units of uninterrupted work are large enough the paradigm choice is largely amortised by the actual work done.
    29 
    30 \TODO
      28While the choice between the three paradigms listed above may have significant performance implications, it is difficult to pin down the performance implications of choosing a model at the language level. Indeed, in many situations one of these paradigms may show better performance, but it all strongly depends on the workload. Having a large amount of mostly independent units of work to execute almost guarantees that the \gls{pool}-based system has the best performance, thanks to the lower memory overhead (i.e., no thread stack per job). However, interactions among jobs can easily exacerbate contention. User-level threads allow fine-grain context switching, which results in better resource utilisation, but a context switch is more expensive and the extra control means users need to tweak more variables to get the desired performance. Finally, if the units of uninterrupted work are large enough, the paradigm choice is largely amortised by the actual work done.
    3129
    3230\section{The \protect\CFA\ Kernel : Processors, Clusters and Threads}\label{kernel}
    3331
      32\Glspl{cfacluster} have not been fully implemented in the context of this thesis; currently, \CFA only supports one \gls{cfacluster}, the initial one. The objective of \glspl{cfacluster} is to group \glspl{kthread} with identical settings together. \Glspl{uthread} can be scheduled on the \glspl{kthread} of a given \gls{cfacluster}, allowing organization between \glspl{kthread} and \glspl{uthread}. It is important that \glspl{kthread} belonging to the same \gls{cfacluster} have homogeneous settings, otherwise migrating a \gls{uthread} from one \gls{kthread} to another can cause issues.
    3433
    3534\subsection{Future Work: Machine setup}\label{machine}
     
    3736
    3837\subsection{Paradigms}\label{cfaparadigms}
    39 Given these building blocks, it is possible to reproduce all three of the popular paradigms. Indeed, \glspl{uthread} is the default paradigm in \CFA. However, disabling \glspl{preemption} on the \gls{cfacluster} means \glspl{cfathread} effectively become \glspl{fiber}. Since several \glspl{cfacluster} with different scheduling policy can coexist in the same application, this allows \glspl{fiber} and \glspl{uthread} to coexist in the runtime of an application. Finally, it is possible to build executors for thread pools from \glspl{uthread} or \glspl{fiber}.
      38Given these building blocks, it is possible to reproduce all three of the popular paradigms. Indeed, \glspl{uthread} are the default paradigm in \CFA. However, disabling \gls{preemption} on a \gls{cfacluster} means \glspl{cfathread} effectively become \glspl{fiber}. Since several \glspl{cfacluster} with different scheduling policies can coexist in the same application, this allows \glspl{fiber} and \glspl{uthread} to coexist in the runtime of an application. Finally, it is possible to build executors for thread pools from \glspl{uthread} or \glspl{fiber}.
  • doc/proposals/concurrency/text/together.tex

    r136ccd7 r4ee36bf0  
    3636}
    3737\end{cfacode}
    38 One of the obvious complaints of the previous code snippet (other than its toy-like simplicity) is that it does not handle exit conditions and just goes on for ever. Luckily, the monitor semantics can also be used to clearly enforce a shutdown order in a concise manner :
     38One of the obvious complaints of the previous code snippet (other than its toy-like simplicity) is that it does not handle exit conditions and just goes on forever. Luckily, the monitor semantics can also be used to clearly enforce a shutdown order in a concise manner :
    3939\begin{cfacode}
    4040// Visualization declaration
  • doc/proposals/concurrency/thesis.tex

    r136ccd7 r4ee36bf0  
    3535\usepackage[pagewise]{lineno}
    3636\usepackage{fancyhdr}
     37\usepackage{float}
    3738\renewcommand{\linenumberfont}{\scriptsize\sffamily}
     39\usepackage{siunitx}
     40\sisetup{ binary-units=true }
    3841\input{style}                                                   % bespoke macros used in the document
    3942\usepackage[dvips,plainpages=false,pdfpagelabels,pdfpagemode=UseNone,colorlinks=true,pagebackref=true,linkcolor=blue,citecolor=blue,urlcolor=blue,pagebackref=true,breaklinks=true]{hyperref}
     
    107110\input{together}
    108111
     112\input{results}
     113
    109114\input{future}
    110115
  • doc/proposals/concurrency/version

    r136ccd7 r4ee36bf0  
    1 0.10.212
     10.11.47
  • src/InitTweak/GenInit.cc

    r136ccd7 r4ee36bf0  
    214214                }
    215215                // a type is managed if it appears in the map of known managed types, or if it contains any polymorphism (is a type variable or generic type containing a type variable)
    216                 return managedTypes.find( SymTab::Mangler::mangle( type ) ) != managedTypes.end() || GenPoly::isPolyType( type );
     216                return managedTypes.find( SymTab::Mangler::mangleConcrete( type ) ) != managedTypes.end() || GenPoly::isPolyType( type );
    217217        }
    218218
     
    232232                        Type * type = InitTweak::getPointerBase( params.front()->get_type() );
    233233                        assert( type );
    234                         managedTypes.insert( SymTab::Mangler::mangle( type ) );
     234                        managedTypes.insert( SymTab::Mangler::mangleConcrete( type ) );
    235235                }
    236236        }
     
    242242                        if ( ObjectDecl * field = dynamic_cast< ObjectDecl * >( member ) ) {
    243243                                if ( isManaged( field ) ) {
     244                                        // generic parameters should not play a role in determining whether a generic type is constructed - construct all generic types, so that
     245                                        // polymorphic constructors make generic types managed types
    244246                                        StructInstType inst( Type::Qualifiers(), aggregateDecl );
    245                                         managedTypes.insert( SymTab::Mangler::mangle( &inst ) );
     247                                        managedTypes.insert( SymTab::Mangler::mangleConcrete( &inst ) );
    246248                                        break;
    247249                                }
  • src/ResolvExpr/AlternativeFinder.cc

    r136ccd7 r4ee36bf0  
    312312                Cost convCost = conversionCost( actualType, formalType, indexer, env );
    313313                PRINT(
    314                         std::cerr << std::endl << "cost is" << convCost << std::endl;
     314                        std::cerr << std::endl << "cost is " << convCost << std::endl;
    315315                )
    316316                if ( convCost == Cost::infinity ) {
     
    318318                }
    319319                convCost.incPoly( polyCost( formalType, env, indexer ) + polyCost( actualType, env, indexer ) );
     320                PRINT(
     321                        std::cerr << "cost with polycost is " << convCost << std::endl;
     322                )
    320323                return convCost;
    321324        }
     
    370373                                if ( function->get_isVarArgs() ) {
    371374                                        convCost.incUnsafe();
     375                                        PRINT( std::cerr << "end of formals with varargs function: inc unsafe: " << convCost << std::endl; ; )
    372376                                        // convert reference-typed expressions to value-typed expressions
    373377                                        referenceToRvalueConversion( *actualExpr );
     
    378382                        }
    379383                        Type * formalType = (*formal)->get_type();
    380                         PRINT(
    381                                 std::cerr << std::endl << "converting ";
    382                                 actualType->print( std::cerr, 8 );
    383                                 std::cerr << std::endl << " to ";
    384                                 formalType->print( std::cerr, 8 );
    385                                 std::cerr << std::endl << "environment is: ";
    386                                 alt.env.print( std::cerr, 8 );
    387                                 std::cerr << std::endl;
    388                         )
    389384                        convCost += computeExpressionConversionCost( *actualExpr, formalType, indexer, alt.env );
    390385                        ++formal; // can't be in for-loop update because of the continue
  • src/ResolvExpr/CurrentObject.cc

    r136ccd7 r4ee36bf0  
    260260
    261261                AggregateIterator( const std::string & kind, const std::string & name, Type * inst, const MemberList & members ) : kind( kind ), name( name ), inst( inst ), members( members ), curMember( members.begin() ), sub( makeGenericSubstitution( inst ) ) {
     262                        PRINT( std::cerr << "Creating " << kind << "(" << name << ")"; )
    262263                        init();
    263264                }
  • src/SymTab/Mangler.cc

    r136ccd7 r4ee36bf0  
    3232namespace SymTab {
    3333        std::string Mangler::mangleType( Type * ty ) {
    34                 Mangler mangler( false, true );
     34                Mangler mangler( false, true, true );
    3535                maybeAccept( ty, mangler );
    3636                return mangler.get_mangleName();
    3737        }
    3838
    39         Mangler::Mangler( bool mangleOverridable, bool typeMode )
    40                 : nextVarNum( 0 ), isTopLevel( true ), mangleOverridable( mangleOverridable ), typeMode( typeMode ) {}
     39        std::string Mangler::mangleConcrete( Type* ty ) {
     40                Mangler mangler( false, false, false );
     41                maybeAccept( ty, mangler );
     42                return mangler.get_mangleName();
     43        }
     44
     45        Mangler::Mangler( bool mangleOverridable, bool typeMode, bool mangleGenericParams )
     46                : nextVarNum( 0 ), isTopLevel( true ), mangleOverridable( mangleOverridable ), typeMode( typeMode ), mangleGenericParams( mangleGenericParams ) {}
    4147
    4248        Mangler::Mangler( const Mangler &rhs ) : mangleName() {
     
    166172
    167173                mangleName << ( refType->get_name().length() + prefix.length() ) << prefix << refType->get_name();
    168         }
    169 
    170         void Mangler::mangleGenericRef( ReferenceToType * refType, std::string prefix ) {
    171                 printQualifiers( refType );
    172 
    173                 std::ostringstream oldName( mangleName.str() );
    174                 mangleName.clear();
    175 
    176                 mangleName << prefix << refType->get_name();
    177 
    178                 std::list< Expression* >& params = refType->get_parameters();
    179                 if ( ! params.empty() ) {
    180                         mangleName << "_";
    181                         for ( std::list< Expression* >::const_iterator param = params.begin(); param != params.end(); ++param ) {
    182                                 TypeExpr *paramType = dynamic_cast< TypeExpr* >( *param );
    183                                 assertf(paramType, "Aggregate parameters should be type expressions: %s", toString(*param).c_str());
    184                                 maybeAccept( paramType->get_type(), *this );
     174
     175                if ( mangleGenericParams ) {
     176                        std::list< Expression* >& params = refType->get_parameters();
     177                        if ( ! params.empty() ) {
     178                                mangleName << "_";
     179                                for ( std::list< Expression* >::const_iterator param = params.begin(); param != params.end(); ++param ) {
     180                                        TypeExpr *paramType = dynamic_cast< TypeExpr* >( *param );
     181                                        assertf(paramType, "Aggregate parameters should be type expressions: %s", toString(*param).c_str());
     182                                        maybeAccept( paramType->get_type(), *this );
     183                                }
     184                                mangleName << "_";
    185185                        }
    186                         mangleName << "_";
    187186                }
    188 
    189                 oldName << mangleName.str().length() << mangleName.str();
    190                 mangleName.str( oldName.str() );
    191187        }
    192188
    193189        void Mangler::visit( StructInstType * aggregateUseType ) {
    194                 if ( typeMode ) mangleGenericRef( aggregateUseType, "s" );
    195                 else mangleRef( aggregateUseType, "s" );
     190                mangleRef( aggregateUseType, "s" );
    196191        }
    197192
    198193        void Mangler::visit( UnionInstType * aggregateUseType ) {
    199                 if ( typeMode ) mangleGenericRef( aggregateUseType, "u" );
    200                 else mangleRef( aggregateUseType, "u" );
     194                mangleRef( aggregateUseType, "u" );
    201195        }
    202196
     
    285279                                varNums[ (*i)->name ] = std::pair< int, int >( nextVarNum++, (int)(*i)->get_kind() );
    286280                                for ( std::list< DeclarationWithType* >::iterator assert = (*i)->assertions.begin(); assert != (*i)->assertions.end(); ++assert ) {
    287                                         Mangler sub_mangler( mangleOverridable, typeMode );
     281                                        Mangler sub_mangler( mangleOverridable, typeMode, mangleGenericParams );
    288282                                        sub_mangler.nextVarNum = nextVarNum;
    289283                                        sub_mangler.isTopLevel = false;
  • src/SymTab/Mangler.h

    r136ccd7 r4ee36bf0  
    3030                /// Mangle syntax tree object; primary interface to clients
    3131                template< typename SynTreeClass >
    32             static std::string mangle( SynTreeClass *decl, bool mangleOverridable = true, bool typeMode = false );
     32            static std::string mangle( SynTreeClass *decl, bool mangleOverridable = true, bool typeMode = false, bool mangleGenericParams = true );
    3333                /// Mangle a type name; secondary interface
    3434                static std::string mangleType( Type* ty );
     35                /// Mangle ignoring generic type parameters
     36                static std::string mangleConcrete( Type* ty );
     37
    3538
    3639                virtual void visit( ObjectDecl *declaration );
     
    6265                bool mangleOverridable;         ///< Specially mangle overridable built-in methods
    6366                bool typeMode;                  ///< Produce a unique mangled name for a type
     67                bool mangleGenericParams;       ///< Include generic parameters in name mangling if true
    6468
    65                 Mangler( bool mangleOverridable, bool typeMode );
     69                Mangler( bool mangleOverridable, bool typeMode, bool mangleGenericParams );
    6670                Mangler( const Mangler & );
    6771
    6872                void mangleDecl( DeclarationWithType *declaration );
    6973                void mangleRef( ReferenceToType *refType, std::string prefix );
    70                 void mangleGenericRef( ReferenceToType *refType, std::string prefix );
    7174
    7275                void printQualifiers( Type *type );
     
    7477
    7578        template< typename SynTreeClass >
    76         std::string Mangler::mangle( SynTreeClass *decl, bool mangleOverridable, bool typeMode ) {
    77                 Mangler mangler( mangleOverridable, typeMode );
     79        std::string Mangler::mangle( SynTreeClass *decl, bool mangleOverridable, bool typeMode, bool mangleGenericParams ) {
     80                Mangler mangler( mangleOverridable, typeMode, mangleGenericParams );
    7881                maybeAccept( decl, mangler );
    7982                return mangler.get_mangleName();
  • src/SymTab/Validate.cc

    r136ccd7 r4ee36bf0  
    268268                HoistStruct::hoistStruct( translationUnit ); // must happen after EliminateTypedef, so that aggregate typedefs occur in the correct order
    269269                ReturnTypeFixer::fix( translationUnit ); // must happen before autogen
     270                acceptAll( translationUnit, epc ); // must happen before VerifyCtorDtorAssign, because void return objects should not exist; before LinkReferenceToTypes because it is an indexer and needs correct types for mangling
    270271                acceptAll( translationUnit, lrt ); // must happen before autogen, because sized flag needs to propagate to generated functions
    271272                acceptAll( translationUnit, genericParams );  // check as early as possible - can't happen before LinkReferenceToTypes
    272                 acceptAll( translationUnit, epc ); // must happen before VerifyCtorDtorAssign, because void return objects should not exist
    273273                VerifyCtorDtorAssign::verify( translationUnit );  // must happen before autogen, because autogen examines existing ctor/dtors
    274274                ReturnChecker::checkFunctionReturns( translationUnit );
  • src/benchmark/Makefile.am

    r136ccd7 r4ee36bf0  
    2727
    2828noinst_PROGRAMS =
     29
     30all : ctxswitch$(EXEEXT) mutex$(EXEEXT) signal$(EXEEXT) waitfor$(EXEEXT) creation$(EXEEXT)
    2931
    3032bench$(EXEEXT) :
     
    6365ctxswitch-pthread$(EXEEXT):
    6466        @BACKEND_CC@ ctxswitch/pthreads.c  -DBENCH_N=50000000  -I. -lrt -pthread                    ${AM_CFLAGS} ${CFLAGS} ${ccflags}
    65 
    66 ## =========================================================================================================
    67 creation$(EXEEXT) :\
    68         creation-pthread.run            \
    69         creation-cfa_coroutine.run      \
    70         creation-cfa_thread.run         \
    71         creation-upp_coroutine.run      \
    72         creation-upp_thread.run
    73 
    74 creation-cfa_coroutine$(EXEEXT):
    75         ${CC}        creation/cfa_cor.c   -DBENCH_N=10000000   -I. -nodebug -lrt -quiet @CFA_FLAGS@ ${AM_CFLAGS} ${CFLAGS} ${ccflags}
    76 
    77 creation-cfa_thread$(EXEEXT):
    78         ${CC}        creation/cfa_thrd.c  -DBENCH_N=10000000   -I. -nodebug -lrt -quiet @CFA_FLAGS@ ${AM_CFLAGS} ${CFLAGS} ${ccflags}
    79 
    80 creation-upp_coroutine$(EXEEXT):
    81         u++          creation/upp_cor.cc  -DBENCH_N=50000000   -I. -nodebug -lrt -quiet             ${AM_CFLAGS} ${CFLAGS} ${ccflags}
    82 
    83 creation-upp_thread$(EXEEXT):
    84         u++          creation/upp_thrd.cc -DBENCH_N=50000000   -I. -nodebug -lrt -quiet             ${AM_CFLAGS} ${CFLAGS} ${ccflags}
    85 
    86 creation-pthread$(EXEEXT):
    87         @BACKEND_CC@ creation/pthreads.c  -DBENCH_N=250000     -I. -lrt -pthread                    ${AM_CFLAGS} ${CFLAGS} ${ccflags}
    8867
    8968## =========================================================================================================
     
    153132
    154133## =========================================================================================================
     134creation$(EXEEXT) :\
     135        creation-pthread.run            \
     136        creation-cfa_coroutine.run      \
     137        creation-cfa_thread.run         \
     138        creation-upp_coroutine.run      \
     139        creation-upp_thread.run
     140
     141creation-cfa_coroutine$(EXEEXT):
     142        ${CC}        creation/cfa_cor.c   -DBENCH_N=10000000   -I. -nodebug -lrt -quiet @CFA_FLAGS@ ${AM_CFLAGS} ${CFLAGS} ${ccflags}
     143
     144creation-cfa_thread$(EXEEXT):
     145        ${CC}        creation/cfa_thrd.c  -DBENCH_N=10000000   -I. -nodebug -lrt -quiet @CFA_FLAGS@ ${AM_CFLAGS} ${CFLAGS} ${ccflags}
     146
     147creation-upp_coroutine$(EXEEXT):
     148        u++          creation/upp_cor.cc  -DBENCH_N=50000000   -I. -nodebug -lrt -quiet             ${AM_CFLAGS} ${CFLAGS} ${ccflags}
     149
     150creation-upp_thread$(EXEEXT):
     151        u++          creation/upp_thrd.cc -DBENCH_N=50000000   -I. -nodebug -lrt -quiet             ${AM_CFLAGS} ${CFLAGS} ${ccflags}
     152
     153creation-pthread$(EXEEXT):
     154        @BACKEND_CC@ creation/pthreads.c  -DBENCH_N=250000     -I. -lrt -pthread                    ${AM_CFLAGS} ${CFLAGS} ${ccflags}
     155
     156## =========================================================================================================
    155157
    156158%.run : %$(EXEEXT) ${REPEAT}
  • src/benchmark/Makefile.in

    r136ccd7 r4ee36bf0  
    444444.NOTPARALLEL:
    445445
     446all : ctxswitch$(EXEEXT) mutex$(EXEEXT) signal$(EXEEXT) waitfor$(EXEEXT) creation$(EXEEXT)
     447
    446448bench$(EXEEXT) :
    447449        @for ccflags in "-debug" "-nodebug"; do \
     
    479481        @BACKEND_CC@ ctxswitch/pthreads.c  -DBENCH_N=50000000  -I. -lrt -pthread                    ${AM_CFLAGS} ${CFLAGS} ${ccflags}
    480482
    481 creation$(EXEEXT) :\
    482         creation-pthread.run            \
    483         creation-cfa_coroutine.run      \
    484         creation-cfa_thread.run         \
    485         creation-upp_coroutine.run      \
    486         creation-upp_thread.run
    487 
    488 creation-cfa_coroutine$(EXEEXT):
    489         ${CC}        creation/cfa_cor.c   -DBENCH_N=10000000   -I. -nodebug -lrt -quiet @CFA_FLAGS@ ${AM_CFLAGS} ${CFLAGS} ${ccflags}
    490 
    491 creation-cfa_thread$(EXEEXT):
    492         ${CC}        creation/cfa_thrd.c  -DBENCH_N=10000000   -I. -nodebug -lrt -quiet @CFA_FLAGS@ ${AM_CFLAGS} ${CFLAGS} ${ccflags}
    493 
    494 creation-upp_coroutine$(EXEEXT):
    495         u++          creation/upp_cor.cc  -DBENCH_N=50000000   -I. -nodebug -lrt -quiet             ${AM_CFLAGS} ${CFLAGS} ${ccflags}
    496 
    497 creation-upp_thread$(EXEEXT):
    498         u++          creation/upp_thrd.cc -DBENCH_N=50000000   -I. -nodebug -lrt -quiet             ${AM_CFLAGS} ${CFLAGS} ${ccflags}
    499 
    500 creation-pthread$(EXEEXT):
    501         @BACKEND_CC@ creation/pthreads.c  -DBENCH_N=250000     -I. -lrt -pthread                    ${AM_CFLAGS} ${CFLAGS} ${ccflags}
    502 
    503483mutex$(EXEEXT) :\
    504484        mutex-function.run      \
     
    562542waitfor-cfa4$(EXEEXT):
    563543        ${CC}        schedext/cfa4.c     -DBENCH_N=500000      -I. -nodebug -lrt -quiet @CFA_FLAGS@ ${AM_CFLAGS} ${CFLAGS} ${ccflags}
     544
     545creation$(EXEEXT) :\
     546        creation-pthread.run            \
     547        creation-cfa_coroutine.run      \
     548        creation-cfa_thread.run         \
     549        creation-upp_coroutine.run      \
     550        creation-upp_thread.run
     551
     552creation-cfa_coroutine$(EXEEXT):
     553        ${CC}        creation/cfa_cor.c   -DBENCH_N=10000000   -I. -nodebug -lrt -quiet @CFA_FLAGS@ ${AM_CFLAGS} ${CFLAGS} ${ccflags}
     554
     555creation-cfa_thread$(EXEEXT):
     556        ${CC}        creation/cfa_thrd.c  -DBENCH_N=10000000   -I. -nodebug -lrt -quiet @CFA_FLAGS@ ${AM_CFLAGS} ${CFLAGS} ${ccflags}
     557
     558creation-upp_coroutine$(EXEEXT):
     559        u++          creation/upp_cor.cc  -DBENCH_N=50000000   -I. -nodebug -lrt -quiet             ${AM_CFLAGS} ${CFLAGS} ${ccflags}
     560
     561creation-upp_thread$(EXEEXT):
     562        u++          creation/upp_thrd.cc -DBENCH_N=50000000   -I. -nodebug -lrt -quiet             ${AM_CFLAGS} ${CFLAGS} ${ccflags}
     563
     564creation-pthread$(EXEEXT):
     565        @BACKEND_CC@ creation/pthreads.c  -DBENCH_N=250000     -I. -lrt -pthread                    ${AM_CFLAGS} ${CFLAGS} ${ccflags}
    564566
    565567%.run : %$(EXEEXT) ${REPEAT}
  • src/benchmark/csv-data.c

    r136ccd7 r4ee36bf0  
    111111        StartTime = Time();
    112112        for( int i = 0;; i++ ) {
    113                 signal(&cond1a);
    114                 if( i > N ) break;
    115                 wait(&cond1b);
     113                signal(cond1a);
     114                if( i > N ) break;
     115                wait(cond1b);
    116116        }
    117117        EndTime = Time();
     
    122122void side1B( mon_t & mutex a ) {
    123123        for( int i = 0;; i++ ) {
    124                 signal(&cond1b);
    125                 if( i > N ) break;
    126                 wait(&cond1a);
     124                signal(cond1b);
     125                if( i > N ) break;
     126                wait(cond1a);
    127127        }
    128128}
     
    159159        StartTime = Time();
    160160        for( int i = 0;; i++ ) {
    161                 signal(&cond2a);
    162                 if( i > N ) break;
    163                 wait(&cond2b);
     161                signal(cond2a);
     162                if( i > N ) break;
     163                wait(cond2b);
    164164        }
    165165        EndTime = Time();
     
    170170void side2B( mon_t & mutex a, mon_t & mutex b ) {
    171171        for( int i = 0;; i++ ) {
    172                 signal(&cond2b);
    173                 if( i > N ) break;
    174                 wait(&cond2a);
     172                signal(cond2b);
     173                if( i > N ) break;
     174                wait(cond2a);
    175175        }
    176176}
  • src/benchmark/schedint/cfa1.c

    r136ccd7 r4ee36bf0  
    1515
    1616void __attribute__((noinline)) call( M & mutex a1 ) {
    17         signal(&c);
     17        signal(c);
    1818}
    1919
     
    2222        BENCH(
    2323                for (size_t i = 0; i < n; i++) {
    24                         wait(&c);
     24                        wait(c);
    2525                },
    2626                result
  • src/benchmark/schedint/cfa2.c

    r136ccd7 r4ee36bf0  
    1515
    1616void __attribute__((noinline)) call( M & mutex a1, M & mutex a2 ) {
    17         signal(&c);
     17        signal(c);
    1818}
    1919
     
    2222        BENCH(
    2323                for (size_t i = 0; i < n; i++) {
    24                         wait(&c);
     24                        wait(c);
    2525                },
    2626                result
  • src/benchmark/schedint/cfa4.c

    r136ccd7 r4ee36bf0  
    1515
    1616void __attribute__((noinline)) call( M & mutex a1, M & mutex a2, M & mutex a3, M & mutex a4 ) {
    17         signal(&c);
     17        signal(c);
    1818}
    1919
     
    2222        BENCH(
    2323                for (size_t i = 0; i < n; i++) {
    24                         wait(&c);
     24                        wait(c);
    2525                },
    2626                result
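
The csv-data and schedint benchmarks above all make the same mechanical change: condition operations are now called with the condition object itself rather than its address. A minimal sketch of the new call form, assuming a monitor type M and a global condition c declared as in the benchmarks (the benchmark harness and includes are omitted):

        monitor M {} m;
        condition c;

        void __attribute__((noinline)) call( M & mutex a ) {
                signal( c );            // previously: signal( &c )
        }

        void do_wait( M & mutex a ) {
                wait( c );              // previously: wait( &c )
        }

Passing the condition by reference keeps call sites free of & noise while the routines still mutate the underlying object; the rest of the changeset applies the same convention to semaphores, thread queues and condition stacks.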
  • src/libcfa/concurrency/coroutine.c

    r136ccd7 r4ee36bf0  
    156156                this->limit = (char *)libCeiling( (unsigned long)this->storage, 16 ); // minimum alignment
    157157        } // if
    158         assertf( this->size >= MinStackSize, "Stack size %d provides less than minimum of %d bytes for a stack.", this->size, MinStackSize );
     158        assertf( this->size >= MinStackSize, "Stack size %zd provides less than minimum of %d bytes for a stack.", this->size, MinStackSize );
    159159
    160160        this->base = (char *)this->limit + this->size;
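
The only change in coroutine.c is the conversion specifier, and it follows from the invoke.h hunk below where the coStack_t size field becomes a size_t: a plain %d is the wrong width on LP64 targets. A hedged, stand-alone reminder of the matching printf-style conversions:

        #include <stdio.h>
        #include <stddef.h>

        int main( void ) {
                size_t size = 16384;
                // %zu is the size_t conversion; %zd is its signed (ssize_t) counterpart
                printf( "Stack size %zu provides at least %d bytes for a stack.\n", size, 4096 );
                return 0;
        }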
  • src/libcfa/concurrency/invoke.h

    r136ccd7 r4ee36bf0  
    2525#define _INVOKE_H_
    2626
    27       #define unlikely(x)    __builtin_expect(!!(x), 0)
    28       #define thread_local _Thread_local
    29 
    30       typedef void (*fptr_t)();
    31 
    32       struct spinlock {
    33             volatile int lock;
    34             #ifdef __CFA_DEBUG__
    35                   const char * prev_name;
    36                   void* prev_thrd;
    37             #endif
    38       };
    39 
    40       struct __thread_queue_t {
    41             struct thread_desc * head;
    42             struct thread_desc ** tail;
    43       };
    44 
    45       struct __condition_stack_t {
    46             struct __condition_criterion_t * top;
    47       };
    48 
    49       #ifdef __CFORALL__
    50       extern "Cforall" {
    51             void ?{}( struct __thread_queue_t & );
    52             void append( struct __thread_queue_t *, struct thread_desc * );
    53             struct thread_desc * pop_head( struct __thread_queue_t * );
    54             struct thread_desc * remove( struct __thread_queue_t *, struct thread_desc ** );
    55 
    56             void ?{}( struct __condition_stack_t & );
    57             void push( struct __condition_stack_t *, struct __condition_criterion_t * );
    58             struct __condition_criterion_t * pop( struct __condition_stack_t * );
    59 
    60             void ?{}(spinlock & this);
    61             void ^?{}(spinlock & this);
    62       }
    63       #endif
    64 
    65       struct coStack_t {
    66             unsigned int size;                        // size of stack
    67             void *storage;                            // pointer to stack
    68             void *limit;                              // stack grows towards stack limit
    69             void *base;                               // base of stack
    70             void *context;                            // address of cfa_context_t
    71             void *top;                                // address of top of storage
    72             bool userStack;                           // whether or not the user allocated the stack
    73       };
    74 
    75       enum coroutine_state { Halted, Start, Inactive, Active, Primed };
    76 
    77       struct coroutine_desc {
    78             struct coStack_t stack;                   // stack information of the coroutine
    79             const char *name;                         // textual name for coroutine/task, initialized by uC++ generated code
    80             int errno_;                               // copy of global UNIX variable errno
    81             enum coroutine_state state;               // current execution status for coroutine
    82             struct coroutine_desc * starter;          // first coroutine to resume this one
    83             struct coroutine_desc * last;             // last coroutine to resume this one
    84       };
    85 
    86       struct __waitfor_mask_t {
    87             short * accepted;                         // the index of the accepted function, -1 if none
    88             struct __acceptable_t * clauses;          // list of acceptable functions, null if any
    89             short size;                               // number of acceptable functions
    90       };
    91 
    92       struct monitor_desc {
    93             struct spinlock lock;                     // spinlock to protect internal data
    94             struct thread_desc * owner;               // current owner of the monitor
    95             struct __thread_queue_t entry_queue;      // queue of threads that are blocked waiting for the monitor
    96             struct __condition_stack_t signal_stack;  // stack of conditions to run next once we exit the monitor
    97             unsigned int recursion;                   // monitor routines can be called recursively, we need to keep track of that
    98             struct __waitfor_mask_t mask;             // mask used to know if some thread is waiting for something while holding the monitor
    99             struct __condition_node_t * dtor_node;    // node used to signal the dtor in a waitfor dtor
    100       };
    101 
    102       struct __monitor_group_t {
    103             struct monitor_desc ** list;              // currently held monitors
    104             short                  size;              // number of currently held monitors
    105             fptr_t                 func;              // last function that acquired monitors
    106       };
    107 
    108       struct thread_desc {
    109             // Core threading fields
    110             struct coroutine_desc  self_cor;          // coroutine body used to store context
    111             struct monitor_desc    self_mon;          // monitor body used for mutual exclusion
    112             struct monitor_desc *  self_mon_p;        // pointer to monitor with sufficient lifetime for current monitors
    113             struct __monitor_group_t monitors;        // monitors currently held by this thread
    114 
    115             // Link lists fields
    116             struct thread_desc * next;                // intrusive link field for threads
    117 
    118 
     27        #define unlikely(x)    __builtin_expect(!!(x), 0)
     28        #define thread_local _Thread_local
     29
     30        typedef void (*fptr_t)();
     31        typedef int_fast16_t __lock_size_t;
     32
     33        struct spinlock {
     34                volatile int lock;
     35                #ifdef __CFA_DEBUG__
     36                        const char * prev_name;
     37                        void* prev_thrd;
     38                #endif
     39        };
     40
     41        struct __thread_queue_t {
     42                struct thread_desc * head;
     43                struct thread_desc ** tail;
     44        };
     45
     46        struct __condition_stack_t {
     47                struct __condition_criterion_t * top;
     48        };
     49
     50        #ifdef __CFORALL__
     51        extern "Cforall" {
     52                void ?{}( struct __thread_queue_t & );
     53                void append( struct __thread_queue_t &, struct thread_desc * );
     54                struct thread_desc * pop_head( struct __thread_queue_t & );
     55                struct thread_desc * remove( struct __thread_queue_t &, struct thread_desc ** );
     56
     57                void ?{}( struct __condition_stack_t & );
     58                void push( struct __condition_stack_t &, struct __condition_criterion_t * );
     59                struct __condition_criterion_t * pop( struct __condition_stack_t & );
     60
     61                void  ?{}(spinlock & this);
     62                void ^?{}(spinlock & this);
     63        }
     64        #endif
     65
     66        struct coStack_t {
     67                // size of stack
     68                size_t size;
     69
     70                // pointer to stack
     71                void *storage;
     72
     73                // stack grows towards stack limit
     74                void *limit;
     75
     76                // base of stack
     77                void *base;
     78
     79                // address of cfa_context_t
     80                void *context;
     81
     82                // address of top of storage
     83                void *top;
     84
     85                // whether or not the user allocated the stack
     86                bool userStack;
     87        };
     88
     89        enum coroutine_state { Halted, Start, Inactive, Active, Primed };
     90
     91        struct coroutine_desc {
     92                // stack information of the coroutine
     93                struct coStack_t stack;
     94
     95                // textual name for coroutine/task, initialized by uC++ generated code
     96                const char *name;
     97
     98                // copy of global UNIX variable errno
     99                int errno_;
     100
     101                // current execution status for coroutine
     102                enum coroutine_state state;
     103
     104                // first coroutine to resume this one
     105                struct coroutine_desc * starter;
     106
     107                // last coroutine to resume this one
     108                struct coroutine_desc * last;
     109        };
     110
     111        struct __waitfor_mask_t {
     112                // the index of the accepted function, -1 if none
     113                short * accepted;
     114
     115                // list of acceptable functions, null if any
     116                struct __acceptable_t * clauses;
     117
     118                // number of acceptable functions
     119                __lock_size_t size;
     120        };
     121
     122        struct monitor_desc {
     123                // spinlock to protect internal data
     124                struct spinlock lock;
     125
     126                // current owner of the monitor
     127                struct thread_desc * owner;
     128
     129                // queue of threads that are blocked waiting for the monitor
     130                struct __thread_queue_t entry_queue;
     131
     132                // stack of conditions to run next once we exit the monitor
     133                struct __condition_stack_t signal_stack;
     134
     135                // monitor routines can be called recursively, we need to keep track of that
     136                unsigned int recursion;
     137
     138                // mask used to know if some thread is waiting for something while holding the monitor
     139                struct __waitfor_mask_t mask;
     140
     141                // node used to signal the dtor in a waitfor dtor
     142                struct __condition_node_t * dtor_node;
     143        };
     144
     145        struct __monitor_group_t {
     146                // currently held monitors
     147                struct monitor_desc ** list;
     148
     149                // number of currently held monitors
     150                __lock_size_t size;
     151
     152                // last function that acquired monitors
     153                fptr_t func;
     154        };
     155
     156        struct thread_desc {
     157                // Core threading fields
     158                // coroutine body used to store context
     159                struct coroutine_desc  self_cor;
     160
     161                // monitor body used for mutual exclusion
     162                struct monitor_desc    self_mon;
     163
     164                // pointer to monitor with sufficient lifetime for current monitors
     165                struct monitor_desc *  self_mon_p;
     166
     167                // monitors currently held by this thread
     168                struct __monitor_group_t monitors;
     169
     170                // Link lists fields
     171                // intrusive link field for threads
     172                struct thread_desc * next;
    119173     };
    120174
    121175     #ifdef __CFORALL__
    122176     extern "Cforall" {
    123             static inline monitor_desc * ?[?]( const __monitor_group_t & this, ptrdiff_t index ) {
    124                   return this.list[index];
    125             }
    126 
    127             static inline bool ?==?( const __monitor_group_t & lhs, const __monitor_group_t & rhs ) {
    128                   if( (lhs.list != 0) != (rhs.list != 0) ) return false;
    129                   if( lhs.size != rhs.size ) return false;
    130                   if( lhs.func != rhs.func ) return false;
    131 
    132                   // Check that all the monitors match
    133                   for( int i = 0; i < lhs.size; i++ ) {
    134                         // If not a match, check next function
    135                         if( lhs[i] != rhs[i] ) return false;
    136                   }
    137 
    138                   return true;
    139             }
    140       }
    141       #endif
     177                static inline monitor_desc * ?[?]( const __monitor_group_t & this, ptrdiff_t index ) {
     178                        return this.list[index];
     179                }
     180
     181                static inline bool ?==?( const __monitor_group_t & lhs, const __monitor_group_t & rhs ) {
     182                        if( (lhs.list != 0) != (rhs.list != 0) ) return false;
     183                        if( lhs.size != rhs.size ) return false;
     184                        if( lhs.func != rhs.func ) return false;
     185
     186                        // Check that all the monitors match
     187                        for( int i = 0; i < lhs.size; i++ ) {
     188                                // If not a match, check next function
     189                                if( lhs[i] != rhs[i] ) return false;
     190                        }
     191
     192                        return true;
     193                }
     194        }
     195        #endif
    142196
    143197#endif //_INVOKE_H_
     
    146200#define _INVOKE_PRIVATE_H_
    147201
    148       struct machine_context_t {
    149             void *SP;
    150             void *FP;
    151             void *PC;
    152       };
    153 
    154       // assembler routines that performs the context switch
    155       extern void CtxInvokeStub( void );
    156       void CtxSwitch( void * from, void * to ) asm ("CtxSwitch");
    157 
    158       #if   defined( __x86_64__ )
    159       #define CtxGet( ctx ) __asm__ ( \
    160                   "movq %%rsp,%0\n"   \
    161                   "movq %%rbp,%1\n"   \
    162             : "=rm" (ctx.SP), "=rm" (ctx.FP) )
    163       #elif defined( __i386__ )
    164       #define CtxGet( ctx ) __asm__ ( \
    165                   "movl %%esp,%0\n"   \
    166                   "movl %%ebp,%1\n"   \
    167             : "=rm" (ctx.SP), "=rm" (ctx.FP) )
    168       #endif
     202        struct machine_context_t {
     203                void *SP;
     204                void *FP;
     205                void *PC;
     206        };
     207
     208        // assembler routines that performs the context switch
     209        extern void CtxInvokeStub( void );
     210        void CtxSwitch( void * from, void * to ) asm ("CtxSwitch");
     211
     212        #if   defined( __x86_64__ )
     213        #define CtxGet( ctx ) __asm__ ( \
     214                        "movq %%rsp,%0\n"   \
     215                        "movq %%rbp,%1\n"   \
     216                : "=rm" (ctx.SP), "=rm" (ctx.FP) )
     217        #elif defined( __i386__ )
     218        #define CtxGet( ctx ) __asm__ ( \
     219                        "movl %%esp,%0\n"   \
     220                        "movl %%ebp,%1\n"   \
     221                : "=rm" (ctx.SP), "=rm" (ctx.FP) )
     222        #endif
    169223
    170224#endif //_INVOKE_PRIVATE_H_
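
Besides moving the comments above their fields, invoke.h introduces __lock_size_t (an int_fast16_t) and uses it for the monitor and criterion counts that were previously a mix of short, unsigned short and int, and it switches helper parameters from T ** to the array form T * name []. A hedged sketch of the resulting signature style; the helper below is illustrative only, not part of the tree:

        #include <stdint.h>

        typedef int_fast16_t __lock_size_t;              // as defined in invoke.h

        // illustrative helper: a count that used to be `short` is now __lock_size_t,
        // and the array of monitors is written as an array parameter
        static void visit_monitors( struct monitor_desc * list [], __lock_size_t count ) {
                for ( __lock_size_t i = 0; i < count; i += 1 ) {
                        // ... inspect list[i] ...
                }
        }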
  • src/libcfa/concurrency/kernel

    r136ccd7 r4ee36bf0  
    2626//-----------------------------------------------------------------------------
    2727// Locks
    28 void lock      ( spinlock * DEBUG_CTX_PARAM2 );       // Lock the spinlock, spin if already acquired
    29 void lock_yield( spinlock * DEBUG_CTX_PARAM2 );       // Lock the spinlock, yield repeatedly if already acquired
    30 bool try_lock  ( spinlock * DEBUG_CTX_PARAM2 );       // Lock the spinlock, return false if already acquired
    31 void unlock    ( spinlock * );                        // Unlock the spinlock
     28// Lock the spinlock, spin if already acquired
     29void lock      ( spinlock * DEBUG_CTX_PARAM2 );
     30
     31// Lock the spinlock, yield repeatedly if already acquired
     32void lock_yield( spinlock * DEBUG_CTX_PARAM2 );
     33
     34// Lock the spinlock, return false if already acquired
     35bool try_lock  ( spinlock * DEBUG_CTX_PARAM2 );
     36
     37// Unlock the spinlock
     38void unlock    ( spinlock * );
    3239
    3340struct semaphore {
     
    3946void  ?{}(semaphore & this, int count = 1);
    4047void ^?{}(semaphore & this);
    41 void P(semaphore * this);
    42 void V(semaphore * this);
     48void   P (semaphore & this);
     49void   V (semaphore & this);
    4350
    4451
     
    4653// Cluster
    4754struct cluster {
    48         spinlock ready_queue_lock;                      // Ready queue locks
    49         __thread_queue_t ready_queue;                   // Ready queue for threads
    50         unsigned long long int preemption;              // Preemption rate on this cluster
     55        // Ready queue locks
     56        spinlock ready_queue_lock;
     57
     58        // Ready queue for threads
     59        __thread_queue_t ready_queue;
     60
     61        // Preemption rate on this cluster
     62        unsigned long long int preemption;
    5163};
    5264
    53 void ?{}(cluster & this);
     65void ?{} (cluster & this);
    5466void ^?{}(cluster & this);
    5567
     
    7991struct processor {
    8092        // Main state
    81         struct processorCtx_t * runner;                 // Coroutine ctx that keeps the state of the processor
    82         cluster * cltr;                                 // Cluster from which to get threads
    83         pthread_t kernel_thread;                        // Handle to pthreads
      93        // Coroutine ctx that keeps the state of the processor
     94        struct processorCtx_t * runner;
     95
     96        // Cluster from which to get threads
     97        cluster * cltr;
     98
     99        // Handle to pthreads
     100        pthread_t kernel_thread;
    84101
    85102        // Termination
    86         volatile bool do_terminate;                     // Set to true to notify the processor should terminate
    87         semaphore terminated;                           // Termination synchronisation
     103        // Set to true to notify the processor should terminate
     104        volatile bool do_terminate;
     105
     106        // Termination synchronisation
     107        semaphore terminated;
    88108
    89109        // RunThread data
    90         struct FinishAction finish;                     // Action to do after a thread is run
      110        // Action to do after a thread is run
     111        struct FinishAction finish;
    91112
    92113        // Preemption data
    93         struct alarm_node_t * preemption_alarm;         // Node which is added in the discrete event simulation
    94         bool pending_preemption;                        // If true, a preemption was triggered in an unsafe region, the processor must preempt as soon as possible
      114        // Node which is added in the discrete event simulation
     115        struct alarm_node_t * preemption_alarm;
     116
     117        // If true, a preemption was triggered in an unsafe region, the processor must preempt as soon as possible
     118        bool pending_preemption;
    95119
    96120#ifdef __CFA_DEBUG__
    97         char * last_enable;                             // Last function to enable preemption on this processor
     121        // Last function to enable preemption on this processor
     122        char * last_enable;
    98123#endif
    99124};
    100125
    101 void ?{}(processor & this);
    102 void ?{}(processor & this, cluster * cltr);
     126void  ?{}(processor & this);
     127void  ?{}(processor & this, cluster * cltr);
    103128void ^?{}(processor & this);
    104129
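
The semaphore interface moves in the same direction: P and V now take semaphore & rather than semaphore *, matching the updated call sites in kernel.c below (P( this.terminated ), V( this->terminated )). A hedged usage sketch, assuming only the declarations in this header:

        semaphore terminated = { 0 };        // ?{}( semaphore &, int count = 1 ) with an explicit count of 0

        void wait_for_core( void ) {
                P( terminated );             // previously: P( &terminated )
        }

        void signal_core_done( void ) {
                V( terminated );             // previously: V( &terminated )
        }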
  • src/libcfa/concurrency/kernel.c

    r136ccd7 r4ee36bf0  
    158158                LIB_DEBUG_PRINT_SAFE("Kernel : core %p signaling termination\n", &this);
    159159                this.do_terminate = true;
    160                 P( &this.terminated );
     160                P( this.terminated );
    161161                pthread_join( this.kernel_thread, NULL );
    162162        }
     
    216216        }
    217217
    218         V( &this->terminated );
     218        V( this->terminated );
    219219
    220220        LIB_DEBUG_PRINT_SAFE("Kernel : core %p terminated\n", this);
     
    335335
    336336        lock(   &this_processor->cltr->ready_queue_lock DEBUG_CTX2 );
    337         append( &this_processor->cltr->ready_queue, thrd );
     337        append( this_processor->cltr->ready_queue, thrd );
    338338        unlock( &this_processor->cltr->ready_queue_lock );
    339339
     
    344344        verify( disable_preempt_count > 0 );
    345345        lock( &this->ready_queue_lock DEBUG_CTX2 );
    346         thread_desc * head = pop_head( &this->ready_queue );
     346        thread_desc * head = pop_head( this->ready_queue );
    347347        unlock( &this->ready_queue_lock );
    348348        verify( disable_preempt_count > 0 );
     
    398398}
    399399
    400 void BlockInternal(spinlock ** locks, unsigned short count) {
     400void BlockInternal(spinlock * locks [], unsigned short count) {
    401401        disable_interrupts();
    402402        this_processor->finish.action_code = Release_Multi;
     
    411411}
    412412
    413 void BlockInternal(spinlock ** locks, unsigned short lock_count, thread_desc ** thrds, unsigned short thrd_count) {
     413void BlockInternal(spinlock * locks [], unsigned short lock_count, thread_desc * thrds [], unsigned short thrd_count) {
    414414        disable_interrupts();
    415415        this_processor->finish.action_code = Release_Multi_Schedule;
     
    618618void ^?{}(semaphore & this) {}
    619619
    620 void P(semaphore * this) {
    621         lock( &this->lock DEBUG_CTX2 );
    622         this->count -= 1;
    623         if ( this->count < 0 ) {
     620void P(semaphore & this) {
     621        lock( &this.lock DEBUG_CTX2 );
     622        this.count -= 1;
     623        if ( this.count < 0 ) {
    624624                // queue current task
    625                 append( &this->waiting, (thread_desc *)this_thread );
     625                append( this.waiting, (thread_desc *)this_thread );
    626626
    627627                // atomically release spin lock and block
    628                 BlockInternal( &this->lock );
     628                BlockInternal( &this.lock );
    629629        }
    630630        else {
    631             unlock( &this->lock );
    632         }
    633 }
    634 
    635 void V(semaphore * this) {
     631            unlock( &this.lock );
     632        }
     633}
     634
     635void V(semaphore & this) {
    636636        thread_desc * thrd = NULL;
    637         lock( &this->lock DEBUG_CTX2 );
    638         this->count += 1;
    639         if ( this->count <= 0 ) {
     637        lock( &this.lock DEBUG_CTX2 );
     638        this.count += 1;
     639        if ( this.count <= 0 ) {
    640640                // remove task at head of waiting list
    641                 thrd = pop_head( &this->waiting );
    642         }
    643 
    644         unlock( &this->lock );
     641                thrd = pop_head( this.waiting );
     642        }
     643
     644        unlock( &this.lock );
    645645
    646646        // make new owner
     
    655655}
    656656
    657 void append( __thread_queue_t * this, thread_desc * t ) {
    658         verify(this->tail != NULL);
    659         *this->tail = t;
    660         this->tail = &t->next;
    661 }
    662 
    663 thread_desc * pop_head( __thread_queue_t * this ) {
    664         thread_desc * head = this->head;
     657void append( __thread_queue_t & this, thread_desc * t ) {
     658        verify(this.tail != NULL);
     659        *this.tail = t;
     660        this.tail = &t->next;
     661}
     662
     663thread_desc * pop_head( __thread_queue_t & this ) {
     664        thread_desc * head = this.head;
    665665        if( head ) {
    666                 this->head = head->next;
     666                this.head = head->next;
    667667                if( !head->next ) {
    668                         this->tail = &this->head;
     668                        this.tail = &this.head;
    669669                }
    670670                head->next = NULL;
     
    673673}
    674674
    675 thread_desc * remove( __thread_queue_t * this, thread_desc ** it ) {
     675thread_desc * remove( __thread_queue_t & this, thread_desc ** it ) {
    676676        thread_desc * thrd = *it;
    677677        verify( thrd );
     
    679679        (*it) = thrd->next;
    680680
    681         if( this->tail == &thrd->next ) {
    682                 this->tail = it;
     681        if( this.tail == &thrd->next ) {
     682                this.tail = it;
    683683        }
    684684
    685685        thrd->next = NULL;
    686686
    687         verify( (this->head == NULL) == (&this->head == this->tail) );
    688         verify( *this->tail == NULL );
     687        verify( (this.head == NULL) == (&this.head == this.tail) );
     688        verify( *this.tail == NULL );
    689689        return thrd;
    690690}
     
    694694}
    695695
    696 void push( __condition_stack_t * this, __condition_criterion_t * t ) {
     696void push( __condition_stack_t & this, __condition_criterion_t * t ) {
    697697        verify( !t->next );
    698         t->next = this->top;
    699         this->top = t;
    700 }
    701 
    702 __condition_criterion_t * pop( __condition_stack_t * this ) {
    703         __condition_criterion_t * top = this->top;
     698        t->next = this.top;
     699        this.top = t;
     700}
     701
     702__condition_criterion_t * pop( __condition_stack_t & this ) {
     703        __condition_criterion_t * top = this.top;
    704704        if( top ) {
    705                 this->top = top->next;
     705                this.top = top->next;
    706706                top->next = NULL;
    707707        }
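
The intrusive queue and condition-stack helpers get the same pointer-to-reference treatment: append, pop_head, remove, push and pop now take their container by reference. A hedged sketch of the queue used the new way, assuming the declarations from invoke.h:

        __thread_queue_t ready;              // default ctor leaves head == NULL and tail == &head

        void enqueue( thread_desc * thrd ) {
                append( ready, thrd );       // previously: append( &ready, thrd )
        }

        thread_desc * dequeue( void ) {
                return pop_head( ready );    // previously: pop_head( &ready ); returns NULL when empty
        }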
  • src/libcfa/concurrency/kernel_private.h

    r136ccd7 r4ee36bf0  
    4848void BlockInternal(thread_desc * thrd);
    4949void BlockInternal(spinlock * lock, thread_desc * thrd);
    50 void BlockInternal(spinlock ** locks, unsigned short count);
    51 void BlockInternal(spinlock ** locks, unsigned short count, thread_desc ** thrds, unsigned short thrd_count);
     50void BlockInternal(spinlock * locks [], unsigned short count);
     51void BlockInternal(spinlock * locks [], unsigned short count, thread_desc * thrds [], unsigned short thrd_count);
    5252void LeaveThread(spinlock * lock, thread_desc * thrd);
    5353
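
The multi-lock BlockInternal overloads are redeclared with array-parameter syntax (spinlock * locks []) instead of spinlock **. Both spellings denote the same type after array-to-pointer decay, so call sites are unchanged; the [] form only documents that the caller passes an array. A minimal sketch of such a call site (m1 and m2 are illustrative monitor_desc pointers):

        static void block_on_two( monitor_desc * m1, monitor_desc * m2 ) {
                spinlock * locks[2] = { &m1->lock, &m2->lock };
                BlockInternal( locks, 2 );   // identical call for either declaration spelling
        }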
  • src/libcfa/concurrency/monitor

    r136ccd7 r4ee36bf0  
    3939}
    4040
    41 // static inline int ?<?(monitor_desc* lhs, monitor_desc* rhs) {
    42 //      return ((intptr_t)lhs) < ((intptr_t)rhs);
    43 // }
    44 
    4541struct monitor_guard_t {
    4642        monitor_desc ** m;
    47         int count;
     43        __lock_size_t  count;
    4844        monitor_desc ** prev_mntrs;
    49         unsigned short  prev_count;
     45        __lock_size_t   prev_count;
    5046        fptr_t          prev_func;
    5147};
    5248
    53 void ?{}( monitor_guard_t & this, monitor_desc ** m, int count, void (*func)() );
     49void ?{}( monitor_guard_t & this, monitor_desc ** m, __lock_size_t count, void (*func)() );
    5450void ^?{}( monitor_guard_t & this );
    5551
     
    5753        monitor_desc * m;
    5854        monitor_desc ** prev_mntrs;
    59         unsigned short  prev_count;
     55        __lock_size_t   prev_count;
    6056        fptr_t          prev_func;
    6157};
     
    7470
    7571struct __condition_criterion_t {
    76         bool ready;                                             //Whether or not the criterion is met (True if met)
    77         monitor_desc * target;                          //The monitor this criterion concerns
    78         struct __condition_node_t * owner;              //The parent node to which this criterion belongs
    79         __condition_criterion_t * next;         //Intrusive linked list Next field
     72        // Whether or not the criterion is met (True if met)
     73        bool ready;
     74
     75        // The monitor this criterion concerns
     76        monitor_desc * target;
     77
     78        // The parent node to which this criterion belongs
     79        struct __condition_node_t * owner;
     80
     81        // Intrusive linked list Next field
     82        __condition_criterion_t * next;
    8083};
    8184
    8285struct __condition_node_t {
    83         thread_desc * waiting_thread;                   //Thread that needs to be woken when all criteria are met
    84         __condition_criterion_t * criteria;     //Array of criteria (Criterions are contiguous in memory)
    85         unsigned short count;                           //Number of criterions in the criteria
    86         __condition_node_t * next;                      //Intrusive linked list Next field
    87         uintptr_t user_info;                            //Custom user info accessible before signalling
     86        // Thread that needs to be woken when all criteria are met
     87        thread_desc * waiting_thread;
     88
     89        // Array of criteria (Criterions are contiguous in memory)
     90        __condition_criterion_t * criteria;
     91
     92        // Number of criterions in the criteria
     93        __lock_size_t count;
     94
     95        // Intrusive linked list Next field
     96        __condition_node_t * next;
     97
     98        // Custom user info accessible before signalling
     99        uintptr_t user_info;
    88100};
    89101
     
    93105};
    94106
    95 void ?{}(__condition_node_t & this, thread_desc * waiting_thread, unsigned short count, uintptr_t user_info );
     107void ?{}(__condition_node_t & this, thread_desc * waiting_thread, __lock_size_t count, uintptr_t user_info );
    96108void ?{}(__condition_criterion_t & this );
    97109void ?{}(__condition_criterion_t & this, monitor_desc * target, __condition_node_t * owner );
    98110
    99111void ?{}( __condition_blocked_queue_t & );
    100 void append( __condition_blocked_queue_t *, __condition_node_t * );
    101 __condition_node_t * pop_head( __condition_blocked_queue_t * );
     112void append( __condition_blocked_queue_t &, __condition_node_t * );
     113__condition_node_t * pop_head( __condition_blocked_queue_t & );
    102114
    103115struct condition {
    104         __condition_blocked_queue_t blocked;    //Link list which contains the blocked threads as-well as the information needed to unblock them
    105         monitor_desc ** monitors;                       //Array of monitor pointers (Monitors are NOT contiguous in memory)
    106         unsigned short monitor_count;                   //Number of monitors in the array
     116        // Link list which contains the blocked threads as-well as the information needed to unblock them
     117        __condition_blocked_queue_t blocked;
     118
     119        // Array of monitor pointers (Monitors are NOT contiguous in memory)
     120        monitor_desc ** monitors;
     121
     122        // Number of monitors in the array
     123        __lock_size_t monitor_count;
    107124};
    108125
     
    116133}
    117134
    118 void wait( condition * this, uintptr_t user_info = 0 );
    119 bool signal( condition * this );
    120 bool signal_block( condition * this );
    121 static inline bool is_empty( condition * this ) { return !this->blocked.head; }
    122 uintptr_t front( condition * this );
     135              void wait        ( condition & this, uintptr_t user_info = 0 );
     136              bool signal      ( condition & this );
     137              bool signal_block( condition & this );
     138static inline bool is_empty    ( condition & this ) { return !this.blocked.head; }
     139         uintptr_t front       ( condition & this );
    123140
    124141//-----------------------------------------------------------------------------
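
With the header changes above, every condition operation now takes condition &: wait, signal, signal_block, is_empty and front. A hedged sketch of a monitor written against the new forms (the buffer monitor and its fields are illustrative, not part of the changeset):

        monitor buffer_t {
                condition not_empty;
                int  item;
                bool full;
        };

        void put( buffer_t & mutex this, int v ) {
                this.item = v;
                this.full = true;
                signal( this.not_empty );                        // previously: signal( &this.not_empty )
        }

        int take( buffer_t & mutex this ) {
                if ( ! this.full ) wait( this.not_empty, 42 );   // optional user_info, readable via front( this.not_empty )
                this.full = false;
                return this.item;
        }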
  • src/libcfa/concurrency/monitor.c

    r136ccd7 r4ee36bf0  
    1717
    1818#include <stdlib>
     19#include <inttypes.h>
    1920
    2021#include "libhdr.h"
     
    2627// Forward declarations
    2728static inline void set_owner ( monitor_desc * this, thread_desc * owner );
    28 static inline void set_owner ( monitor_desc ** storage, short count, thread_desc * owner );
    29 static inline void set_mask  ( monitor_desc ** storage, short count, const __waitfor_mask_t & mask );
     29static inline void set_owner ( monitor_desc * storage [], __lock_size_t count, thread_desc * owner );
     30static inline void set_mask  ( monitor_desc * storage [], __lock_size_t count, const __waitfor_mask_t & mask );
    3031static inline void reset_mask( monitor_desc * this );
    3132
     
    3334static inline bool is_accepted( monitor_desc * this, const __monitor_group_t & monitors );
    3435
    35 static inline void lock_all( spinlock ** locks, unsigned short count );
    36 static inline void lock_all( monitor_desc ** source, spinlock ** /*out*/ locks, unsigned short count );
    37 static inline void unlock_all( spinlock ** locks, unsigned short count );
    38 static inline void unlock_all( monitor_desc ** locks, unsigned short count );
    39 
    40 static inline void save   ( monitor_desc ** ctx, short count, spinlock ** locks, unsigned int * /*out*/ recursions, __waitfor_mask_t * /*out*/ masks );
    41 static inline void restore( monitor_desc ** ctx, short count, spinlock ** locks, unsigned int * /*in */ recursions, __waitfor_mask_t * /*in */ masks );
    42 
    43 static inline void init     ( int count, monitor_desc ** monitors, __condition_node_t * waiter, __condition_criterion_t * criteria );
    44 static inline void init_push( int count, monitor_desc ** monitors, __condition_node_t * waiter, __condition_criterion_t * criteria );
     36static inline void lock_all  ( spinlock * locks [], __lock_size_t count );
     37static inline void lock_all  ( monitor_desc * source [], spinlock * /*out*/ locks [], __lock_size_t count );
     38static inline void unlock_all( spinlock * locks [], __lock_size_t count );
     39static inline void unlock_all( monitor_desc * locks [], __lock_size_t count );
     40
     41static inline void save   ( monitor_desc * ctx [], __lock_size_t count, spinlock * locks [], unsigned int /*out*/ recursions [], __waitfor_mask_t /*out*/ masks [] );
     42static inline void restore( monitor_desc * ctx [], __lock_size_t count, spinlock * locks [], unsigned int /*in */ recursions [], __waitfor_mask_t /*in */ masks [] );
     43
     44static inline void init     ( __lock_size_t count, monitor_desc * monitors [], __condition_node_t & waiter, __condition_criterion_t criteria [] );
     45static inline void init_push( __lock_size_t count, monitor_desc * monitors [], __condition_node_t & waiter, __condition_criterion_t criteria [] );
    4546
    4647static inline thread_desc *        check_condition   ( __condition_criterion_t * );
    47 static inline void                 brand_condition   ( condition * );
    48 static inline [thread_desc *, int] search_entry_queue( const __waitfor_mask_t &, monitor_desc ** monitors, int count );
     48static inline void                 brand_condition   ( condition & );
     49static inline [thread_desc *, int] search_entry_queue( const __waitfor_mask_t &, monitor_desc * monitors [], __lock_size_t count );
    4950
    5051forall(dtype T | sized( T ))
    51 static inline short insert_unique( T ** array, short & size, T * val );
    52 static inline short count_max    ( const __waitfor_mask_t & mask );
    53 static inline short aggregate    ( monitor_desc ** storage, const __waitfor_mask_t & mask );
     52static inline __lock_size_t insert_unique( T * array [], __lock_size_t & size, T * val );
     53static inline __lock_size_t count_max    ( const __waitfor_mask_t & mask );
     54static inline __lock_size_t aggregate    ( monitor_desc * storage [], const __waitfor_mask_t & mask );
    5455
    5556//-----------------------------------------------------------------------------
     
    5859        __condition_node_t waiter = { thrd, count, user_info };   /* Create the node specific to this wait operation                                     */ \
    5960        __condition_criterion_t criteria[count];                  /* Create the criteria this wait operation needs to wake up                            */ \
    60         init( count, monitors, &waiter, criteria );               /* Link everything together                                                            */ \
     61        init( count, monitors, waiter, criteria );                /* Link everything together                                                            */ \
    6162
    6263#define wait_ctx_primed(thrd, user_info)                        /* Create the necessary information to use the signaller stack                         */ \
    6364        __condition_node_t waiter = { thrd, count, user_info };   /* Create the node specific to this wait operation                                     */ \
    6465        __condition_criterion_t criteria[count];                  /* Create the criteria this wait operation needs to wake up                            */ \
    65         init_push( count, monitors, &waiter, criteria );          /* Link everything together and push it to the AS-Stack                                */ \
     66        init_push( count, monitors, waiter, criteria );           /* Link everything together and push it to the AS-Stack                                */ \
    6667
    6768#define monitor_ctx( mons, cnt )                                /* Define that create the necessary struct for internal/external scheduling operations */ \
    6869        monitor_desc ** monitors = mons;                          /* Save the targeted monitors                                                          */ \
    69         unsigned short count = cnt;                               /* Save the count to a local variable                                                  */ \
     70        __lock_size_t count = cnt;                                /* Save the count to a local variable                                                  */ \
    7071        unsigned int recursions[ count ];                         /* Save the current recursion levels to restore them later                             */ \
    71         __waitfor_mask_t masks[ count ];                          /* Save the current waitfor masks to restore them later                                */ \
     72        __waitfor_mask_t masks [ count ];                         /* Save the current waitfor masks to restore them later                                */ \
    7273        spinlock *   locks     [ count ];                         /* We need to pass-in an array of locks to BlockInternal                               */ \
    7374
     
    114115
    115116                        // Some one else has the monitor, wait in line for it
    116                         append( &this->entry_queue, thrd );
     117                        append( this->entry_queue, thrd );
    117118                        BlockInternal( &this->lock );
    118119
     
    153154                }
    154155
    155                 int count = 1;
     156                __lock_size_t count = 1;
    156157                monitor_desc ** monitors = &this;
    157158                __monitor_group_t group = { &this, 1, func };
     
    160161
    161162                        // Wake the thread that is waiting for this
    162                         __condition_criterion_t * urgent = pop( &this->signal_stack );
     163                        __condition_criterion_t * urgent = pop( this->signal_stack );
    163164                        verify( urgent );
    164165
     
    182183
    183184                        // Some one else has the monitor, wait in line for it
    184                         append( &this->entry_queue, thrd );
     185                        append( this->entry_queue, thrd );
    185186                        BlockInternal( &this->lock );
    186187
     
    272273// relies on the monitor array being sorted
    273274static inline void enter( __monitor_group_t monitors ) {
    274         for(int i = 0; i < monitors.size; i++) {
     275        for( __lock_size_t i = 0; i < monitors.size; i++) {
    275276                __enter_monitor_desc( monitors.list[i], monitors );
    276277        }
     
    279280// Leave multiple monitor
    280281// relies on the monitor array being sorted
    281 static inline void leave(monitor_desc ** monitors, int count) {
    282         for(int i = count - 1; i >= 0; i--) {
     282static inline void leave(monitor_desc * monitors [], __lock_size_t count) {
     283        for( __lock_size_t i = count - 1; i >= 0; i--) {
    283284                __leave_monitor_desc( monitors[i] );
    284285        }
     
    287288// Ctor for monitor guard
    288289// Sorts monitors before entering
    289 void ?{}( monitor_guard_t & this, monitor_desc ** m, int count, fptr_t func ) {
     290void ?{}( monitor_guard_t & this, monitor_desc * m [], __lock_size_t count, fptr_t func ) {
    290291        // Store current array
    291292        this.m = m;
     
    296297
    297298        // Save previous thread context
    298         this.prev_mntrs = this_thread->monitors.list;
    299         this.prev_count = this_thread->monitors.size;
    300         this.prev_func  = this_thread->monitors.func;
     299        this.[prev_mntrs, prev_count, prev_func] = this_thread->monitors.[list, size, func];
    301300
    302301        // Update thread context (needed for conditions)
    303         this_thread->monitors.list = m;
    304         this_thread->monitors.size = count;
    305         this_thread->monitors.func = func;
     302        this_thread->monitors.[list, size, func] = [m, count, func];
    306303
    307304        // LIB_DEBUG_PRINT_SAFE("MGUARD : enter %d\n", count);
     
    325322
    326323        // Restore thread context
    327         this_thread->monitors.list = this.prev_mntrs;
    328         this_thread->monitors.size = this.prev_count;
    329         this_thread->monitors.func = this.prev_func;
    330 }
    331 
     324        this_thread->monitors.[list, size, func] = this.[prev_mntrs, prev_count, prev_func];
     325}
    332326
    333327// Ctor for monitor guard
    334328// Sorts monitors before entering
    335 void ?{}( monitor_dtor_guard_t & this, monitor_desc ** m, fptr_t func ) {
     329void ?{}( monitor_dtor_guard_t & this, monitor_desc * m [], fptr_t func ) {
    336330        // Store current array
    337331        this.m = *m;
    338332
    339333        // Save previous thread context
    340         this.prev_mntrs = this_thread->monitors.list;
    341         this.prev_count = this_thread->monitors.size;
    342         this.prev_func  = this_thread->monitors.func;
     334        this.[prev_mntrs, prev_count, prev_func] = this_thread->monitors.[list, size, func];
    343335
    344336        // Update thread context (needed for conditions)
    345         this_thread->monitors.list = m;
    346         this_thread->monitors.size = 1;
    347         this_thread->monitors.func = func;
     337        this_thread->monitors.[list, size, func] = [m, 1, func];
    348338
    349339        __enter_monitor_dtor( this.m, func );
    350340}
    351 
    352341
    353342// Dtor for monitor guard
     
    357346
    358347        // Restore thread context
    359         this_thread->monitors.list = this.prev_mntrs;
    360         this_thread->monitors.size = this.prev_count;
    361         this_thread->monitors.func = this.prev_func;
     348        this_thread->monitors.[list, size, func] = this.[prev_mntrs, prev_count, prev_func];
    362349}
    363350
    364351//-----------------------------------------------------------------------------
    365352// Internal scheduling types
    366 void ?{}(__condition_node_t & this, thread_desc * waiting_thread, unsigned short count, uintptr_t user_info ) {
     353void ?{}(__condition_node_t & this, thread_desc * waiting_thread, __lock_size_t count, uintptr_t user_info ) {
    367354        this.waiting_thread = waiting_thread;
    368355        this.count = count;
     
    378365}
    379366
    380 void ?{}(__condition_criterion_t & this, monitor_desc * target, __condition_node_t * owner ) {
     367void ?{}(__condition_criterion_t & this, monitor_desc * target, __condition_node_t & owner ) {
    381368        this.ready  = false;
    382369        this.target = target;
    383         this.owner  = owner;
     370        this.owner  = &owner;
    384371        this.next   = NULL;
    385372}
     
    387374//-----------------------------------------------------------------------------
    388375// Internal scheduling
    389 void wait( condition * this, uintptr_t user_info = 0 ) {
     376void wait( condition & this, uintptr_t user_info = 0 ) {
    390377        brand_condition( this );
    391378
    392379        // Check that everything is as expected
    393         assertf( this->monitors != NULL, "Waiting with no monitors (%p)", this->monitors );
    394         verifyf( this->monitor_count != 0, "Waiting with 0 monitors (%i)", this->monitor_count );
    395         verifyf( this->monitor_count < 32u, "Excessive monitor count (%i)", this->monitor_count );
     380        assertf( this.monitors != NULL, "Waiting with no monitors (%p)", this.monitors );
     381        verifyf( this.monitor_count != 0, "Waiting with 0 monitors (%"PRIiFAST16")", this.monitor_count );
     382        verifyf( this.monitor_count < 32u, "Excessive monitor count (%"PRIiFAST16")", this.monitor_count );
    396383
    397384        // Create storage for monitor context
    398         monitor_ctx( this->monitors, this->monitor_count );
     385        monitor_ctx( this.monitors, this.monitor_count );
    399386
    400387        // Create the node specific to this wait operation
     
    403390        // Append the current wait operation to the ones already queued on the condition
    404391        // We don't need locks for that since conditions must always be waited on inside monitor mutual exclusion
    405         append( &this->blocked, &waiter );
     392        append( this.blocked, &waiter );
    406393
    407394        // Lock all monitors (aggregates the locks as well)
     
    409396
    410397        // Find the next thread(s) to run
    411         short thread_count = 0;
     398        __lock_size_t thread_count = 0;
    412399        thread_desc * threads[ count ];
    413400        __builtin_memset( threads, 0, sizeof( threads ) );
     
    417404
    418405        // Remove any duplicate threads
    419         for( int i = 0; i < count; i++) {
     406        for( __lock_size_t i = 0; i < count; i++) {
    420407                thread_desc * new_owner = next_thread( monitors[i] );
    421408                insert_unique( threads, thread_count, new_owner );
     
    429416}
    430417
    431 bool signal( condition * this ) {
     418bool signal( condition & this ) {
    432419        if( is_empty( this ) ) { return false; }
    433420
    434421        //Check that everything is as expected
    435         verify( this->monitors );
    436         verify( this->monitor_count != 0 );
     422        verify( this.monitors );
     423        verify( this.monitor_count != 0 );
    437424
    438425        //Some more checking in debug
    439426        LIB_DEBUG_DO(
    440427                thread_desc * this_thrd = this_thread;
    441                 if ( this->monitor_count != this_thrd->monitors.size ) {
    442                         abortf( "Signal on condition %p made with different number of monitor(s), expected %i got %i", this, this->monitor_count, this_thrd->monitors.size );
    443                 }
    444 
    445                 for(int i = 0; i < this->monitor_count; i++) {
    446                         if ( this->monitors[i] != this_thrd->monitors.list[i] ) {
    447                                 abortf( "Signal on condition %p made with different monitor, expected %p got %i", this, this->monitors[i], this_thrd->monitors.list[i] );
     428                if ( this.monitor_count != this_thrd->monitors.size ) {
     429                        abortf( "Signal on condition %p made with different number of monitor(s), expected %i got %i", &this, this.monitor_count, this_thrd->monitors.size );
     430                }
     431
     432                for(int i = 0; i < this.monitor_count; i++) {
     433                        if ( this.monitors[i] != this_thrd->monitors.list[i] ) {
     434                                abortf( "Signal on condition %p made with different monitor, expected %p got %i", &this, this.monitors[i], this_thrd->monitors.list[i] );
    448435                        }
    449436                }
    450437        );
    451438
    452         unsigned short count = this->monitor_count;
     439        __lock_size_t count = this.monitor_count;
    453440
    454441        // Lock all monitors
    455         lock_all( this->monitors, NULL, count );
     442        lock_all( this.monitors, NULL, count );
    456443
    457444        //Pop the head of the waiting queue
    458         __condition_node_t * node = pop_head( &this->blocked );
     445        __condition_node_t * node = pop_head( this.blocked );
    459446
    460447        //Add the thread to the proper AS stack
     
    462449                __condition_criterion_t * crit = &node->criteria[i];
    463450                assert( !crit->ready );
    464                 push( &crit->target->signal_stack, crit );
     451                push( crit->target->signal_stack, crit );
    465452        }
    466453
    467454        //Release
    468         unlock_all( this->monitors, count );
     455        unlock_all( this.monitors, count );
    469456
    470457        return true;
    471458}
    472459
    473 bool signal_block( condition * this ) {
    474         if( !this->blocked.head ) { return false; }
     460bool signal_block( condition & this ) {
     461        if( !this.blocked.head ) { return false; }
    475462
    476463        //Check that everything is as expected
    477         verifyf( this->monitors != NULL, "Waiting with no monitors (%p)", this->monitors );
    478         verifyf( this->monitor_count != 0, "Waiting with 0 monitors (%i)", this->monitor_count );
     464        verifyf( this.monitors != NULL, "Waiting with no monitors (%p)", this.monitors );
     465        verifyf( this.monitor_count != 0, "Waiting with 0 monitors (%"PRIiFAST16")", this.monitor_count );
    479466
    480467        // Create storage for monitor context
    481         monitor_ctx( this->monitors, this->monitor_count );
     468        monitor_ctx( this.monitors, this.monitor_count );
    482469
    483470        // Lock all monitors (aggregates the locks them as well)
     
    491478
    492479        //Find the thread to run
    493         thread_desc * signallee = pop_head( &this->blocked )->waiting_thread;
     480        thread_desc * signallee = pop_head( this.blocked )->waiting_thread;
    494481        set_owner( monitors, count, signallee );
    495482
    496         LIB_DEBUG_PRINT_BUFFER_DECL( "Kernel : signal_block condition %p (s: %p)\n", this, signallee );
     483        LIB_DEBUG_PRINT_BUFFER_DECL( "Kernel : signal_block condition %p (s: %p)\n", &this, signallee );
    497484
    498485        //Everything is ready to go to sleep
     
    512499
    513500// Access the user_info of the thread waiting at the front of the queue
    514 uintptr_t front( condition * this ) {
     501uintptr_t front( condition & this ) {
    515502        verifyf( !is_empty(this),
    516503                "Attempt to access user data on an empty condition.\n"
    517504                "Possible cause is not checking if the condition is empty before reading stored data."
    518505        );
    519         return this->blocked.head->user_info;
     506        return this.blocked.head->user_info;
    520507}
    521508
     
    537524        // This statment doesn't have a contiguous list of monitors...
    538525        // Create one!
    539         short max = count_max( mask );
     526        __lock_size_t max = count_max( mask );
    540527        monitor_desc * mon_storage[max];
    541528        __builtin_memset( mon_storage, 0, sizeof( mon_storage ) );
    542         short actual_count = aggregate( mon_storage, mask );
    543 
    544         LIB_DEBUG_PRINT_BUFFER_DECL( "Kernel : waitfor %d (s: %d, m: %d)\n", actual_count, mask.size, (short)max);
     529        __lock_size_t actual_count = aggregate( mon_storage, mask );
     530
     531        LIB_DEBUG_PRINT_BUFFER_DECL( "Kernel : waitfor %d (s: %d, m: %d)\n", actual_count, mask.size, (__lock_size_t)max);
    545532
    546533        if(actual_count == 0) return;
     
    569556
    570557                                __condition_criterion_t * dtor_crit = mon2dtor->dtor_node->criteria;
    571                                 push( &mon2dtor->signal_stack, dtor_crit );
     558                                push( mon2dtor->signal_stack, dtor_crit );
    572559
    573560                                unlock_all( locks, count );
     
    629616        set_mask( monitors, count, mask );
    630617
    631         for(int i = 0; i < count; i++) {
     618        for( __lock_size_t i = 0; i < count; i++) {
    632619                verify( monitors[i]->owner == this_thread );
    633620        }
     
    661648}
    662649
    663 static inline void set_owner( monitor_desc ** monitors, short count, thread_desc * owner ) {
     650static inline void set_owner( monitor_desc * monitors [], __lock_size_t count, thread_desc * owner ) {
    664651        monitors[0]->owner     = owner;
    665652        monitors[0]->recursion = 1;
    666         for( int i = 1; i < count; i++ ) {
     653        for( __lock_size_t i = 1; i < count; i++ ) {
    667654                monitors[i]->owner     = owner;
    668655                monitors[i]->recursion = 0;
     
    670657}
    671658
    672 static inline void set_mask( monitor_desc ** storage, short count, const __waitfor_mask_t & mask ) {
    673         for(int i = 0; i < count; i++) {
     659static inline void set_mask( monitor_desc * storage [], __lock_size_t count, const __waitfor_mask_t & mask ) {
     660        for( __lock_size_t i = 0; i < count; i++) {
    674661                storage[i]->mask = mask;
    675662        }
     
    685672        //Check the signaller stack
    686673        LIB_DEBUG_PRINT_SAFE("Kernel :  mon %p AS-stack top %p\n", this, this->signal_stack.top);
    687         __condition_criterion_t * urgent = pop( &this->signal_stack );
     674        __condition_criterion_t * urgent = pop( this->signal_stack );
    688675        if( urgent ) {
    689676                //The signaller stack is not empty,
     
    697684        // No signaller thread
    698685        // Get the next thread in the entry_queue
    699         thread_desc * new_owner = pop_head( &this->entry_queue );
     686        thread_desc * new_owner = pop_head( this->entry_queue );
    700687        set_owner( this, new_owner );
    701688
     
    705692static inline bool is_accepted( monitor_desc * this, const __monitor_group_t & group ) {
    706693        __acceptable_t * it = this->mask.clauses; // Optim
    707         int count = this->mask.size;
     694        __lock_size_t count = this->mask.size;
    708695
    709696        // Check if there are any acceptable functions
     
    714701
    715702        // For all acceptable functions check if this is the current function.
    716         for( short i = 0; i < count; i++, it++ ) {
     703        for( __lock_size_t i = 0; i < count; i++, it++ ) {
    717704                if( *it == group ) {
    718705                        *this->mask.accepted = i;
     
    725712}
    726713
    727 static inline void init( int count, monitor_desc ** monitors, __condition_node_t * waiter, __condition_criterion_t * criteria ) {
    728         for(int i = 0; i < count; i++) {
     714static inline void init( __lock_size_t count, monitor_desc * monitors [], __condition_node_t & waiter, __condition_criterion_t criteria [] ) {
     715        for( __lock_size_t i = 0; i < count; i++) {
    729716                (criteria[i]){ monitors[i], waiter };
    730717        }
    731718
    732         waiter->criteria = criteria;
    733 }
    734 
    735 static inline void init_push( int count, monitor_desc ** monitors, __condition_node_t * waiter, __condition_criterion_t * criteria ) {
    736         for(int i = 0; i < count; i++) {
     719        waiter.criteria = criteria;
     720}
     721
     722static inline void init_push( __lock_size_t count, monitor_desc * monitors [], __condition_node_t & waiter, __condition_criterion_t criteria [] ) {
     723        for( __lock_size_t i = 0; i < count; i++) {
    737724                (criteria[i]){ monitors[i], waiter };
    738725                LIB_DEBUG_PRINT_SAFE( "Kernel :  target %p = %p\n", criteria[i].target, &criteria[i] );
    739                 push( &criteria[i].target->signal_stack, &criteria[i] );
    740         }
    741 
    742         waiter->criteria = criteria;
    743 }
    744 
    745 static inline void lock_all( spinlock ** locks, unsigned short count ) {
    746         for( int i = 0; i < count; i++ ) {
     726                push( criteria[i].target->signal_stack, &criteria[i] );
     727        }
     728
     729        waiter.criteria = criteria;
     730}
     731
     732static inline void lock_all( spinlock * locks [], __lock_size_t count ) {
     733        for( __lock_size_t i = 0; i < count; i++ ) {
    747734                lock_yield( locks[i] DEBUG_CTX2 );
    748735        }
    749736}
    750737
    751 static inline void lock_all( monitor_desc ** source, spinlock ** /*out*/ locks, unsigned short count ) {
    752         for( int i = 0; i < count; i++ ) {
     738static inline void lock_all( monitor_desc * source [], spinlock * /*out*/ locks [], __lock_size_t count ) {
     739        for( __lock_size_t i = 0; i < count; i++ ) {
    753740                spinlock * l = &source[i]->lock;
    754741                lock_yield( l DEBUG_CTX2 );
     
    757744}
    758745
    759 static inline void unlock_all( spinlock ** locks, unsigned short count ) {
    760         for( int i = 0; i < count; i++ ) {
     746static inline void unlock_all( spinlock * locks [], __lock_size_t count ) {
     747        for( __lock_size_t i = 0; i < count; i++ ) {
    761748                unlock( locks[i] );
    762749        }
    763750}
    764751
    765 static inline void unlock_all( monitor_desc ** locks, unsigned short count ) {
    766         for( int i = 0; i < count; i++ ) {
     752static inline void unlock_all( monitor_desc * locks [], __lock_size_t count ) {
     753        for( __lock_size_t i = 0; i < count; i++ ) {
    767754                unlock( &locks[i]->lock );
    768755        }
    769756}
    770757
    771 static inline void save( monitor_desc ** ctx, short count, __attribute((unused)) spinlock ** locks, unsigned int * /*out*/ recursions, __waitfor_mask_t * /*out*/ masks ) {
    772         for( int i = 0; i < count; i++ ) {
     758static inline void save(
     759        monitor_desc * ctx [],
     760        __lock_size_t count,
     761        __attribute((unused)) spinlock * locks [],
     762        unsigned int /*out*/ recursions [],
     763        __waitfor_mask_t /*out*/ masks []
     764) {
     765        for( __lock_size_t i = 0; i < count; i++ ) {
    773766                recursions[i] = ctx[i]->recursion;
    774767                masks[i]      = ctx[i]->mask;
     
    776769}
    777770
    778 static inline void restore( monitor_desc ** ctx, short count, spinlock ** locks, unsigned int * /*out*/ recursions, __waitfor_mask_t * /*out*/ masks ) {
     771static inline void restore(
     772        monitor_desc * ctx [],
     773        __lock_size_t count,
     774        spinlock * locks [],
     775        unsigned int /*out*/ recursions [],
     776        __waitfor_mask_t /*out*/ masks []
     777) {
    779778        lock_all( locks, count );
    780         for( int i = 0; i < count; i++ ) {
     779        for( __lock_size_t i = 0; i < count; i++ ) {
    781780                ctx[i]->recursion = recursions[i];
    782781                ctx[i]->mask      = masks[i];
     
    811810}
    812811
    813 static inline void brand_condition( condition * this ) {
     812static inline void brand_condition( condition & this ) {
    814813        thread_desc * thrd = this_thread;
    815         if( !this->monitors ) {
     814        if( !this.monitors ) {
    816815                // LIB_DEBUG_PRINT_SAFE("Branding\n");
    817816                assertf( thrd->monitors.list != NULL, "No current monitor to brand condition %p", thrd->monitors.list );
    818                 this->monitor_count = thrd->monitors.size;
    819 
    820                 this->monitors = malloc( this->monitor_count * sizeof( *this->monitors ) );
    821                 for( int i = 0; i < this->monitor_count; i++ ) {
    822                         this->monitors[i] = thrd->monitors.list[i];
    823                 }
    824         }
    825 }
    826 
    827 static inline [thread_desc *, int] search_entry_queue( const __waitfor_mask_t & mask, monitor_desc ** monitors, int count ) {
    828 
    829         __thread_queue_t * entry_queue = &monitors[0]->entry_queue;
     817                this.monitor_count = thrd->monitors.size;
     818
     819                this.monitors = malloc( this.monitor_count * sizeof( *this.monitors ) );
     820                for( int i = 0; i < this.monitor_count; i++ ) {
     821                        this.monitors[i] = thrd->monitors.list[i];
     822                }
     823        }
     824}
     825
     826static inline [thread_desc *, int] search_entry_queue( const __waitfor_mask_t & mask, monitor_desc * monitors [], __lock_size_t count ) {
     827
     828        __thread_queue_t & entry_queue = monitors[0]->entry_queue;
    830829
    831830        // For each thread in the entry-queue
    832         for(    thread_desc ** thrd_it = &entry_queue->head;
     831        for(    thread_desc ** thrd_it = &entry_queue.head;
    833832                *thrd_it;
    834833                thrd_it = &(*thrd_it)->next
     
    852851
    853852forall(dtype T | sized( T ))
    854 static inline short insert_unique( T ** array, short & size, T * val ) {
     853static inline __lock_size_t insert_unique( T * array [], __lock_size_t & size, T * val ) {
    855854        if( !val ) return size;
    856855
    857         for(int i = 0; i <= size; i++) {
     856        for( __lock_size_t i = 0; i <= size; i++) {
    858857                if( array[i] == val ) return size;
    859858        }
     
    864863}
    865864
    866 static inline short count_max( const __waitfor_mask_t & mask ) {
    867         short max = 0;
    868         for( int i = 0; i < mask.size; i++ ) {
     865static inline __lock_size_t count_max( const __waitfor_mask_t & mask ) {
     866        __lock_size_t max = 0;
     867        for( __lock_size_t i = 0; i < mask.size; i++ ) {
    869868                max += mask.clauses[i].size;
    870869        }
     
    872871}
    873872
    874 static inline short aggregate( monitor_desc ** storage, const __waitfor_mask_t & mask ) {
    875         short size = 0;
    876         for( int i = 0; i < mask.size; i++ ) {
     873static inline __lock_size_t aggregate( monitor_desc * storage [], const __waitfor_mask_t & mask ) {
     874        __lock_size_t size = 0;
     875        for( __lock_size_t i = 0; i < mask.size; i++ ) {
    877876                __libcfa_small_sort( mask.clauses[i].list, mask.clauses[i].size );
    878                 for( int j = 0; j < mask.clauses[i].size; j++) {
     877                for( __lock_size_t j = 0; j < mask.clauses[i].size; j++) {
    879878                        insert_unique( storage, size, mask.clauses[i].list[j] );
    880879                }
     
    890889}
    891890
    892 void append( __condition_blocked_queue_t * this, __condition_node_t * c ) {
    893         verify(this->tail != NULL);
    894         *this->tail = c;
    895         this->tail = &c->next;
    896 }
    897 
    898 __condition_node_t * pop_head( __condition_blocked_queue_t * this ) {
    899         __condition_node_t * head = this->head;
     891void append( __condition_blocked_queue_t & this, __condition_node_t * c ) {
     892        verify(this.tail != NULL);
     893        *this.tail = c;
     894        this.tail = &c->next;
     895}
     896
     897__condition_node_t * pop_head( __condition_blocked_queue_t & this ) {
     898        __condition_node_t * head = this.head;
    900899        if( head ) {
    901                 this->head = head->next;
     900                this.head = head->next;
    902901                if( !head->next ) {
    903                         this->tail = &this->head;
     902                        this.tail = &this.head;
    904903                }
    905904                head->next = NULL;
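
The condition and monitor hunks above make two systematic interface changes: the condition-variable routines now take reference receivers (condition & this, accessed with this.) rather than pointer receivers (condition * this, accessed with this->), and the assorted short/int counters for monitor and lock counts are replaced by the __lock_size_t alias. The sketch below is illustrative only and is not part of the changeset; it assumes the signal, signal_block, front and is_empty routines shown or used in the hunks above, while the monitor type M and the routine notify are hypothetical names invented for the example.

    // Hypothetical caller written against the reference-based condition interface
    // (CFA syntax).  Under the old interface a caller passed &m.c; it now passes m.c.
    monitor M { condition c; };                 // hypothetical monitor owning a condition

    void notify( M & mutex m ) {                // mutex parameter: the monitor is held on entry
        if ( ! is_empty( m.c ) ) {              // is_empty( condition & ), as used in signal() above
            uintptr_t info = front( m.c );      // user_info of the longest-waiting thread (unused here)
            signal( m.c );                      // queue the waiter on the signal stack; the signaller continues
            // signal_block( m.c ) would instead hand the monitor to the signalled thread
            // and block the signaller, as in the signal_block() hunk above.
        }
    }

The same reference style is applied to the internal queue helpers (append and pop_head on __condition_blocked_queue_t), and the array parameters of set_owner, set_mask, lock_all, unlock_all, save and restore now use the T * name [] form with __lock_size_t counts in place of T ** with short.
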
  • src/tests/.expect/32/literals.txt

    r136ccd7 r4ee36bf0  
    55__attribute__ ((__nothrow__,__leaf__,__noreturn__)) extern void exit(signed int __status);
    66extern signed int printf(const char *__restrict __format, ...);
    7 void __for_each__A2_0_0_0____operator_assign__PFt0_Rt0t0____constructor__PF_Rt0____constructor__PF_Rt0t0____destructor__PF_Rt0____operator_assign__PFt1_Rt1t1____constructor__PF_Rt1____constructor__PF_Rt1t1____destructor__PF_Rt1____operator_preincr__PFt0_Rt0____operator_predecr__PFt0_Rt0____operator_equal__PFi_t0t0____operator_notequal__PFi_t0t0____operator_deref__PFRt1_t0__F_t0t0PF_t1___1(__attribute__ ((unused)) void (*_adapterF_9telt_type__P)(void (*__anonymous_object0)(), void *__anonymous_object1), __attribute__ ((unused)) void *(*_adapterFP9telt_type_14titerator_type_M_P)(void (*__anonymous_object2)(), void *__anonymous_object3), __attribute__ ((unused)) signed int (*_adapterFi_14titerator_type14titerator_type_M_PP)(void (*__anonymous_object4)(), void *__anonymous_object5, void *__anonymous_object6), __attribute__ ((unused)) void (*_adapterF14titerator_type_P14titerator_type_P_M)(void (*__anonymous_object7)(), __attribute__ ((unused)) void *___retval__operator_preincr__14titerator_type_1, void *__anonymous_object8), __attribute__ ((unused)) void (*_adapterF_P9telt_type9telt_type__MP)(void (*__anonymous_object9)(), void *__anonymous_object10, void *__anonymous_object11), __attribute__ ((unused)) void (*_adapterF9telt_type_P9telt_type9telt_type_P_MP)(void (*__anonymous_object12)(), __attribute__ ((unused)) void *___retval__operator_assign__9telt_type_1, void *__anonymous_object13, void *__anonymous_object14), __attribute__ ((unused)) void (*_adapterF_P14titerator_type14titerator_type__MP)(void (*__anonymous_object15)(), void *__anonymous_object16, void *__anonymous_object17), __attribute__ ((unused)) void (*_adapterF14titerator_type_P14titerator_type14titerator_type_P_MP)(void (*__anonymous_object18)(), __attribute__ ((unused)) void *___retval__operator_assign__14titerator_type_1, void *__anonymous_object19, void *__anonymous_object20), __attribute__ ((unused)) unsigned long int _sizeof_14titerator_type, __attribute__ ((unused)) unsigned long int _alignof_14titerator_type, __attribute__ ((unused)) unsigned long int _sizeof_9telt_type, __attribute__ ((unused)) unsigned long int _alignof_9telt_type, __attribute__ ((unused)) void *(*___operator_assign__PF14titerator_type_R14titerator_type14titerator_type__1)(void *__anonymous_object21, void *__anonymous_object22), __attribute__ ((unused)) void (*___constructor__PF_R14titerator_type__1)(void *__anonymous_object23), __attribute__ ((unused)) void (*___constructor__PF_R14titerator_type14titerator_type__1)(void *__anonymous_object24, void *__anonymous_object25), __attribute__ ((unused)) void (*___destructor__PF_R14titerator_type__1)(void *__anonymous_object26), __attribute__ ((unused)) void *(*___operator_assign__PF9telt_type_R9telt_type9telt_type__1)(void *__anonymous_object27, void *__anonymous_object28), __attribute__ ((unused)) void (*___constructor__PF_R9telt_type__1)(void *__anonymous_object29), __attribute__ ((unused)) void (*___constructor__PF_R9telt_type9telt_type__1)(void *__anonymous_object30, void *__anonymous_object31), __attribute__ ((unused)) void (*___destructor__PF_R9telt_type__1)(void *__anonymous_object32), __attribute__ ((unused)) void *(*___operator_preincr__PF14titerator_type_R14titerator_type__1)(void *__anonymous_object33), __attribute__ ((unused)) void *(*___operator_predecr__PF14titerator_type_R14titerator_type__1)(void *__anonymous_object34), __attribute__ ((unused)) signed int (*___operator_equal__PFi_14titerator_type14titerator_type__1)(void *__anonymous_object35, void *__anonymous_object36), __attribute__ 
((unused)) signed int (*___operator_notequal__PFi_14titerator_type14titerator_type__1)(void *__anonymous_object37, void *__anonymous_object38), __attribute__ ((unused)) void *(*___operator_deref__PFR9telt_type_14titerator_type__1)(void *__anonymous_object39), void *__begin__14titerator_type_1, void *__end__14titerator_type_1, void (*__func__PF_9telt_type__1)(void *__anonymous_object40));
    8 void __for_each_reverse__A2_0_0_0____operator_assign__PFt0_Rt0t0____constructor__PF_Rt0____constructor__PF_Rt0t0____destructor__PF_Rt0____operator_assign__PFt1_Rt1t1____constructor__PF_Rt1____constructor__PF_Rt1t1____destructor__PF_Rt1____operator_preincr__PFt0_Rt0____operator_predecr__PFt0_Rt0____operator_equal__PFi_t0t0____operator_notequal__PFi_t0t0____operator_deref__PFRt1_t0__F_t0t0PF_t1___1(__attribute__ ((unused)) void (*_adapterF_9telt_type__P)(void (*__anonymous_object41)(), void *__anonymous_object42), __attribute__ ((unused)) void *(*_adapterFP9telt_type_14titerator_type_M_P)(void (*__anonymous_object43)(), void *__anonymous_object44), __attribute__ ((unused)) signed int (*_adapterFi_14titerator_type14titerator_type_M_PP)(void (*__anonymous_object45)(), void *__anonymous_object46, void *__anonymous_object47), __attribute__ ((unused)) void (*_adapterF14titerator_type_P14titerator_type_P_M)(void (*__anonymous_object48)(), __attribute__ ((unused)) void *___retval__operator_preincr__14titerator_type_1, void *__anonymous_object49), __attribute__ ((unused)) void (*_adapterF_P9telt_type9telt_type__MP)(void (*__anonymous_object50)(), void *__anonymous_object51, void *__anonymous_object52), __attribute__ ((unused)) void (*_adapterF9telt_type_P9telt_type9telt_type_P_MP)(void (*__anonymous_object53)(), __attribute__ ((unused)) void *___retval__operator_assign__9telt_type_1, void *__anonymous_object54, void *__anonymous_object55), __attribute__ ((unused)) void (*_adapterF_P14titerator_type14titerator_type__MP)(void (*__anonymous_object56)(), void *__anonymous_object57, void *__anonymous_object58), __attribute__ ((unused)) void (*_adapterF14titerator_type_P14titerator_type14titerator_type_P_MP)(void (*__anonymous_object59)(), __attribute__ ((unused)) void *___retval__operator_assign__14titerator_type_1, void *__anonymous_object60, void *__anonymous_object61), __attribute__ ((unused)) unsigned long int _sizeof_14titerator_type, __attribute__ ((unused)) unsigned long int _alignof_14titerator_type, __attribute__ ((unused)) unsigned long int _sizeof_9telt_type, __attribute__ ((unused)) unsigned long int _alignof_9telt_type, __attribute__ ((unused)) void *(*___operator_assign__PF14titerator_type_R14titerator_type14titerator_type__1)(void *__anonymous_object62, void *__anonymous_object63), __attribute__ ((unused)) void (*___constructor__PF_R14titerator_type__1)(void *__anonymous_object64), __attribute__ ((unused)) void (*___constructor__PF_R14titerator_type14titerator_type__1)(void *__anonymous_object65, void *__anonymous_object66), __attribute__ ((unused)) void (*___destructor__PF_R14titerator_type__1)(void *__anonymous_object67), __attribute__ ((unused)) void *(*___operator_assign__PF9telt_type_R9telt_type9telt_type__1)(void *__anonymous_object68, void *__anonymous_object69), __attribute__ ((unused)) void (*___constructor__PF_R9telt_type__1)(void *__anonymous_object70), __attribute__ ((unused)) void (*___constructor__PF_R9telt_type9telt_type__1)(void *__anonymous_object71, void *__anonymous_object72), __attribute__ ((unused)) void (*___destructor__PF_R9telt_type__1)(void *__anonymous_object73), __attribute__ ((unused)) void *(*___operator_preincr__PF14titerator_type_R14titerator_type__1)(void *__anonymous_object74), __attribute__ ((unused)) void *(*___operator_predecr__PF14titerator_type_R14titerator_type__1)(void *__anonymous_object75), __attribute__ ((unused)) signed int (*___operator_equal__PFi_14titerator_type14titerator_type__1)(void *__anonymous_object76, void *__anonymous_object77), 
__attribute__ ((unused)) signed int (*___operator_notequal__PFi_14titerator_type14titerator_type__1)(void *__anonymous_object78, void *__anonymous_object79), __attribute__ ((unused)) void *(*___operator_deref__PFR9telt_type_14titerator_type__1)(void *__anonymous_object80), void *__begin__14titerator_type_1, void *__end__14titerator_type_1, void (*__func__PF_9telt_type__1)(void *__anonymous_object81));
     7void __for_each__A0_2_0_0____operator_assign__PFd0_Rd0d0____constructor__PF_Rd0____constructor__PF_Rd0d0____destructor__PF_Rd0____operator_assign__PFd1_Rd1d1____constructor__PF_Rd1____constructor__PF_Rd1d1____destructor__PF_Rd1____operator_preincr__PFd0_Rd0____operator_predecr__PFd0_Rd0____operator_equal__PFi_d0d0____operator_notequal__PFi_d0d0____operator_deref__PFRd1_d0__F_d0d0PF_d1___1(__attribute__ ((unused)) void (*_adapterF_9telt_type__P)(void (*__anonymous_object0)(), void *__anonymous_object1), __attribute__ ((unused)) void *(*_adapterFP9telt_type_14titerator_type_M_P)(void (*__anonymous_object2)(), void *__anonymous_object3), __attribute__ ((unused)) signed int (*_adapterFi_14titerator_type14titerator_type_M_PP)(void (*__anonymous_object4)(), void *__anonymous_object5, void *__anonymous_object6), __attribute__ ((unused)) void (*_adapterF14titerator_type_P14titerator_type_P_M)(void (*__anonymous_object7)(), __attribute__ ((unused)) void *___retval__operator_preincr__14titerator_type_1, void *__anonymous_object8), __attribute__ ((unused)) void (*_adapterF_P9telt_type9telt_type__MP)(void (*__anonymous_object9)(), void *__anonymous_object10, void *__anonymous_object11), __attribute__ ((unused)) void (*_adapterF9telt_type_P9telt_type9telt_type_P_MP)(void (*__anonymous_object12)(), __attribute__ ((unused)) void *___retval__operator_assign__9telt_type_1, void *__anonymous_object13, void *__anonymous_object14), __attribute__ ((unused)) void (*_adapterF_P14titerator_type14titerator_type__MP)(void (*__anonymous_object15)(), void *__anonymous_object16, void *__anonymous_object17), __attribute__ ((unused)) void (*_adapterF14titerator_type_P14titerator_type14titerator_type_P_MP)(void (*__anonymous_object18)(), __attribute__ ((unused)) void *___retval__operator_assign__14titerator_type_1, void *__anonymous_object19, void *__anonymous_object20), __attribute__ ((unused)) unsigned long int _sizeof_14titerator_type, __attribute__ ((unused)) unsigned long int _alignof_14titerator_type, __attribute__ ((unused)) unsigned long int _sizeof_9telt_type, __attribute__ ((unused)) unsigned long int _alignof_9telt_type, __attribute__ ((unused)) void *(*___operator_assign__PF14titerator_type_R14titerator_type14titerator_type__1)(void *__anonymous_object21, void *__anonymous_object22), __attribute__ ((unused)) void (*___constructor__PF_R14titerator_type__1)(void *__anonymous_object23), __attribute__ ((unused)) void (*___constructor__PF_R14titerator_type14titerator_type__1)(void *__anonymous_object24, void *__anonymous_object25), __attribute__ ((unused)) void (*___destructor__PF_R14titerator_type__1)(void *__anonymous_object26), __attribute__ ((unused)) void *(*___operator_assign__PF9telt_type_R9telt_type9telt_type__1)(void *__anonymous_object27, void *__anonymous_object28), __attribute__ ((unused)) void (*___constructor__PF_R9telt_type__1)(void *__anonymous_object29), __attribute__ ((unused)) void (*___constructor__PF_R9telt_type9telt_type__1)(void *__anonymous_object30, void *__anonymous_object31), __attribute__ ((unused)) void (*___destructor__PF_R9telt_type__1)(void *__anonymous_object32), __attribute__ ((unused)) void *(*___operator_preincr__PF14titerator_type_R14titerator_type__1)(void *__anonymous_object33), __attribute__ ((unused)) void *(*___operator_predecr__PF14titerator_type_R14titerator_type__1)(void *__anonymous_object34), __attribute__ ((unused)) signed int (*___operator_equal__PFi_14titerator_type14titerator_type__1)(void *__anonymous_object35, void *__anonymous_object36), __attribute__ 
((unused)) signed int (*___operator_notequal__PFi_14titerator_type14titerator_type__1)(void *__anonymous_object37, void *__anonymous_object38), __attribute__ ((unused)) void *(*___operator_deref__PFR9telt_type_14titerator_type__1)(void *__anonymous_object39), void *__begin__14titerator_type_1, void *__end__14titerator_type_1, void (*__func__PF_9telt_type__1)(void *__anonymous_object40));
     8void __for_each_reverse__A0_2_0_0____operator_assign__PFd0_Rd0d0____constructor__PF_Rd0____constructor__PF_Rd0d0____destructor__PF_Rd0____operator_assign__PFd1_Rd1d1____constructor__PF_Rd1____constructor__PF_Rd1d1____destructor__PF_Rd1____operator_preincr__PFd0_Rd0____operator_predecr__PFd0_Rd0____operator_equal__PFi_d0d0____operator_notequal__PFi_d0d0____operator_deref__PFRd1_d0__F_d0d0PF_d1___1(__attribute__ ((unused)) void (*_adapterF_9telt_type__P)(void (*__anonymous_object41)(), void *__anonymous_object42), __attribute__ ((unused)) void *(*_adapterFP9telt_type_14titerator_type_M_P)(void (*__anonymous_object43)(), void *__anonymous_object44), __attribute__ ((unused)) signed int (*_adapterFi_14titerator_type14titerator_type_M_PP)(void (*__anonymous_object45)(), void *__anonymous_object46, void *__anonymous_object47), __attribute__ ((unused)) void (*_adapterF14titerator_type_P14titerator_type_P_M)(void (*__anonymous_object48)(), __attribute__ ((unused)) void *___retval__operator_preincr__14titerator_type_1, void *__anonymous_object49), __attribute__ ((unused)) void (*_adapterF_P9telt_type9telt_type__MP)(void (*__anonymous_object50)(), void *__anonymous_object51, void *__anonymous_object52), __attribute__ ((unused)) void (*_adapterF9telt_type_P9telt_type9telt_type_P_MP)(void (*__anonymous_object53)(), __attribute__ ((unused)) void *___retval__operator_assign__9telt_type_1, void *__anonymous_object54, void *__anonymous_object55), __attribute__ ((unused)) void (*_adapterF_P14titerator_type14titerator_type__MP)(void (*__anonymous_object56)(), void *__anonymous_object57, void *__anonymous_object58), __attribute__ ((unused)) void (*_adapterF14titerator_type_P14titerator_type14titerator_type_P_MP)(void (*__anonymous_object59)(), __attribute__ ((unused)) void *___retval__operator_assign__14titerator_type_1, void *__anonymous_object60, void *__anonymous_object61), __attribute__ ((unused)) unsigned long int _sizeof_14titerator_type, __attribute__ ((unused)) unsigned long int _alignof_14titerator_type, __attribute__ ((unused)) unsigned long int _sizeof_9telt_type, __attribute__ ((unused)) unsigned long int _alignof_9telt_type, __attribute__ ((unused)) void *(*___operator_assign__PF14titerator_type_R14titerator_type14titerator_type__1)(void *__anonymous_object62, void *__anonymous_object63), __attribute__ ((unused)) void (*___constructor__PF_R14titerator_type__1)(void *__anonymous_object64), __attribute__ ((unused)) void (*___constructor__PF_R14titerator_type14titerator_type__1)(void *__anonymous_object65, void *__anonymous_object66), __attribute__ ((unused)) void (*___destructor__PF_R14titerator_type__1)(void *__anonymous_object67), __attribute__ ((unused)) void *(*___operator_assign__PF9telt_type_R9telt_type9telt_type__1)(void *__anonymous_object68, void *__anonymous_object69), __attribute__ ((unused)) void (*___constructor__PF_R9telt_type__1)(void *__anonymous_object70), __attribute__ ((unused)) void (*___constructor__PF_R9telt_type9telt_type__1)(void *__anonymous_object71, void *__anonymous_object72), __attribute__ ((unused)) void (*___destructor__PF_R9telt_type__1)(void *__anonymous_object73), __attribute__ ((unused)) void *(*___operator_preincr__PF14titerator_type_R14titerator_type__1)(void *__anonymous_object74), __attribute__ ((unused)) void *(*___operator_predecr__PF14titerator_type_R14titerator_type__1)(void *__anonymous_object75), __attribute__ ((unused)) signed int (*___operator_equal__PFi_14titerator_type14titerator_type__1)(void *__anonymous_object76, void *__anonymous_object77), 
__attribute__ ((unused)) signed int (*___operator_notequal__PFi_14titerator_type14titerator_type__1)(void *__anonymous_object78, void *__anonymous_object79), __attribute__ ((unused)) void *(*___operator_deref__PFR9telt_type_14titerator_type__1)(void *__anonymous_object80), void *__begin__14titerator_type_1, void *__end__14titerator_type_1, void (*__func__PF_9telt_type__1)(void *__anonymous_object81));
    99void *___operator_bitor__A0_1_0_0___sepPrt__PFb_Pd0___sepReset__PF_Pd0___sepReset__PF_Pd0b___sepGetCur__PFPCc_Pd0___sepSetCur__PF_Pd0PCc___getNL__PFb_Pd0___setNL__PF_Pd0b___sepOn__PF_Pd0___sepOff__PF_Pd0___sepDisable__PFb_Pd0___sepEnable__PFb_Pd0___sepGet__PFPCc_Pd0___sepSet__PF_Pd0PCc___sepGetTuple__PFPCc_Pd0___sepSetTuple__PF_Pd0PCc___fail__PFi_Pd0___flush__PFi_Pd0___open__PF_Pd0PCcPCc___close__PF_Pd0___write__PFPd0_Pd0PCcUl___fmt__PFi_Pd0PCc__FPd0_Pd0c__1(__attribute__ ((unused)) _Bool (*__sepPrt__PFb_P7tostype__1)(void *__anonymous_object82), __attribute__ ((unused)) void (*__sepReset__PF_P7tostype__1)(void *__anonymous_object83), __attribute__ ((unused)) void (*__sepReset__PF_P7tostypeb__1)(void *__anonymous_object84, _Bool __anonymous_object85), __attribute__ ((unused)) const char *(*__sepGetCur__PFPCc_P7tostype__1)(void *__anonymous_object86), __attribute__ ((unused)) void (*__sepSetCur__PF_P7tostypePCc__1)(void *__anonymous_object87, const char *__anonymous_object88), __attribute__ ((unused)) _Bool (*__getNL__PFb_P7tostype__1)(void *__anonymous_object89), __attribute__ ((unused)) void (*__setNL__PF_P7tostypeb__1)(void *__anonymous_object90, _Bool __anonymous_object91), __attribute__ ((unused)) void (*__sepOn__PF_P7tostype__1)(void *__anonymous_object92), __attribute__ ((unused)) void (*__sepOff__PF_P7tostype__1)(void *__anonymous_object93), __attribute__ ((unused)) _Bool (*__sepDisable__PFb_P7tostype__1)(void *__anonymous_object94), __attribute__ ((unused)) _Bool (*__sepEnable__PFb_P7tostype__1)(void *__anonymous_object95), __attribute__ ((unused)) const char *(*__sepGet__PFPCc_P7tostype__1)(void *__anonymous_object96), __attribute__ ((unused)) void (*__sepSet__PF_P7tostypePCc__1)(void *__anonymous_object97, const char *__anonymous_object98), __attribute__ ((unused)) const char *(*__sepGetTuple__PFPCc_P7tostype__1)(void *__anonymous_object99), __attribute__ ((unused)) void (*__sepSetTuple__PF_P7tostypePCc__1)(void *__anonymous_object100, const char *__anonymous_object101), __attribute__ ((unused)) signed int (*__fail__PFi_P7tostype__1)(void *__anonymous_object102), __attribute__ ((unused)) signed int (*__flush__PFi_P7tostype__1)(void *__anonymous_object103), __attribute__ ((unused)) void (*__open__PF_P7tostypePCcPCc__1)(void *__os__P7tostype_1, const char *__name__PCc_1, const char *__mode__PCc_1), __attribute__ ((unused)) void (*__close__PF_P7tostype__1)(void *__os__P7tostype_1), __attribute__ ((unused)) void *(*__write__PFP7tostype_P7tostypePCcUl__1)(void *__anonymous_object104, const char *__anonymous_object105, unsigned long int __anonymous_object106), __attribute__ ((unused)) signed int (*__fmt__PFi_P7tostypePCc__1)(void *__anonymous_object107, const char *__fmt__PCc_1, ...), void *__anonymous_object108, char __anonymous_object109);
    1010void *___operator_bitor__A0_1_0_0___sepPrt__PFb_Pd0___sepReset__PF_Pd0___sepReset__PF_Pd0b___sepGetCur__PFPCc_Pd0___sepSetCur__PF_Pd0PCc___getNL__PFb_Pd0___setNL__PF_Pd0b___sepOn__PF_Pd0___sepOff__PF_Pd0___sepDisable__PFb_Pd0___sepEnable__PFb_Pd0___sepGet__PFPCc_Pd0___sepSet__PF_Pd0PCc___sepGetTuple__PFPCc_Pd0___sepSetTuple__PF_Pd0PCc___fail__PFi_Pd0___flush__PFi_Pd0___open__PF_Pd0PCcPCc___close__PF_Pd0___write__PFPd0_Pd0PCcUl___fmt__PFi_Pd0PCc__FPd0_Pd0Sc__1(__attribute__ ((unused)) _Bool (*__sepPrt__PFb_P7tostype__1)(void *__anonymous_object110), __attribute__ ((unused)) void (*__sepReset__PF_P7tostype__1)(void *__anonymous_object111), __attribute__ ((unused)) void (*__sepReset__PF_P7tostypeb__1)(void *__anonymous_object112, _Bool __anonymous_object113), __attribute__ ((unused)) const char *(*__sepGetCur__PFPCc_P7tostype__1)(void *__anonymous_object114), __attribute__ ((unused)) void (*__sepSetCur__PF_P7tostypePCc__1)(void *__anonymous_object115, const char *__anonymous_object116), __attribute__ ((unused)) _Bool (*__getNL__PFb_P7tostype__1)(void *__anonymous_object117), __attribute__ ((unused)) void (*__setNL__PF_P7tostypeb__1)(void *__anonymous_object118, _Bool __anonymous_object119), __attribute__ ((unused)) void (*__sepOn__PF_P7tostype__1)(void *__anonymous_object120), __attribute__ ((unused)) void (*__sepOff__PF_P7tostype__1)(void *__anonymous_object121), __attribute__ ((unused)) _Bool (*__sepDisable__PFb_P7tostype__1)(void *__anonymous_object122), __attribute__ ((unused)) _Bool (*__sepEnable__PFb_P7tostype__1)(void *__anonymous_object123), __attribute__ ((unused)) const char *(*__sepGet__PFPCc_P7tostype__1)(void *__anonymous_object124), __attribute__ ((unused)) void (*__sepSet__PF_P7tostypePCc__1)(void *__anonymous_object125, const char *__anonymous_object126), __attribute__ ((unused)) const char *(*__sepGetTuple__PFPCc_P7tostype__1)(void *__anonymous_object127), __attribute__ ((unused)) void (*__sepSetTuple__PF_P7tostypePCc__1)(void *__anonymous_object128, const char *__anonymous_object129), __attribute__ ((unused)) signed int (*__fail__PFi_P7tostype__1)(void *__anonymous_object130), __attribute__ ((unused)) signed int (*__flush__PFi_P7tostype__1)(void *__anonymous_object131), __attribute__ ((unused)) void (*__open__PF_P7tostypePCcPCc__1)(void *__os__P7tostype_1, const char *__name__PCc_1, const char *__mode__PCc_1), __attribute__ ((unused)) void (*__close__PF_P7tostype__1)(void *__os__P7tostype_1), __attribute__ ((unused)) void *(*__write__PFP7tostype_P7tostypePCcUl__1)(void *__anonymous_object132, const char *__anonymous_object133, unsigned long int __anonymous_object134), __attribute__ ((unused)) signed int (*__fmt__PFi_P7tostypePCc__1)(void *__anonymous_object135, const char *__fmt__PCc_1, ...), void *__anonymous_object136, signed char __anonymous_object137);
     
    2929void *___operator_bitor__A0_1_0_0___sepPrt__PFb_Pd0___sepReset__PF_Pd0___sepReset__PF_Pd0b___sepGetCur__PFPCc_Pd0___sepSetCur__PF_Pd0PCc___getNL__PFb_Pd0___setNL__PF_Pd0b___sepOn__PF_Pd0___sepOff__PF_Pd0___sepDisable__PFb_Pd0___sepEnable__PFb_Pd0___sepGet__PFPCc_Pd0___sepSet__PF_Pd0PCc___sepGetTuple__PFPCc_Pd0___sepSetTuple__PF_Pd0PCc___fail__PFi_Pd0___flush__PFi_Pd0___open__PF_Pd0PCcPCc___close__PF_Pd0___write__PFPd0_Pd0PCcUl___fmt__PFi_Pd0PCc__FPd0_Pd0PCl__1(__attribute__ ((unused)) _Bool (*__sepPrt__PFb_P7tostype__1)(void *__anonymous_object642), __attribute__ ((unused)) void (*__sepReset__PF_P7tostype__1)(void *__anonymous_object643), __attribute__ ((unused)) void (*__sepReset__PF_P7tostypeb__1)(void *__anonymous_object644, _Bool __anonymous_object645), __attribute__ ((unused)) const char *(*__sepGetCur__PFPCc_P7tostype__1)(void *__anonymous_object646), __attribute__ ((unused)) void (*__sepSetCur__PF_P7tostypePCc__1)(void *__anonymous_object647, const char *__anonymous_object648), __attribute__ ((unused)) _Bool (*__getNL__PFb_P7tostype__1)(void *__anonymous_object649), __attribute__ ((unused)) void (*__setNL__PF_P7tostypeb__1)(void *__anonymous_object650, _Bool __anonymous_object651), __attribute__ ((unused)) void (*__sepOn__PF_P7tostype__1)(void *__anonymous_object652), __attribute__ ((unused)) void (*__sepOff__PF_P7tostype__1)(void *__anonymous_object653), __attribute__ ((unused)) _Bool (*__sepDisable__PFb_P7tostype__1)(void *__anonymous_object654), __attribute__ ((unused)) _Bool (*__sepEnable__PFb_P7tostype__1)(void *__anonymous_object655), __attribute__ ((unused)) const char *(*__sepGet__PFPCc_P7tostype__1)(void *__anonymous_object656), __attribute__ ((unused)) void (*__sepSet__PF_P7tostypePCc__1)(void *__anonymous_object657, const char *__anonymous_object658), __attribute__ ((unused)) const char *(*__sepGetTuple__PFPCc_P7tostype__1)(void *__anonymous_object659), __attribute__ ((unused)) void (*__sepSetTuple__PF_P7tostypePCc__1)(void *__anonymous_object660, const char *__anonymous_object661), __attribute__ ((unused)) signed int (*__fail__PFi_P7tostype__1)(void *__anonymous_object662), __attribute__ ((unused)) signed int (*__flush__PFi_P7tostype__1)(void *__anonymous_object663), __attribute__ ((unused)) void (*__open__PF_P7tostypePCcPCc__1)(void *__os__P7tostype_1, const char *__name__PCc_1, const char *__mode__PCc_1), __attribute__ ((unused)) void (*__close__PF_P7tostype__1)(void *__os__P7tostype_1), __attribute__ ((unused)) void *(*__write__PFP7tostype_P7tostypePCcUl__1)(void *__anonymous_object664, const char *__anonymous_object665, unsigned long int __anonymous_object666), __attribute__ ((unused)) signed int (*__fmt__PFi_P7tostypePCc__1)(void *__anonymous_object667, const char *__fmt__PCc_1, ...), void *__anonymous_object668, const signed long int *__anonymous_object669);
    3030void *___operator_bitor__A0_1_0_0___sepPrt__PFb_Pd0___sepReset__PF_Pd0___sepReset__PF_Pd0b___sepGetCur__PFPCc_Pd0___sepSetCur__PF_Pd0PCc___getNL__PFb_Pd0___setNL__PF_Pd0b___sepOn__PF_Pd0___sepOff__PF_Pd0___sepDisable__PFb_Pd0___sepEnable__PFb_Pd0___sepGet__PFPCc_Pd0___sepSet__PF_Pd0PCc___sepGetTuple__PFPCc_Pd0___sepSetTuple__PF_Pd0PCc___fail__PFi_Pd0___flush__PFi_Pd0___open__PF_Pd0PCcPCc___close__PF_Pd0___write__PFPd0_Pd0PCcUl___fmt__PFi_Pd0PCc__FPd0_Pd0PCv__1(__attribute__ ((unused)) _Bool (*__sepPrt__PFb_P7tostype__1)(void *__anonymous_object670), __attribute__ ((unused)) void (*__sepReset__PF_P7tostype__1)(void *__anonymous_object671), __attribute__ ((unused)) void (*__sepReset__PF_P7tostypeb__1)(void *__anonymous_object672, _Bool __anonymous_object673), __attribute__ ((unused)) const char *(*__sepGetCur__PFPCc_P7tostype__1)(void *__anonymous_object674), __attribute__ ((unused)) void (*__sepSetCur__PF_P7tostypePCc__1)(void *__anonymous_object675, const char *__anonymous_object676), __attribute__ ((unused)) _Bool (*__getNL__PFb_P7tostype__1)(void *__anonymous_object677), __attribute__ ((unused)) void (*__setNL__PF_P7tostypeb__1)(void *__anonymous_object678, _Bool __anonymous_object679), __attribute__ ((unused)) void (*__sepOn__PF_P7tostype__1)(void *__anonymous_object680), __attribute__ ((unused)) void (*__sepOff__PF_P7tostype__1)(void *__anonymous_object681), __attribute__ ((unused)) _Bool (*__sepDisable__PFb_P7tostype__1)(void *__anonymous_object682), __attribute__ ((unused)) _Bool (*__sepEnable__PFb_P7tostype__1)(void *__anonymous_object683), __attribute__ ((unused)) const char *(*__sepGet__PFPCc_P7tostype__1)(void *__anonymous_object684), __attribute__ ((unused)) void (*__sepSet__PF_P7tostypePCc__1)(void *__anonymous_object685, const char *__anonymous_object686), __attribute__ ((unused)) const char *(*__sepGetTuple__PFPCc_P7tostype__1)(void *__anonymous_object687), __attribute__ ((unused)) void (*__sepSetTuple__PF_P7tostypePCc__1)(void *__anonymous_object688, const char *__anonymous_object689), __attribute__ ((unused)) signed int (*__fail__PFi_P7tostype__1)(void *__anonymous_object690), __attribute__ ((unused)) signed int (*__flush__PFi_P7tostype__1)(void *__anonymous_object691), __attribute__ ((unused)) void (*__open__PF_P7tostypePCcPCc__1)(void *__os__P7tostype_1, const char *__name__PCc_1, const char *__mode__PCc_1), __attribute__ ((unused)) void (*__close__PF_P7tostype__1)(void *__os__P7tostype_1), __attribute__ ((unused)) void *(*__write__PFP7tostype_P7tostypePCcUl__1)(void *__anonymous_object692, const char *__anonymous_object693, unsigned long int __anonymous_object694), __attribute__ ((unused)) signed int (*__fmt__PFi_P7tostypePCc__1)(void *__anonymous_object695, const char *__fmt__PCc_1, ...), void *__anonymous_object696, const void *__anonymous_object697);
    31 void *___operator_bitor__A1_1_0_1____operator_assign__PFt1_Rt1t1____constructor__PF_Rt1____constructor__PF_Rt1t1____destructor__PF_Rt1____operator_bitor__PFPd0_Pd0t1___sepPrt__PFb_Pd0___sepReset__PF_Pd0___sepReset__PF_Pd0b___sepGetCur__PFPCc_Pd0___sepSetCur__PF_Pd0PCc___getNL__PFb_Pd0___setNL__PF_Pd0b___sepOn__PF_Pd0___sepOff__PF_Pd0___sepDisable__PFb_Pd0___sepEnable__PFb_Pd0___sepGet__PFPCc_Pd0___sepSet__PF_Pd0PCc___sepGetTuple__PFPCc_Pd0___sepSetTuple__PF_Pd0PCc___fail__PFi_Pd0___flush__PFi_Pd0___open__PF_Pd0PCcPCc___close__PF_Pd0___write__PFPd0_Pd0PCcUl___fmt__PFi_Pd0PCc____operator_bitor__PFPd0_Pd0tVARGS2__FPd0_Pd0t1tVARGS2__1(__attribute__ ((unused)) void *(*_adapterFP7tostype_P7tostype7tParams_M_MP)(void (*__anonymous_object698)(), void *__anonymous_object699, void *__anonymous_object700), __attribute__ ((unused)) void *(*_adapterFP7tostype_P7tostype2tT_M_MP)(void (*__anonymous_object701)(), void *__anonymous_object702, void *__anonymous_object703), __attribute__ ((unused)) void (*_adapterF_P2tT2tT__MP)(void (*__anonymous_object704)(), void *__anonymous_object705, void *__anonymous_object706), __attribute__ ((unused)) void (*_adapterF2tT_P2tT2tT_P_MP)(void (*__anonymous_object707)(), __attribute__ ((unused)) void *___retval__operator_assign__2tT_1, void *__anonymous_object708, void *__anonymous_object709), __attribute__ ((unused)) unsigned long int _sizeof_2tT, __attribute__ ((unused)) unsigned long int _alignof_2tT, __attribute__ ((unused)) unsigned long int _sizeof_7tParams, __attribute__ ((unused)) unsigned long int _alignof_7tParams, __attribute__ ((unused)) void *(*___operator_assign__PF2tT_R2tT2tT__1)(void *__anonymous_object710, void *__anonymous_object711), __attribute__ ((unused)) void (*___constructor__PF_R2tT__1)(void *__anonymous_object712), __attribute__ ((unused)) void (*___constructor__PF_R2tT2tT__1)(void *__anonymous_object713, void *__anonymous_object714), __attribute__ ((unused)) void (*___destructor__PF_R2tT__1)(void *__anonymous_object715), __attribute__ ((unused)) void *(*___operator_bitor__PFP7tostype_P7tostype2tT__1)(void *__anonymous_object716, void *__anonymous_object717), __attribute__ ((unused)) _Bool (*__sepPrt__PFb_P7tostype__1)(void *__anonymous_object718), __attribute__ ((unused)) void (*__sepReset__PF_P7tostype__1)(void *__anonymous_object719), __attribute__ ((unused)) void (*__sepReset__PF_P7tostypeb__1)(void *__anonymous_object720, _Bool __anonymous_object721), __attribute__ ((unused)) const char *(*__sepGetCur__PFPCc_P7tostype__1)(void *__anonymous_object722), __attribute__ ((unused)) void (*__sepSetCur__PF_P7tostypePCc__1)(void *__anonymous_object723, const char *__anonymous_object724), __attribute__ ((unused)) _Bool (*__getNL__PFb_P7tostype__1)(void *__anonymous_object725), __attribute__ ((unused)) void (*__setNL__PF_P7tostypeb__1)(void *__anonymous_object726, _Bool __anonymous_object727), __attribute__ ((unused)) void (*__sepOn__PF_P7tostype__1)(void *__anonymous_object728), __attribute__ ((unused)) void (*__sepOff__PF_P7tostype__1)(void *__anonymous_object729), __attribute__ ((unused)) _Bool (*__sepDisable__PFb_P7tostype__1)(void *__anonymous_object730), __attribute__ ((unused)) _Bool (*__sepEnable__PFb_P7tostype__1)(void *__anonymous_object731), __attribute__ ((unused)) const char *(*__sepGet__PFPCc_P7tostype__1)(void *__anonymous_object732), __attribute__ ((unused)) void (*__sepSet__PF_P7tostypePCc__1)(void *__anonymous_object733, const char *__anonymous_object734), __attribute__ ((unused)) const char 
*(*__sepGetTuple__PFPCc_P7tostype__1)(void *__anonymous_object735), __attribute__ ((unused)) void (*__sepSetTuple__PF_P7tostypePCc__1)(void *__anonymous_object736, const char *__anonymous_object737), __attribute__ ((unused)) signed int (*__fail__PFi_P7tostype__1)(void *__anonymous_object738), __attribute__ ((unused)) signed int (*__flush__PFi_P7tostype__1)(void *__anonymous_object739), __attribute__ ((unused)) void (*__open__PF_P7tostypePCcPCc__1)(void *__os__P7tostype_1, const char *__name__PCc_1, const char *__mode__PCc_1), __attribute__ ((unused)) void (*__close__PF_P7tostype__1)(void *__os__P7tostype_1), __attribute__ ((unused)) void *(*__write__PFP7tostype_P7tostypePCcUl__1)(void *__anonymous_object740, const char *__anonymous_object741, unsigned long int __anonymous_object742), __attribute__ ((unused)) signed int (*__fmt__PFi_P7tostypePCc__1)(void *__anonymous_object743, const char *__fmt__PCc_1, ...), __attribute__ ((unused)) void *(*___operator_bitor__PFP7tostype_P7tostype7tParams__1)(void *__anonymous_object744, void *__anonymous_object745), void *__os__P7tostype_1, void *__arg__2tT_1, void *__rest__7tParams_1);
     31void *___operator_bitor__A0_2_0_1____operator_assign__PFd1_Rd1d1____constructor__PF_Rd1____constructor__PF_Rd1d1____destructor__PF_Rd1____operator_bitor__PFPd0_Pd0d1___sepPrt__PFb_Pd0___sepReset__PF_Pd0___sepReset__PF_Pd0b___sepGetCur__PFPCc_Pd0___sepSetCur__PF_Pd0PCc___getNL__PFb_Pd0___setNL__PF_Pd0b___sepOn__PF_Pd0___sepOff__PF_Pd0___sepDisable__PFb_Pd0___sepEnable__PFb_Pd0___sepGet__PFPCc_Pd0___sepSet__PF_Pd0PCc___sepGetTuple__PFPCc_Pd0___sepSetTuple__PF_Pd0PCc___fail__PFi_Pd0___flush__PFi_Pd0___open__PF_Pd0PCcPCc___close__PF_Pd0___write__PFPd0_Pd0PCcUl___fmt__PFi_Pd0PCc____operator_bitor__PFPd0_Pd0tVARGS2__FPd0_Pd0d1tVARGS2__1(__attribute__ ((unused)) void *(*_adapterFP7tostype_P7tostype7tParams_M_MP)(void (*__anonymous_object698)(), void *__anonymous_object699, void *__anonymous_object700), __attribute__ ((unused)) void *(*_adapterFP7tostype_P7tostype2tT_M_MP)(void (*__anonymous_object701)(), void *__anonymous_object702, void *__anonymous_object703), __attribute__ ((unused)) void (*_adapterF_P2tT2tT__MP)(void (*__anonymous_object704)(), void *__anonymous_object705, void *__anonymous_object706), __attribute__ ((unused)) void (*_adapterF2tT_P2tT2tT_P_MP)(void (*__anonymous_object707)(), __attribute__ ((unused)) void *___retval__operator_assign__2tT_1, void *__anonymous_object708, void *__anonymous_object709), __attribute__ ((unused)) unsigned long int _sizeof_2tT, __attribute__ ((unused)) unsigned long int _alignof_2tT, __attribute__ ((unused)) unsigned long int _sizeof_7tParams, __attribute__ ((unused)) unsigned long int _alignof_7tParams, __attribute__ ((unused)) void *(*___operator_assign__PF2tT_R2tT2tT__1)(void *__anonymous_object710, void *__anonymous_object711), __attribute__ ((unused)) void (*___constructor__PF_R2tT__1)(void *__anonymous_object712), __attribute__ ((unused)) void (*___constructor__PF_R2tT2tT__1)(void *__anonymous_object713, void *__anonymous_object714), __attribute__ ((unused)) void (*___destructor__PF_R2tT__1)(void *__anonymous_object715), __attribute__ ((unused)) void *(*___operator_bitor__PFP7tostype_P7tostype2tT__1)(void *__anonymous_object716, void *__anonymous_object717), __attribute__ ((unused)) _Bool (*__sepPrt__PFb_P7tostype__1)(void *__anonymous_object718), __attribute__ ((unused)) void (*__sepReset__PF_P7tostype__1)(void *__anonymous_object719), __attribute__ ((unused)) void (*__sepReset__PF_P7tostypeb__1)(void *__anonymous_object720, _Bool __anonymous_object721), __attribute__ ((unused)) const char *(*__sepGetCur__PFPCc_P7tostype__1)(void *__anonymous_object722), __attribute__ ((unused)) void (*__sepSetCur__PF_P7tostypePCc__1)(void *__anonymous_object723, const char *__anonymous_object724), __attribute__ ((unused)) _Bool (*__getNL__PFb_P7tostype__1)(void *__anonymous_object725), __attribute__ ((unused)) void (*__setNL__PF_P7tostypeb__1)(void *__anonymous_object726, _Bool __anonymous_object727), __attribute__ ((unused)) void (*__sepOn__PF_P7tostype__1)(void *__anonymous_object728), __attribute__ ((unused)) void (*__sepOff__PF_P7tostype__1)(void *__anonymous_object729), __attribute__ ((unused)) _Bool (*__sepDisable__PFb_P7tostype__1)(void *__anonymous_object730), __attribute__ ((unused)) _Bool (*__sepEnable__PFb_P7tostype__1)(void *__anonymous_object731), __attribute__ ((unused)) const char *(*__sepGet__PFPCc_P7tostype__1)(void *__anonymous_object732), __attribute__ ((unused)) void (*__sepSet__PF_P7tostypePCc__1)(void *__anonymous_object733, const char *__anonymous_object734), __attribute__ ((unused)) const char 
*(*__sepGetTuple__PFPCc_P7tostype__1)(void *__anonymous_object735), __attribute__ ((unused)) void (*__sepSetTuple__PF_P7tostypePCc__1)(void *__anonymous_object736, const char *__anonymous_object737), __attribute__ ((unused)) signed int (*__fail__PFi_P7tostype__1)(void *__anonymous_object738), __attribute__ ((unused)) signed int (*__flush__PFi_P7tostype__1)(void *__anonymous_object739), __attribute__ ((unused)) void (*__open__PF_P7tostypePCcPCc__1)(void *__os__P7tostype_1, const char *__name__PCc_1, const char *__mode__PCc_1), __attribute__ ((unused)) void (*__close__PF_P7tostype__1)(void *__os__P7tostype_1), __attribute__ ((unused)) void *(*__write__PFP7tostype_P7tostypePCcUl__1)(void *__anonymous_object740, const char *__anonymous_object741, unsigned long int __anonymous_object742), __attribute__ ((unused)) signed int (*__fmt__PFi_P7tostypePCc__1)(void *__anonymous_object743, const char *__fmt__PCc_1, ...), __attribute__ ((unused)) void *(*___operator_bitor__PFP7tostype_P7tostype7tParams__1)(void *__anonymous_object744, void *__anonymous_object745), void *__os__P7tostype_1, void *__arg__2tT_1, void *__rest__7tParams_1);
    3232void *___operator_bitor__A0_1_0_0___sepPrt__PFb_Pd0___sepReset__PF_Pd0___sepReset__PF_Pd0b___sepGetCur__PFPCc_Pd0___sepSetCur__PF_Pd0PCc___getNL__PFb_Pd0___setNL__PF_Pd0b___sepOn__PF_Pd0___sepOff__PF_Pd0___sepDisable__PFb_Pd0___sepEnable__PFb_Pd0___sepGet__PFPCc_Pd0___sepSet__PF_Pd0PCc___sepGetTuple__PFPCc_Pd0___sepSetTuple__PF_Pd0PCc___fail__PFi_Pd0___flush__PFi_Pd0___open__PF_Pd0PCcPCc___close__PF_Pd0___write__PFPd0_Pd0PCcUl___fmt__PFi_Pd0PCc__FPd0_Pd0PFPd0_Pd0___1(__attribute__ ((unused)) _Bool (*__sepPrt__PFb_P7tostype__1)(void *__anonymous_object746), __attribute__ ((unused)) void (*__sepReset__PF_P7tostype__1)(void *__anonymous_object747), __attribute__ ((unused)) void (*__sepReset__PF_P7tostypeb__1)(void *__anonymous_object748, _Bool __anonymous_object749), __attribute__ ((unused)) const char *(*__sepGetCur__PFPCc_P7tostype__1)(void *__anonymous_object750), __attribute__ ((unused)) void (*__sepSetCur__PF_P7tostypePCc__1)(void *__anonymous_object751, const char *__anonymous_object752), __attribute__ ((unused)) _Bool (*__getNL__PFb_P7tostype__1)(void *__anonymous_object753), __attribute__ ((unused)) void (*__setNL__PF_P7tostypeb__1)(void *__anonymous_object754, _Bool __anonymous_object755), __attribute__ ((unused)) void (*__sepOn__PF_P7tostype__1)(void *__anonymous_object756), __attribute__ ((unused)) void (*__sepOff__PF_P7tostype__1)(void *__anonymous_object757), __attribute__ ((unused)) _Bool (*__sepDisable__PFb_P7tostype__1)(void *__anonymous_object758), __attribute__ ((unused)) _Bool (*__sepEnable__PFb_P7tostype__1)(void *__anonymous_object759), __attribute__ ((unused)) const char *(*__sepGet__PFPCc_P7tostype__1)(void *__anonymous_object760), __attribute__ ((unused)) void (*__sepSet__PF_P7tostypePCc__1)(void *__anonymous_object761, const char *__anonymous_object762), __attribute__ ((unused)) const char *(*__sepGetTuple__PFPCc_P7tostype__1)(void *__anonymous_object763), __attribute__ ((unused)) void (*__sepSetTuple__PF_P7tostypePCc__1)(void *__anonymous_object764, const char *__anonymous_object765), __attribute__ ((unused)) signed int (*__fail__PFi_P7tostype__1)(void *__anonymous_object766), __attribute__ ((unused)) signed int (*__flush__PFi_P7tostype__1)(void *__anonymous_object767), __attribute__ ((unused)) void (*__open__PF_P7tostypePCcPCc__1)(void *__os__P7tostype_1, const char *__name__PCc_1, const char *__mode__PCc_1), __attribute__ ((unused)) void (*__close__PF_P7tostype__1)(void *__os__P7tostype_1), __attribute__ ((unused)) void *(*__write__PFP7tostype_P7tostypePCcUl__1)(void *__anonymous_object768, const char *__anonymous_object769, unsigned long int __anonymous_object770), __attribute__ ((unused)) signed int (*__fmt__PFi_P7tostypePCc__1)(void *__anonymous_object771, const char *__fmt__PCc_1, ...), void *__anonymous_object772, void *(*__anonymous_object773)(void *__anonymous_object774));
    3333void *__endl__A0_1_0_0___sepPrt__PFb_Pd0___sepReset__PF_Pd0___sepReset__PF_Pd0b___sepGetCur__PFPCc_Pd0___sepSetCur__PF_Pd0PCc___getNL__PFb_Pd0___setNL__PF_Pd0b___sepOn__PF_Pd0___sepOff__PF_Pd0___sepDisable__PFb_Pd0___sepEnable__PFb_Pd0___sepGet__PFPCc_Pd0___sepSet__PF_Pd0PCc___sepGetTuple__PFPCc_Pd0___sepSetTuple__PF_Pd0PCc___fail__PFi_Pd0___flush__PFi_Pd0___open__PF_Pd0PCcPCc___close__PF_Pd0___write__PFPd0_Pd0PCcUl___fmt__PFi_Pd0PCc__FPd0_Pd0__1(__attribute__ ((unused)) _Bool (*__sepPrt__PFb_P7tostype__1)(void *__anonymous_object775), __attribute__ ((unused)) void (*__sepReset__PF_P7tostype__1)(void *__anonymous_object776), __attribute__ ((unused)) void (*__sepReset__PF_P7tostypeb__1)(void *__anonymous_object777, _Bool __anonymous_object778), __attribute__ ((unused)) const char *(*__sepGetCur__PFPCc_P7tostype__1)(void *__anonymous_object779), __attribute__ ((unused)) void (*__sepSetCur__PF_P7tostypePCc__1)(void *__anonymous_object780, const char *__anonymous_object781), __attribute__ ((unused)) _Bool (*__getNL__PFb_P7tostype__1)(void *__anonymous_object782), __attribute__ ((unused)) void (*__setNL__PF_P7tostypeb__1)(void *__anonymous_object783, _Bool __anonymous_object784), __attribute__ ((unused)) void (*__sepOn__PF_P7tostype__1)(void *__anonymous_object785), __attribute__ ((unused)) void (*__sepOff__PF_P7tostype__1)(void *__anonymous_object786), __attribute__ ((unused)) _Bool (*__sepDisable__PFb_P7tostype__1)(void *__anonymous_object787), __attribute__ ((unused)) _Bool (*__sepEnable__PFb_P7tostype__1)(void *__anonymous_object788), __attribute__ ((unused)) const char *(*__sepGet__PFPCc_P7tostype__1)(void *__anonymous_object789), __attribute__ ((unused)) void (*__sepSet__PF_P7tostypePCc__1)(void *__anonymous_object790, const char *__anonymous_object791), __attribute__ ((unused)) const char *(*__sepGetTuple__PFPCc_P7tostype__1)(void *__anonymous_object792), __attribute__ ((unused)) void (*__sepSetTuple__PF_P7tostypePCc__1)(void *__anonymous_object793, const char *__anonymous_object794), __attribute__ ((unused)) signed int (*__fail__PFi_P7tostype__1)(void *__anonymous_object795), __attribute__ ((unused)) signed int (*__flush__PFi_P7tostype__1)(void *__anonymous_object796), __attribute__ ((unused)) void (*__open__PF_P7tostypePCcPCc__1)(void *__os__P7tostype_1, const char *__name__PCc_1, const char *__mode__PCc_1), __attribute__ ((unused)) void (*__close__PF_P7tostype__1)(void *__os__P7tostype_1), __attribute__ ((unused)) void *(*__write__PFP7tostype_P7tostypePCcUl__1)(void *__anonymous_object797, const char *__anonymous_object798, unsigned long int __anonymous_object799), __attribute__ ((unused)) signed int (*__fmt__PFi_P7tostypePCc__1)(void *__anonymous_object800, const char *__fmt__PCc_1, ...), void *__anonymous_object801);
     
    3838void *__sepDisable__A0_1_0_0___sepPrt__PFb_Pd0___sepReset__PF_Pd0___sepReset__PF_Pd0b___sepGetCur__PFPCc_Pd0___sepSetCur__PF_Pd0PCc___getNL__PFb_Pd0___setNL__PF_Pd0b___sepOn__PF_Pd0___sepOff__PF_Pd0___sepDisable__PFb_Pd0___sepEnable__PFb_Pd0___sepGet__PFPCc_Pd0___sepSet__PF_Pd0PCc___sepGetTuple__PFPCc_Pd0___sepSetTuple__PF_Pd0PCc___fail__PFi_Pd0___flush__PFi_Pd0___open__PF_Pd0PCcPCc___close__PF_Pd0___write__PFPd0_Pd0PCcUl___fmt__PFi_Pd0PCc__FPd0_Pd0__1(__attribute__ ((unused)) _Bool (*__sepPrt__PFb_P7tostype__1)(void *__anonymous_object910), __attribute__ ((unused)) void (*__sepReset__PF_P7tostype__1)(void *__anonymous_object911), __attribute__ ((unused)) void (*__sepReset__PF_P7tostypeb__1)(void *__anonymous_object912, _Bool __anonymous_object913), __attribute__ ((unused)) const char *(*__sepGetCur__PFPCc_P7tostype__1)(void *__anonymous_object914), __attribute__ ((unused)) void (*__sepSetCur__PF_P7tostypePCc__1)(void *__anonymous_object915, const char *__anonymous_object916), __attribute__ ((unused)) _Bool (*__getNL__PFb_P7tostype__1)(void *__anonymous_object917), __attribute__ ((unused)) void (*__setNL__PF_P7tostypeb__1)(void *__anonymous_object918, _Bool __anonymous_object919), __attribute__ ((unused)) void (*__sepOn__PF_P7tostype__1)(void *__anonymous_object920), __attribute__ ((unused)) void (*__sepOff__PF_P7tostype__1)(void *__anonymous_object921), __attribute__ ((unused)) _Bool (*__sepDisable__PFb_P7tostype__1)(void *__anonymous_object922), __attribute__ ((unused)) _Bool (*__sepEnable__PFb_P7tostype__1)(void *__anonymous_object923), __attribute__ ((unused)) const char *(*__sepGet__PFPCc_P7tostype__1)(void *__anonymous_object924), __attribute__ ((unused)) void (*__sepSet__PF_P7tostypePCc__1)(void *__anonymous_object925, const char *__anonymous_object926), __attribute__ ((unused)) const char *(*__sepGetTuple__PFPCc_P7tostype__1)(void *__anonymous_object927), __attribute__ ((unused)) void (*__sepSetTuple__PF_P7tostypePCc__1)(void *__anonymous_object928, const char *__anonymous_object929), __attribute__ ((unused)) signed int (*__fail__PFi_P7tostype__1)(void *__anonymous_object930), __attribute__ ((unused)) signed int (*__flush__PFi_P7tostype__1)(void *__anonymous_object931), __attribute__ ((unused)) void (*__open__PF_P7tostypePCcPCc__1)(void *__os__P7tostype_1, const char *__name__PCc_1, const char *__mode__PCc_1), __attribute__ ((unused)) void (*__close__PF_P7tostype__1)(void *__os__P7tostype_1), __attribute__ ((unused)) void *(*__write__PFP7tostype_P7tostypePCcUl__1)(void *__anonymous_object932, const char *__anonymous_object933, unsigned long int __anonymous_object934), __attribute__ ((unused)) signed int (*__fmt__PFi_P7tostypePCc__1)(void *__anonymous_object935, const char *__fmt__PCc_1, ...), void *__anonymous_object936);
    3939void *__sepEnable__A0_1_0_0___sepPrt__PFb_Pd0___sepReset__PF_Pd0___sepReset__PF_Pd0b___sepGetCur__PFPCc_Pd0___sepSetCur__PF_Pd0PCc___getNL__PFb_Pd0___setNL__PF_Pd0b___sepOn__PF_Pd0___sepOff__PF_Pd0___sepDisable__PFb_Pd0___sepEnable__PFb_Pd0___sepGet__PFPCc_Pd0___sepSet__PF_Pd0PCc___sepGetTuple__PFPCc_Pd0___sepSetTuple__PF_Pd0PCc___fail__PFi_Pd0___flush__PFi_Pd0___open__PF_Pd0PCcPCc___close__PF_Pd0___write__PFPd0_Pd0PCcUl___fmt__PFi_Pd0PCc__FPd0_Pd0__1(__attribute__ ((unused)) _Bool (*__sepPrt__PFb_P7tostype__1)(void *__anonymous_object937), __attribute__ ((unused)) void (*__sepReset__PF_P7tostype__1)(void *__anonymous_object938), __attribute__ ((unused)) void (*__sepReset__PF_P7tostypeb__1)(void *__anonymous_object939, _Bool __anonymous_object940), __attribute__ ((unused)) const char *(*__sepGetCur__PFPCc_P7tostype__1)(void *__anonymous_object941), __attribute__ ((unused)) void (*__sepSetCur__PF_P7tostypePCc__1)(void *__anonymous_object942, const char *__anonymous_object943), __attribute__ ((unused)) _Bool (*__getNL__PFb_P7tostype__1)(void *__anonymous_object944), __attribute__ ((unused)) void (*__setNL__PF_P7tostypeb__1)(void *__anonymous_object945, _Bool __anonymous_object946), __attribute__ ((unused)) void (*__sepOn__PF_P7tostype__1)(void *__anonymous_object947), __attribute__ ((unused)) void (*__sepOff__PF_P7tostype__1)(void *__anonymous_object948), __attribute__ ((unused)) _Bool (*__sepDisable__PFb_P7tostype__1)(void *__anonymous_object949), __attribute__ ((unused)) _Bool (*__sepEnable__PFb_P7tostype__1)(void *__anonymous_object950), __attribute__ ((unused)) const char *(*__sepGet__PFPCc_P7tostype__1)(void *__anonymous_object951), __attribute__ ((unused)) void (*__sepSet__PF_P7tostypePCc__1)(void *__anonymous_object952, const char *__anonymous_object953), __attribute__ ((unused)) const char *(*__sepGetTuple__PFPCc_P7tostype__1)(void *__anonymous_object954), __attribute__ ((unused)) void (*__sepSetTuple__PF_P7tostypePCc__1)(void *__anonymous_object955, const char *__anonymous_object956), __attribute__ ((unused)) signed int (*__fail__PFi_P7tostype__1)(void *__anonymous_object957), __attribute__ ((unused)) signed int (*__flush__PFi_P7tostype__1)(void *__anonymous_object958), __attribute__ ((unused)) void (*__open__PF_P7tostypePCcPCc__1)(void *__os__P7tostype_1, const char *__name__PCc_1, const char *__mode__PCc_1), __attribute__ ((unused)) void (*__close__PF_P7tostype__1)(void *__os__P7tostype_1), __attribute__ ((unused)) void *(*__write__PFP7tostype_P7tostypePCcUl__1)(void *__anonymous_object959, const char *__anonymous_object960, unsigned long int __anonymous_object961), __attribute__ ((unused)) signed int (*__fmt__PFi_P7tostypePCc__1)(void *__anonymous_object962, const char *__fmt__PCc_1, ...), void *__anonymous_object963);
    40 void __write__A2_1_0_0____operator_assign__PFt1_Rt1t1____constructor__PF_Rt1____constructor__PF_Rt1t1____destructor__PF_Rt1____operator_bitor__PFPd0_Pd0t1___sepPrt__PFb_Pd0___sepReset__PF_Pd0___sepReset__PF_Pd0b___sepGetCur__PFPCc_Pd0___sepSetCur__PF_Pd0PCc___getNL__PFb_Pd0___setNL__PF_Pd0b___sepOn__PF_Pd0___sepOff__PF_Pd0___sepDisable__PFb_Pd0___sepEnable__PFb_Pd0___sepGet__PFPCc_Pd0___sepSet__PF_Pd0PCc___sepGetTuple__PFPCc_Pd0___sepSetTuple__PF_Pd0PCc___fail__PFi_Pd0___flush__PFi_Pd0___open__PF_Pd0PCcPCc___close__PF_Pd0___write__PFPd0_Pd0PCcUl___fmt__PFi_Pd0PCc____operator_assign__PFt2_Rt2t2____constructor__PF_Rt2____constructor__PF_Rt2t2____destructor__PF_Rt2____operator_preincr__PFt2_Rt2____operator_predecr__PFt2_Rt2____operator_equal__PFi_t2t2____operator_notequal__PFi_t2t2____operator_deref__PFRt1_t2__F_t2t2Pd0__1(__attribute__ ((unused)) void *(*_adapterFP9telt_type_14titerator_type_M_P)(void (*__anonymous_object964)(), void *__anonymous_object965), __attribute__ ((unused)) signed int (*_adapterFi_14titerator_type14titerator_type_M_PP)(void (*__anonymous_object966)(), void *__anonymous_object967, void *__anonymous_object968), __attribute__ ((unused)) void (*_adapterF14titerator_type_P14titerator_type_P_M)(void (*__anonymous_object969)(), __attribute__ ((unused)) void *___retval__operator_preincr__14titerator_type_1, void *__anonymous_object970), __attribute__ ((unused)) void (*_adapterF_P14titerator_type14titerator_type__MP)(void (*__anonymous_object971)(), void *__anonymous_object972, void *__anonymous_object973), __attribute__ ((unused)) void (*_adapterF14titerator_type_P14titerator_type14titerator_type_P_MP)(void (*__anonymous_object974)(), __attribute__ ((unused)) void *___retval__operator_assign__14titerator_type_1, void *__anonymous_object975, void *__anonymous_object976), __attribute__ ((unused)) void *(*_adapterFP7tostype_P7tostype9telt_type_M_MP)(void (*__anonymous_object977)(), void *__anonymous_object978, void *__anonymous_object979), __attribute__ ((unused)) void (*_adapterF_P9telt_type9telt_type__MP)(void (*__anonymous_object980)(), void *__anonymous_object981, void *__anonymous_object982), __attribute__ ((unused)) void (*_adapterF9telt_type_P9telt_type9telt_type_P_MP)(void (*__anonymous_object983)(), __attribute__ ((unused)) void *___retval__operator_assign__9telt_type_1, void *__anonymous_object984, void *__anonymous_object985), __attribute__ ((unused)) unsigned long int _sizeof_9telt_type, __attribute__ ((unused)) unsigned long int _alignof_9telt_type, __attribute__ ((unused)) unsigned long int _sizeof_14titerator_type, __attribute__ ((unused)) unsigned long int _alignof_14titerator_type, __attribute__ ((unused)) void *(*___operator_assign__PF9telt_type_R9telt_type9telt_type__1)(void *__anonymous_object986, void *__anonymous_object987), __attribute__ ((unused)) void (*___constructor__PF_R9telt_type__1)(void *__anonymous_object988), __attribute__ ((unused)) void (*___constructor__PF_R9telt_type9telt_type__1)(void *__anonymous_object989, void *__anonymous_object990), __attribute__ ((unused)) void (*___destructor__PF_R9telt_type__1)(void *__anonymous_object991), __attribute__ ((unused)) void *(*___operator_bitor__PFP7tostype_P7tostype9telt_type__1)(void *__anonymous_object992, void *__anonymous_object993), __attribute__ ((unused)) _Bool (*__sepPrt__PFb_P7tostype__1)(void *__anonymous_object994), __attribute__ ((unused)) void (*__sepReset__PF_P7tostype__1)(void *__anonymous_object995), __attribute__ ((unused)) void (*__sepReset__PF_P7tostypeb__1)(void 
*__anonymous_object996, _Bool __anonymous_object997), __attribute__ ((unused)) const char *(*__sepGetCur__PFPCc_P7tostype__1)(void *__anonymous_object998), __attribute__ ((unused)) void (*__sepSetCur__PF_P7tostypePCc__1)(void *__anonymous_object999, const char *__anonymous_object1000), __attribute__ ((unused)) _Bool (*__getNL__PFb_P7tostype__1)(void *__anonymous_object1001), __attribute__ ((unused)) void (*__setNL__PF_P7tostypeb__1)(void *__anonymous_object1002, _Bool __anonymous_object1003), __attribute__ ((unused)) void (*__sepOn__PF_P7tostype__1)(void *__anonymous_object1004), __attribute__ ((unused)) void (*__sepOff__PF_P7tostype__1)(void *__anonymous_object1005), __attribute__ ((unused)) _Bool (*__sepDisable__PFb_P7tostype__1)(void *__anonymous_object1006), __attribute__ ((unused)) _Bool (*__sepEnable__PFb_P7tostype__1)(void *__anonymous_object1007), __attribute__ ((unused)) const char *(*__sepGet__PFPCc_P7tostype__1)(void *__anonymous_object1008), __attribute__ ((unused)) void (*__sepSet__PF_P7tostypePCc__1)(void *__anonymous_object1009, const char *__anonymous_object1010), __attribute__ ((unused)) const char *(*__sepGetTuple__PFPCc_P7tostype__1)(void *__anonymous_object1011), __attribute__ ((unused)) void (*__sepSetTuple__PF_P7tostypePCc__1)(void *__anonymous_object1012, const char *__anonymous_object1013), __attribute__ ((unused)) signed int (*__fail__PFi_P7tostype__1)(void *__anonymous_object1014), __attribute__ ((unused)) signed int (*__flush__PFi_P7tostype__1)(void *__anonymous_object1015), __attribute__ ((unused)) void (*__open__PF_P7tostypePCcPCc__1)(void *__os__P7tostype_1, const char *__name__PCc_1, const char *__mode__PCc_1), __attribute__ ((unused)) void (*__close__PF_P7tostype__1)(void *__os__P7tostype_1), __attribute__ ((unused)) void *(*__write__PFP7tostype_P7tostypePCcUl__1)(void *__anonymous_object1016, const char *__anonymous_object1017, unsigned long int __anonymous_object1018), __attribute__ ((unused)) signed int (*__fmt__PFi_P7tostypePCc__1)(void *__anonymous_object1019, const char *__fmt__PCc_1, ...), __attribute__ ((unused)) void *(*___operator_assign__PF14titerator_type_R14titerator_type14titerator_type__1)(void *__anonymous_object1020, void *__anonymous_object1021), __attribute__ ((unused)) void (*___constructor__PF_R14titerator_type__1)(void *__anonymous_object1022), __attribute__ ((unused)) void (*___constructor__PF_R14titerator_type14titerator_type__1)(void *__anonymous_object1023, void *__anonymous_object1024), __attribute__ ((unused)) void (*___destructor__PF_R14titerator_type__1)(void *__anonymous_object1025), __attribute__ ((unused)) void *(*___operator_preincr__PF14titerator_type_R14titerator_type__1)(void *__anonymous_object1026), __attribute__ ((unused)) void *(*___operator_predecr__PF14titerator_type_R14titerator_type__1)(void *__anonymous_object1027), __attribute__ ((unused)) signed int (*___operator_equal__PFi_14titerator_type14titerator_type__1)(void *__anonymous_object1028, void *__anonymous_object1029), __attribute__ ((unused)) signed int (*___operator_notequal__PFi_14titerator_type14titerator_type__1)(void *__anonymous_object1030, void *__anonymous_object1031), __attribute__ ((unused)) void *(*___operator_deref__PFR9telt_type_14titerator_type__1)(void *__anonymous_object1032), void *__begin__14titerator_type_1, void *__end__14titerator_type_1, void *__os__P7tostype_1);
    41 void __write_reverse__A2_1_0_0____operator_assign__PFt1_Rt1t1____constructor__PF_Rt1____constructor__PF_Rt1t1____destructor__PF_Rt1____operator_bitor__PFPd0_Pd0t1___sepPrt__PFb_Pd0___sepReset__PF_Pd0___sepReset__PF_Pd0b___sepGetCur__PFPCc_Pd0___sepSetCur__PF_Pd0PCc___getNL__PFb_Pd0___setNL__PF_Pd0b___sepOn__PF_Pd0___sepOff__PF_Pd0___sepDisable__PFb_Pd0___sepEnable__PFb_Pd0___sepGet__PFPCc_Pd0___sepSet__PF_Pd0PCc___sepGetTuple__PFPCc_Pd0___sepSetTuple__PF_Pd0PCc___fail__PFi_Pd0___flush__PFi_Pd0___open__PF_Pd0PCcPCc___close__PF_Pd0___write__PFPd0_Pd0PCcUl___fmt__PFi_Pd0PCc____operator_assign__PFt2_Rt2t2____constructor__PF_Rt2____constructor__PF_Rt2t2____destructor__PF_Rt2____operator_preincr__PFt2_Rt2____operator_predecr__PFt2_Rt2____operator_equal__PFi_t2t2____operator_notequal__PFi_t2t2____operator_deref__PFRt1_t2__F_t2t2Pd0__1(__attribute__ ((unused)) void *(*_adapterFP9telt_type_14titerator_type_M_P)(void (*__anonymous_object1033)(), void *__anonymous_object1034), __attribute__ ((unused)) signed int (*_adapterFi_14titerator_type14titerator_type_M_PP)(void (*__anonymous_object1035)(), void *__anonymous_object1036, void *__anonymous_object1037), __attribute__ ((unused)) void (*_adapterF14titerator_type_P14titerator_type_P_M)(void (*__anonymous_object1038)(), __attribute__ ((unused)) void *___retval__operator_preincr__14titerator_type_1, void *__anonymous_object1039), __attribute__ ((unused)) void (*_adapterF_P14titerator_type14titerator_type__MP)(void (*__anonymous_object1040)(), void *__anonymous_object1041, void *__anonymous_object1042), __attribute__ ((unused)) void (*_adapterF14titerator_type_P14titerator_type14titerator_type_P_MP)(void (*__anonymous_object1043)(), __attribute__ ((unused)) void *___retval__operator_assign__14titerator_type_1, void *__anonymous_object1044, void *__anonymous_object1045), __attribute__ ((unused)) void *(*_adapterFP7tostype_P7tostype9telt_type_M_MP)(void (*__anonymous_object1046)(), void *__anonymous_object1047, void *__anonymous_object1048), __attribute__ ((unused)) void (*_adapterF_P9telt_type9telt_type__MP)(void (*__anonymous_object1049)(), void *__anonymous_object1050, void *__anonymous_object1051), __attribute__ ((unused)) void (*_adapterF9telt_type_P9telt_type9telt_type_P_MP)(void (*__anonymous_object1052)(), __attribute__ ((unused)) void *___retval__operator_assign__9telt_type_1, void *__anonymous_object1053, void *__anonymous_object1054), __attribute__ ((unused)) unsigned long int _sizeof_9telt_type, __attribute__ ((unused)) unsigned long int _alignof_9telt_type, __attribute__ ((unused)) unsigned long int _sizeof_14titerator_type, __attribute__ ((unused)) unsigned long int _alignof_14titerator_type, __attribute__ ((unused)) void *(*___operator_assign__PF9telt_type_R9telt_type9telt_type__1)(void *__anonymous_object1055, void *__anonymous_object1056), __attribute__ ((unused)) void (*___constructor__PF_R9telt_type__1)(void *__anonymous_object1057), __attribute__ ((unused)) void (*___constructor__PF_R9telt_type9telt_type__1)(void *__anonymous_object1058, void *__anonymous_object1059), __attribute__ ((unused)) void (*___destructor__PF_R9telt_type__1)(void *__anonymous_object1060), __attribute__ ((unused)) void *(*___operator_bitor__PFP7tostype_P7tostype9telt_type__1)(void *__anonymous_object1061, void *__anonymous_object1062), __attribute__ ((unused)) _Bool (*__sepPrt__PFb_P7tostype__1)(void *__anonymous_object1063), __attribute__ ((unused)) void (*__sepReset__PF_P7tostype__1)(void *__anonymous_object1064), __attribute__ ((unused)) void 
(*__sepReset__PF_P7tostypeb__1)(void *__anonymous_object1065, _Bool __anonymous_object1066), __attribute__ ((unused)) const char *(*__sepGetCur__PFPCc_P7tostype__1)(void *__anonymous_object1067), __attribute__ ((unused)) void (*__sepSetCur__PF_P7tostypePCc__1)(void *__anonymous_object1068, const char *__anonymous_object1069), __attribute__ ((unused)) _Bool (*__getNL__PFb_P7tostype__1)(void *__anonymous_object1070), __attribute__ ((unused)) void (*__setNL__PF_P7tostypeb__1)(void *__anonymous_object1071, _Bool __anonymous_object1072), __attribute__ ((unused)) void (*__sepOn__PF_P7tostype__1)(void *__anonymous_object1073), __attribute__ ((unused)) void (*__sepOff__PF_P7tostype__1)(void *__anonymous_object1074), __attribute__ ((unused)) _Bool (*__sepDisable__PFb_P7tostype__1)(void *__anonymous_object1075), __attribute__ ((unused)) _Bool (*__sepEnable__PFb_P7tostype__1)(void *__anonymous_object1076), __attribute__ ((unused)) const char *(*__sepGet__PFPCc_P7tostype__1)(void *__anonymous_object1077), __attribute__ ((unused)) void (*__sepSet__PF_P7tostypePCc__1)(void *__anonymous_object1078, const char *__anonymous_object1079), __attribute__ ((unused)) const char *(*__sepGetTuple__PFPCc_P7tostype__1)(void *__anonymous_object1080), __attribute__ ((unused)) void (*__sepSetTuple__PF_P7tostypePCc__1)(void *__anonymous_object1081, const char *__anonymous_object1082), __attribute__ ((unused)) signed int (*__fail__PFi_P7tostype__1)(void *__anonymous_object1083), __attribute__ ((unused)) signed int (*__flush__PFi_P7tostype__1)(void *__anonymous_object1084), __attribute__ ((unused)) void (*__open__PF_P7tostypePCcPCc__1)(void *__os__P7tostype_1, const char *__name__PCc_1, const char *__mode__PCc_1), __attribute__ ((unused)) void (*__close__PF_P7tostype__1)(void *__os__P7tostype_1), __attribute__ ((unused)) void *(*__write__PFP7tostype_P7tostypePCcUl__1)(void *__anonymous_object1085, const char *__anonymous_object1086, unsigned long int __anonymous_object1087), __attribute__ ((unused)) signed int (*__fmt__PFi_P7tostypePCc__1)(void *__anonymous_object1088, const char *__fmt__PCc_1, ...), __attribute__ ((unused)) void *(*___operator_assign__PF14titerator_type_R14titerator_type14titerator_type__1)(void *__anonymous_object1089, void *__anonymous_object1090), __attribute__ ((unused)) void (*___constructor__PF_R14titerator_type__1)(void *__anonymous_object1091), __attribute__ ((unused)) void (*___constructor__PF_R14titerator_type14titerator_type__1)(void *__anonymous_object1092, void *__anonymous_object1093), __attribute__ ((unused)) void (*___destructor__PF_R14titerator_type__1)(void *__anonymous_object1094), __attribute__ ((unused)) void *(*___operator_preincr__PF14titerator_type_R14titerator_type__1)(void *__anonymous_object1095), __attribute__ ((unused)) void *(*___operator_predecr__PF14titerator_type_R14titerator_type__1)(void *__anonymous_object1096), __attribute__ ((unused)) signed int (*___operator_equal__PFi_14titerator_type14titerator_type__1)(void *__anonymous_object1097, void *__anonymous_object1098), __attribute__ ((unused)) signed int (*___operator_notequal__PFi_14titerator_type14titerator_type__1)(void *__anonymous_object1099, void *__anonymous_object1100), __attribute__ ((unused)) void *(*___operator_deref__PFR9telt_type_14titerator_type__1)(void *__anonymous_object1101), void *__begin__14titerator_type_1, void *__end__14titerator_type_1, void *__os__P7tostype_1);
     40void __write__A0_3_0_0____operator_assign__PFd1_Rd1d1____constructor__PF_Rd1____constructor__PF_Rd1d1____destructor__PF_Rd1____operator_bitor__PFPd0_Pd0d1___sepPrt__PFb_Pd0___sepReset__PF_Pd0___sepReset__PF_Pd0b___sepGetCur__PFPCc_Pd0___sepSetCur__PF_Pd0PCc___getNL__PFb_Pd0___setNL__PF_Pd0b___sepOn__PF_Pd0___sepOff__PF_Pd0___sepDisable__PFb_Pd0___sepEnable__PFb_Pd0___sepGet__PFPCc_Pd0___sepSet__PF_Pd0PCc___sepGetTuple__PFPCc_Pd0___sepSetTuple__PF_Pd0PCc___fail__PFi_Pd0___flush__PFi_Pd0___open__PF_Pd0PCcPCc___close__PF_Pd0___write__PFPd0_Pd0PCcUl___fmt__PFi_Pd0PCc____operator_assign__PFd2_Rd2d2____constructor__PF_Rd2____constructor__PF_Rd2d2____destructor__PF_Rd2____operator_preincr__PFd2_Rd2____operator_predecr__PFd2_Rd2____operator_equal__PFi_d2d2____operator_notequal__PFi_d2d2____operator_deref__PFRd1_d2__F_d2d2Pd0__1(__attribute__ ((unused)) void *(*_adapterFP9telt_type_14titerator_type_M_P)(void (*__anonymous_object964)(), void *__anonymous_object965), __attribute__ ((unused)) signed int (*_adapterFi_14titerator_type14titerator_type_M_PP)(void (*__anonymous_object966)(), void *__anonymous_object967, void *__anonymous_object968), __attribute__ ((unused)) void (*_adapterF14titerator_type_P14titerator_type_P_M)(void (*__anonymous_object969)(), __attribute__ ((unused)) void *___retval__operator_preincr__14titerator_type_1, void *__anonymous_object970), __attribute__ ((unused)) void (*_adapterF_P14titerator_type14titerator_type__MP)(void (*__anonymous_object971)(), void *__anonymous_object972, void *__anonymous_object973), __attribute__ ((unused)) void (*_adapterF14titerator_type_P14titerator_type14titerator_type_P_MP)(void (*__anonymous_object974)(), __attribute__ ((unused)) void *___retval__operator_assign__14titerator_type_1, void *__anonymous_object975, void *__anonymous_object976), __attribute__ ((unused)) void *(*_adapterFP7tostype_P7tostype9telt_type_M_MP)(void (*__anonymous_object977)(), void *__anonymous_object978, void *__anonymous_object979), __attribute__ ((unused)) void (*_adapterF_P9telt_type9telt_type__MP)(void (*__anonymous_object980)(), void *__anonymous_object981, void *__anonymous_object982), __attribute__ ((unused)) void (*_adapterF9telt_type_P9telt_type9telt_type_P_MP)(void (*__anonymous_object983)(), __attribute__ ((unused)) void *___retval__operator_assign__9telt_type_1, void *__anonymous_object984, void *__anonymous_object985), __attribute__ ((unused)) unsigned long int _sizeof_9telt_type, __attribute__ ((unused)) unsigned long int _alignof_9telt_type, __attribute__ ((unused)) unsigned long int _sizeof_14titerator_type, __attribute__ ((unused)) unsigned long int _alignof_14titerator_type, __attribute__ ((unused)) void *(*___operator_assign__PF9telt_type_R9telt_type9telt_type__1)(void *__anonymous_object986, void *__anonymous_object987), __attribute__ ((unused)) void (*___constructor__PF_R9telt_type__1)(void *__anonymous_object988), __attribute__ ((unused)) void (*___constructor__PF_R9telt_type9telt_type__1)(void *__anonymous_object989, void *__anonymous_object990), __attribute__ ((unused)) void (*___destructor__PF_R9telt_type__1)(void *__anonymous_object991), __attribute__ ((unused)) void *(*___operator_bitor__PFP7tostype_P7tostype9telt_type__1)(void *__anonymous_object992, void *__anonymous_object993), __attribute__ ((unused)) _Bool (*__sepPrt__PFb_P7tostype__1)(void *__anonymous_object994), __attribute__ ((unused)) void (*__sepReset__PF_P7tostype__1)(void *__anonymous_object995), __attribute__ ((unused)) void (*__sepReset__PF_P7tostypeb__1)(void 
*__anonymous_object996, _Bool __anonymous_object997), __attribute__ ((unused)) const char *(*__sepGetCur__PFPCc_P7tostype__1)(void *__anonymous_object998), __attribute__ ((unused)) void (*__sepSetCur__PF_P7tostypePCc__1)(void *__anonymous_object999, const char *__anonymous_object1000), __attribute__ ((unused)) _Bool (*__getNL__PFb_P7tostype__1)(void *__anonymous_object1001), __attribute__ ((unused)) void (*__setNL__PF_P7tostypeb__1)(void *__anonymous_object1002, _Bool __anonymous_object1003), __attribute__ ((unused)) void (*__sepOn__PF_P7tostype__1)(void *__anonymous_object1004), __attribute__ ((unused)) void (*__sepOff__PF_P7tostype__1)(void *__anonymous_object1005), __attribute__ ((unused)) _Bool (*__sepDisable__PFb_P7tostype__1)(void *__anonymous_object1006), __attribute__ ((unused)) _Bool (*__sepEnable__PFb_P7tostype__1)(void *__anonymous_object1007), __attribute__ ((unused)) const char *(*__sepGet__PFPCc_P7tostype__1)(void *__anonymous_object1008), __attribute__ ((unused)) void (*__sepSet__PF_P7tostypePCc__1)(void *__anonymous_object1009, const char *__anonymous_object1010), __attribute__ ((unused)) const char *(*__sepGetTuple__PFPCc_P7tostype__1)(void *__anonymous_object1011), __attribute__ ((unused)) void (*__sepSetTuple__PF_P7tostypePCc__1)(void *__anonymous_object1012, const char *__anonymous_object1013), __attribute__ ((unused)) signed int (*__fail__PFi_P7tostype__1)(void *__anonymous_object1014), __attribute__ ((unused)) signed int (*__flush__PFi_P7tostype__1)(void *__anonymous_object1015), __attribute__ ((unused)) void (*__open__PF_P7tostypePCcPCc__1)(void *__os__P7tostype_1, const char *__name__PCc_1, const char *__mode__PCc_1), __attribute__ ((unused)) void (*__close__PF_P7tostype__1)(void *__os__P7tostype_1), __attribute__ ((unused)) void *(*__write__PFP7tostype_P7tostypePCcUl__1)(void *__anonymous_object1016, const char *__anonymous_object1017, unsigned long int __anonymous_object1018), __attribute__ ((unused)) signed int (*__fmt__PFi_P7tostypePCc__1)(void *__anonymous_object1019, const char *__fmt__PCc_1, ...), __attribute__ ((unused)) void *(*___operator_assign__PF14titerator_type_R14titerator_type14titerator_type__1)(void *__anonymous_object1020, void *__anonymous_object1021), __attribute__ ((unused)) void (*___constructor__PF_R14titerator_type__1)(void *__anonymous_object1022), __attribute__ ((unused)) void (*___constructor__PF_R14titerator_type14titerator_type__1)(void *__anonymous_object1023, void *__anonymous_object1024), __attribute__ ((unused)) void (*___destructor__PF_R14titerator_type__1)(void *__anonymous_object1025), __attribute__ ((unused)) void *(*___operator_preincr__PF14titerator_type_R14titerator_type__1)(void *__anonymous_object1026), __attribute__ ((unused)) void *(*___operator_predecr__PF14titerator_type_R14titerator_type__1)(void *__anonymous_object1027), __attribute__ ((unused)) signed int (*___operator_equal__PFi_14titerator_type14titerator_type__1)(void *__anonymous_object1028, void *__anonymous_object1029), __attribute__ ((unused)) signed int (*___operator_notequal__PFi_14titerator_type14titerator_type__1)(void *__anonymous_object1030, void *__anonymous_object1031), __attribute__ ((unused)) void *(*___operator_deref__PFR9telt_type_14titerator_type__1)(void *__anonymous_object1032), void *__begin__14titerator_type_1, void *__end__14titerator_type_1, void *__os__P7tostype_1);
     41void __write_reverse__A0_3_0_0____operator_assign__PFd1_Rd1d1____constructor__PF_Rd1____constructor__PF_Rd1d1____destructor__PF_Rd1____operator_bitor__PFPd0_Pd0d1___sepPrt__PFb_Pd0___sepReset__PF_Pd0___sepReset__PF_Pd0b___sepGetCur__PFPCc_Pd0___sepSetCur__PF_Pd0PCc___getNL__PFb_Pd0___setNL__PF_Pd0b___sepOn__PF_Pd0___sepOff__PF_Pd0___sepDisable__PFb_Pd0___sepEnable__PFb_Pd0___sepGet__PFPCc_Pd0___sepSet__PF_Pd0PCc___sepGetTuple__PFPCc_Pd0___sepSetTuple__PF_Pd0PCc___fail__PFi_Pd0___flush__PFi_Pd0___open__PF_Pd0PCcPCc___close__PF_Pd0___write__PFPd0_Pd0PCcUl___fmt__PFi_Pd0PCc____operator_assign__PFd2_Rd2d2____constructor__PF_Rd2____constructor__PF_Rd2d2____destructor__PF_Rd2____operator_preincr__PFd2_Rd2____operator_predecr__PFd2_Rd2____operator_equal__PFi_d2d2____operator_notequal__PFi_d2d2____operator_deref__PFRd1_d2__F_d2d2Pd0__1(__attribute__ ((unused)) void *(*_adapterFP9telt_type_14titerator_type_M_P)(void (*__anonymous_object1033)(), void *__anonymous_object1034), __attribute__ ((unused)) signed int (*_adapterFi_14titerator_type14titerator_type_M_PP)(void (*__anonymous_object1035)(), void *__anonymous_object1036, void *__anonymous_object1037), __attribute__ ((unused)) void (*_adapterF14titerator_type_P14titerator_type_P_M)(void (*__anonymous_object1038)(), __attribute__ ((unused)) void *___retval__operator_preincr__14titerator_type_1, void *__anonymous_object1039), __attribute__ ((unused)) void (*_adapterF_P14titerator_type14titerator_type__MP)(void (*__anonymous_object1040)(), void *__anonymous_object1041, void *__anonymous_object1042), __attribute__ ((unused)) void (*_adapterF14titerator_type_P14titerator_type14titerator_type_P_MP)(void (*__anonymous_object1043)(), __attribute__ ((unused)) void *___retval__operator_assign__14titerator_type_1, void *__anonymous_object1044, void *__anonymous_object1045), __attribute__ ((unused)) void *(*_adapterFP7tostype_P7tostype9telt_type_M_MP)(void (*__anonymous_object1046)(), void *__anonymous_object1047, void *__anonymous_object1048), __attribute__ ((unused)) void (*_adapterF_P9telt_type9telt_type__MP)(void (*__anonymous_object1049)(), void *__anonymous_object1050, void *__anonymous_object1051), __attribute__ ((unused)) void (*_adapterF9telt_type_P9telt_type9telt_type_P_MP)(void (*__anonymous_object1052)(), __attribute__ ((unused)) void *___retval__operator_assign__9telt_type_1, void *__anonymous_object1053, void *__anonymous_object1054), __attribute__ ((unused)) unsigned long int _sizeof_9telt_type, __attribute__ ((unused)) unsigned long int _alignof_9telt_type, __attribute__ ((unused)) unsigned long int _sizeof_14titerator_type, __attribute__ ((unused)) unsigned long int _alignof_14titerator_type, __attribute__ ((unused)) void *(*___operator_assign__PF9telt_type_R9telt_type9telt_type__1)(void *__anonymous_object1055, void *__anonymous_object1056), __attribute__ ((unused)) void (*___constructor__PF_R9telt_type__1)(void *__anonymous_object1057), __attribute__ ((unused)) void (*___constructor__PF_R9telt_type9telt_type__1)(void *__anonymous_object1058, void *__anonymous_object1059), __attribute__ ((unused)) void (*___destructor__PF_R9telt_type__1)(void *__anonymous_object1060), __attribute__ ((unused)) void *(*___operator_bitor__PFP7tostype_P7tostype9telt_type__1)(void *__anonymous_object1061, void *__anonymous_object1062), __attribute__ ((unused)) _Bool (*__sepPrt__PFb_P7tostype__1)(void *__anonymous_object1063), __attribute__ ((unused)) void (*__sepReset__PF_P7tostype__1)(void *__anonymous_object1064), __attribute__ ((unused)) void 
(*__sepReset__PF_P7tostypeb__1)(void *__anonymous_object1065, _Bool __anonymous_object1066), __attribute__ ((unused)) const char *(*__sepGetCur__PFPCc_P7tostype__1)(void *__anonymous_object1067), __attribute__ ((unused)) void (*__sepSetCur__PF_P7tostypePCc__1)(void *__anonymous_object1068, const char *__anonymous_object1069), __attribute__ ((unused)) _Bool (*__getNL__PFb_P7tostype__1)(void *__anonymous_object1070), __attribute__ ((unused)) void (*__setNL__PF_P7tostypeb__1)(void *__anonymous_object1071, _Bool __anonymous_object1072), __attribute__ ((unused)) void (*__sepOn__PF_P7tostype__1)(void *__anonymous_object1073), __attribute__ ((unused)) void (*__sepOff__PF_P7tostype__1)(void *__anonymous_object1074), __attribute__ ((unused)) _Bool (*__sepDisable__PFb_P7tostype__1)(void *__anonymous_object1075), __attribute__ ((unused)) _Bool (*__sepEnable__PFb_P7tostype__1)(void *__anonymous_object1076), __attribute__ ((unused)) const char *(*__sepGet__PFPCc_P7tostype__1)(void *__anonymous_object1077), __attribute__ ((unused)) void (*__sepSet__PF_P7tostypePCc__1)(void *__anonymous_object1078, const char *__anonymous_object1079), __attribute__ ((unused)) const char *(*__sepGetTuple__PFPCc_P7tostype__1)(void *__anonymous_object1080), __attribute__ ((unused)) void (*__sepSetTuple__PF_P7tostypePCc__1)(void *__anonymous_object1081, const char *__anonymous_object1082), __attribute__ ((unused)) signed int (*__fail__PFi_P7tostype__1)(void *__anonymous_object1083), __attribute__ ((unused)) signed int (*__flush__PFi_P7tostype__1)(void *__anonymous_object1084), __attribute__ ((unused)) void (*__open__PF_P7tostypePCcPCc__1)(void *__os__P7tostype_1, const char *__name__PCc_1, const char *__mode__PCc_1), __attribute__ ((unused)) void (*__close__PF_P7tostype__1)(void *__os__P7tostype_1), __attribute__ ((unused)) void *(*__write__PFP7tostype_P7tostypePCcUl__1)(void *__anonymous_object1085, const char *__anonymous_object1086, unsigned long int __anonymous_object1087), __attribute__ ((unused)) signed int (*__fmt__PFi_P7tostypePCc__1)(void *__anonymous_object1088, const char *__fmt__PCc_1, ...), __attribute__ ((unused)) void *(*___operator_assign__PF14titerator_type_R14titerator_type14titerator_type__1)(void *__anonymous_object1089, void *__anonymous_object1090), __attribute__ ((unused)) void (*___constructor__PF_R14titerator_type__1)(void *__anonymous_object1091), __attribute__ ((unused)) void (*___constructor__PF_R14titerator_type14titerator_type__1)(void *__anonymous_object1092, void *__anonymous_object1093), __attribute__ ((unused)) void (*___destructor__PF_R14titerator_type__1)(void *__anonymous_object1094), __attribute__ ((unused)) void *(*___operator_preincr__PF14titerator_type_R14titerator_type__1)(void *__anonymous_object1095), __attribute__ ((unused)) void *(*___operator_predecr__PF14titerator_type_R14titerator_type__1)(void *__anonymous_object1096), __attribute__ ((unused)) signed int (*___operator_equal__PFi_14titerator_type14titerator_type__1)(void *__anonymous_object1097, void *__anonymous_object1098), __attribute__ ((unused)) signed int (*___operator_notequal__PFi_14titerator_type14titerator_type__1)(void *__anonymous_object1099, void *__anonymous_object1100), __attribute__ ((unused)) void *(*___operator_deref__PFR9telt_type_14titerator_type__1)(void *__anonymous_object1101), void *__begin__14titerator_type_1, void *__end__14titerator_type_1, void *__os__P7tostype_1);
    4242void *___operator_bitor__A0_1_0_0___fail__PFi_Pd0___eof__PFi_Pd0___open__PF_Pd0PCcPCc___close__PF_Pd0___read__PFPd0_Pd0PcUl___ungetc__PFPd0_Pd0c___fmt__PFi_Pd0PCc__FPd0_Pd0Rc__1(__attribute__ ((unused)) signed int (*__fail__PFi_P7tistype__1)(void *__anonymous_object1102), __attribute__ ((unused)) signed int (*__eof__PFi_P7tistype__1)(void *__anonymous_object1103), __attribute__ ((unused)) void (*__open__PF_P7tistypePCcPCc__1)(void *__is__P7tistype_1, const char *__name__PCc_1, const char *__mode__PCc_1), __attribute__ ((unused)) void (*__close__PF_P7tistype__1)(void *__is__P7tistype_1), __attribute__ ((unused)) void *(*__read__PFP7tistype_P7tistypePcUl__1)(void *__anonymous_object1104, char *__anonymous_object1105, unsigned long int __anonymous_object1106), __attribute__ ((unused)) void *(*__ungetc__PFP7tistype_P7tistypec__1)(void *__anonymous_object1107, char __anonymous_object1108), __attribute__ ((unused)) signed int (*__fmt__PFi_P7tistypePCc__1)(void *__anonymous_object1109, const char *__fmt__PCc_1, ...), void *__anonymous_object1110, char *__anonymous_object1111);
    4343void *___operator_bitor__A0_1_0_0___fail__PFi_Pd0___eof__PFi_Pd0___open__PF_Pd0PCcPCc___close__PF_Pd0___read__PFPd0_Pd0PcUl___ungetc__PFPd0_Pd0c___fmt__PFi_Pd0PCc__FPd0_Pd0RSc__1(__attribute__ ((unused)) signed int (*__fail__PFi_P7tistype__1)(void *__anonymous_object1112), __attribute__ ((unused)) signed int (*__eof__PFi_P7tistype__1)(void *__anonymous_object1113), __attribute__ ((unused)) void (*__open__PF_P7tistypePCcPCc__1)(void *__is__P7tistype_1, const char *__name__PCc_1, const char *__mode__PCc_1), __attribute__ ((unused)) void (*__close__PF_P7tistype__1)(void *__is__P7tistype_1), __attribute__ ((unused)) void *(*__read__PFP7tistype_P7tistypePcUl__1)(void *__anonymous_object1114, char *__anonymous_object1115, unsigned long int __anonymous_object1116), __attribute__ ((unused)) void *(*__ungetc__PFP7tistype_P7tistypec__1)(void *__anonymous_object1117, char __anonymous_object1118), __attribute__ ((unused)) signed int (*__fmt__PFi_P7tistypePCc__1)(void *__anonymous_object1119, const char *__fmt__PCc_1, ...), void *__anonymous_object1120, signed char *__anonymous_object1121);
     
    466466    struct ofstream *_tmp_cp_ret2;
    467467    __attribute__ ((unused)) struct ofstream *_thunk0(struct ofstream *_p0){
    468         return __endl__A0_1_0_0___sepPrt__PFb_Pd0___sepReset__PF_Pd0___sepReset__PF_Pd0b___sepGetCur__PFPCc_Pd0___sepSetCur__PF_Pd0PCc___getNL__PFb_Pd0___setNL__PF_Pd0b___sepOn__PF_Pd0___sepOff__PF_Pd0___sepDisable__PFb_Pd0___sepEnable__PFb_Pd0___sepGet__PFPCc_Pd0___sepSet__PF_Pd0PCc___sepGetTuple__PFPCc_Pd0___sepSetTuple__PF_Pd0PCc___fail__PFi_Pd0___flush__PFi_Pd0___open__PF_Pd0PCcPCc___close__PF_Pd0___write__PFPd0_Pd0PCcUl___fmt__PFi_Pd0PCc__FPd0_Pd0__1(((_Bool (*)(void *__anonymous_object1322))__sepPrt__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1323))__sepReset__F_P9sofstream__1), ((void (*)(void *__anonymous_object1324, _Bool __anonymous_object1325))__sepReset__F_P9sofstreamb__1), ((const char *(*)(void *__anonymous_object1326))__sepGetCur__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1327, const char *__anonymous_object1328))__sepSetCur__F_P9sofstreamPCc__1), ((_Bool (*)(void *__anonymous_object1329))__getNL__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1330, _Bool __anonymous_object1331))__setNL__F_P9sofstreamb__1), ((void (*)(void *__anonymous_object1332))__sepOn__F_P9sofstream__1), ((void (*)(void *__anonymous_object1333))__sepOff__F_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1334))__sepDisable__Fb_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1335))__sepEnable__Fb_P9sofstream__1), ((const char *(*)(void *__anonymous_object1336))__sepGet__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1337, const char *__anonymous_object1338))__sepSet__F_P9sofstreamPCc__1), ((const char *(*)(void *__anonymous_object1339))__sepGetTuple__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1340, const char *__anonymous_object1341))__sepSetTuple__F_P9sofstreamPCc__1), ((signed int (*)(void *__anonymous_object1342))__fail__Fi_P9sofstream__1), ((signed int (*)(void *__anonymous_object1343))__flush__Fi_P9sofstream__1), ((void (*)(void *__os__P7tostype_1, const char *__name__PCc_1, const char *__mode__PCc_1))__open__F_P9sofstreamPCcPCc__1), ((void (*)(void *__os__P7tostype_1))__close__F_P9sofstream__1), ((void *(*)(void *__anonymous_object1344, const char *__anonymous_object1345, unsigned long int __anonymous_object1346))__write__FP9sofstream_P9sofstreamPCcUl__1), ((signed int (*)(void *__anonymous_object1347, const char *__fmt__PCc_1, ...))__fmt__Fi_P9sofstreamPCc__1), _p0);
    469     }
    470     ((void)(((void)(_tmp_cp_ret2=___operator_bitor__A0_1_0_0___sepPrt__PFb_Pd0___sepReset__PF_Pd0___sepReset__PF_Pd0b___sepGetCur__PFPCc_Pd0___sepSetCur__PF_Pd0PCc___getNL__PFb_Pd0___setNL__PF_Pd0b___sepOn__PF_Pd0___sepOff__PF_Pd0___sepDisable__PFb_Pd0___sepEnable__PFb_Pd0___sepGet__PFPCc_Pd0___sepSet__PF_Pd0PCc___sepGetTuple__PFPCc_Pd0___sepSetTuple__PF_Pd0PCc___fail__PFi_Pd0___flush__PFi_Pd0___open__PF_Pd0PCcPCc___close__PF_Pd0___write__PFPd0_Pd0PCcUl___fmt__PFi_Pd0PCc__FPd0_Pd0PFPd0_Pd0___1(((_Bool (*)(void *__anonymous_object1348))__sepPrt__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1349))__sepReset__F_P9sofstream__1), ((void (*)(void *__anonymous_object1350, _Bool __anonymous_object1351))__sepReset__F_P9sofstreamb__1), ((const char *(*)(void *__anonymous_object1352))__sepGetCur__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1353, const char *__anonymous_object1354))__sepSetCur__F_P9sofstreamPCc__1), ((_Bool (*)(void *__anonymous_object1355))__getNL__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1356, _Bool __anonymous_object1357))__setNL__F_P9sofstreamb__1), ((void (*)(void *__anonymous_object1358))__sepOn__F_P9sofstream__1), ((void (*)(void *__anonymous_object1359))__sepOff__F_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1360))__sepDisable__Fb_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1361))__sepEnable__Fb_P9sofstream__1), ((const char *(*)(void *__anonymous_object1362))__sepGet__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1363, const char *__anonymous_object1364))__sepSet__F_P9sofstreamPCc__1), ((const char *(*)(void *__anonymous_object1365))__sepGetTuple__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1366, const char *__anonymous_object1367))__sepSetTuple__F_P9sofstreamPCc__1), ((signed int (*)(void *__anonymous_object1368))__fail__Fi_P9sofstream__1), ((signed int (*)(void *__anonymous_object1369))__flush__Fi_P9sofstream__1), ((void (*)(void *__os__P7tostype_1, const char *__name__PCc_1, const char *__mode__PCc_1))__open__F_P9sofstreamPCcPCc__1), ((void (*)(void *__os__P7tostype_1))__close__F_P9sofstream__1), ((void *(*)(void *__anonymous_object1370, const char *__anonymous_object1371, unsigned long int __anonymous_object1372))__write__FP9sofstream_P9sofstreamPCcUl__1), ((signed int (*)(void *__anonymous_object1373, const char *__fmt__PCc_1, ...))__fmt__Fi_P9sofstreamPCc__1), (((void)(_tmp_cp_ret1=___operator_bitor__A0_1_0_0___sepPrt__PFb_Pd0___sepReset__PF_Pd0___sepReset__PF_Pd0b___sepGetCur__PFPCc_Pd0___sepSetCur__PF_Pd0PCc___getNL__PFb_Pd0___setNL__PF_Pd0b___sepOn__PF_Pd0___sepOff__PF_Pd0___sepDisable__PFb_Pd0___sepEnable__PFb_Pd0___sepGet__PFPCc_Pd0___sepSet__PF_Pd0PCc___sepGetTuple__PFPCc_Pd0___sepSetTuple__PF_Pd0PCc___fail__PFi_Pd0___flush__PFi_Pd0___open__PF_Pd0PCcPCc___close__PF_Pd0___write__PFPd0_Pd0PCcUl___fmt__PFi_Pd0PCc__FPd0_Pd0c__1(((_Bool (*)(void *__anonymous_object1374))__sepPrt__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1375))__sepReset__F_P9sofstream__1), ((void (*)(void *__anonymous_object1376, _Bool __anonymous_object1377))__sepReset__F_P9sofstreamb__1), ((const char *(*)(void *__anonymous_object1378))__sepGetCur__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1379, const char *__anonymous_object1380))__sepSetCur__F_P9sofstreamPCc__1), ((_Bool (*)(void *__anonymous_object1381))__getNL__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1382, _Bool __anonymous_object1383))__setNL__F_P9sofstreamb__1), ((void (*)(void 
*__anonymous_object1384))__sepOn__F_P9sofstream__1), ((void (*)(void *__anonymous_object1385))__sepOff__F_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1386))__sepDisable__Fb_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1387))__sepEnable__Fb_P9sofstream__1), ((const char *(*)(void *__anonymous_object1388))__sepGet__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1389, const char *__anonymous_object1390))__sepSet__F_P9sofstreamPCc__1), ((const char *(*)(void *__anonymous_object1391))__sepGetTuple__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1392, const char *__anonymous_object1393))__sepSetTuple__F_P9sofstreamPCc__1), ((signed int (*)(void *__anonymous_object1394))__fail__Fi_P9sofstream__1), ((signed int (*)(void *__anonymous_object1395))__flush__Fi_P9sofstream__1), ((void (*)(void *__os__P7tostype_1, const char *__name__PCc_1, const char *__mode__PCc_1))__open__F_P9sofstreamPCcPCc__1), ((void (*)(void *__os__P7tostype_1))__close__F_P9sofstream__1), ((void *(*)(void *__anonymous_object1396, const char *__anonymous_object1397, unsigned long int __anonymous_object1398))__write__FP9sofstream_P9sofstreamPCcUl__1), ((signed int (*)(void *__anonymous_object1399, const char *__fmt__PCc_1, ...))__fmt__Fi_P9sofstreamPCc__1), (((void)(_tmp_cp_ret0=___operator_bitor__A0_1_0_0___sepPrt__PFb_Pd0___sepReset__PF_Pd0___sepReset__PF_Pd0b___sepGetCur__PFPCc_Pd0___sepSetCur__PF_Pd0PCc___getNL__PFb_Pd0___setNL__PF_Pd0b___sepOn__PF_Pd0___sepOff__PF_Pd0___sepDisable__PFb_Pd0___sepEnable__PFb_Pd0___sepGet__PFPCc_Pd0___sepSet__PF_Pd0PCc___sepGetTuple__PFPCc_Pd0___sepSetTuple__PF_Pd0PCc___fail__PFi_Pd0___flush__PFi_Pd0___open__PF_Pd0PCcPCc___close__PF_Pd0___write__PFPd0_Pd0PCcUl___fmt__PFi_Pd0PCc__FPd0_Pd0PCc__1(((_Bool (*)(void *__anonymous_object1400))__sepPrt__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1401))__sepReset__F_P9sofstream__1), ((void (*)(void *__anonymous_object1402, _Bool __anonymous_object1403))__sepReset__F_P9sofstreamb__1), ((const char *(*)(void *__anonymous_object1404))__sepGetCur__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1405, const char *__anonymous_object1406))__sepSetCur__F_P9sofstreamPCc__1), ((_Bool (*)(void *__anonymous_object1407))__getNL__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1408, _Bool __anonymous_object1409))__setNL__F_P9sofstreamb__1), ((void (*)(void *__anonymous_object1410))__sepOn__F_P9sofstream__1), ((void (*)(void *__anonymous_object1411))__sepOff__F_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1412))__sepDisable__Fb_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1413))__sepEnable__Fb_P9sofstream__1), ((const char *(*)(void *__anonymous_object1414))__sepGet__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1415, const char *__anonymous_object1416))__sepSet__F_P9sofstreamPCc__1), ((const char *(*)(void *__anonymous_object1417))__sepGetTuple__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1418, const char *__anonymous_object1419))__sepSetTuple__F_P9sofstreamPCc__1), ((signed int (*)(void *__anonymous_object1420))__fail__Fi_P9sofstream__1), ((signed int (*)(void *__anonymous_object1421))__flush__Fi_P9sofstream__1), ((void (*)(void *__os__P7tostype_1, const char *__name__PCc_1, const char *__mode__PCc_1))__open__F_P9sofstreamPCcPCc__1), ((void (*)(void *__os__P7tostype_1))__close__F_P9sofstream__1), ((void *(*)(void *__anonymous_object1422, const char *__anonymous_object1423, unsigned long int __anonymous_object1424))__write__FP9sofstream_P9sofstreamPCcUl__1), ((signed 
int (*)(void *__anonymous_object1425, const char *__fmt__PCc_1, ...))__fmt__Fi_P9sofstreamPCc__1), __sout__P9sofstream_1, "char "))) , _tmp_cp_ret0), __v__c_1))) , _tmp_cp_ret1), ((void *(*)(void *__anonymous_object1426))(&_thunk0))))) , _tmp_cp_ret2));
     468        return __endl__A0_1_0_0___sepPrt__PFb_Pd0___sepReset__PF_Pd0___sepReset__PF_Pd0b___sepGetCur__PFPCc_Pd0___sepSetCur__PF_Pd0PCc___getNL__PFb_Pd0___setNL__PF_Pd0b___sepOn__PF_Pd0___sepOff__PF_Pd0___sepDisable__PFb_Pd0___sepEnable__PFb_Pd0___sepGet__PFPCc_Pd0___sepSet__PF_Pd0PCc___sepGetTuple__PFPCc_Pd0___sepSetTuple__PF_Pd0PCc___fail__PFi_Pd0___flush__PFi_Pd0___open__PF_Pd0PCcPCc___close__PF_Pd0___write__PFPd0_Pd0PCcUl___fmt__PFi_Pd0PCc__FPd0_Pd0__1(((_Bool (*)(void *__anonymous_object1322))__sepPrt__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1323))__sepReset__F_P9sofstream__1), ((void (*)(void *__anonymous_object1324, _Bool __anonymous_object1325))__sepReset__F_P9sofstreamb__1), ((const char *(*)(void *__anonymous_object1326))__sepGetCur__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1327, const char *__anonymous_object1328))__sepSetCur__F_P9sofstreamPCc__1), ((_Bool (*)(void *__anonymous_object1329))__getNL__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1330, _Bool __anonymous_object1331))__setNL__F_P9sofstreamb__1), ((void (*)(void *__anonymous_object1332))__sepOn__F_P9sofstream__1), ((void (*)(void *__anonymous_object1333))__sepOff__F_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1334))__sepDisable__Fb_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1335))__sepEnable__Fb_P9sofstream__1), ((const char *(*)(void *__anonymous_object1336))__sepGet__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1337, const char *__anonymous_object1338))__sepSet__F_P9sofstreamPCc__1), ((const char *(*)(void *__anonymous_object1339))__sepGetTuple__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1340, const char *__anonymous_object1341))__sepSetTuple__F_P9sofstreamPCc__1), ((signed int (*)(void *__anonymous_object1342))__fail__Fi_P9sofstream__1), ((signed int (*)(void *__anonymous_object1343))__flush__Fi_P9sofstream__1), ((void (*)(void *__os__P7tostype_1, const char *__name__PCc_1, const char *__mode__PCc_1))__open__F_P9sofstreamPCcPCc__1), ((void (*)(void *__os__P7tostype_1))__close__F_P9sofstream__1), ((void *(*)(void *__anonymous_object1344, const char *__anonymous_object1345, unsigned long int __anonymous_object1346))__write__FP9sofstream_P9sofstreamPCcUl__1), ((signed int (*)(void *__anonymous_object1347, const char *__fmt__PCc_1, ...))__fmt__Fi_P9sofstreamPCc__1), ((void *)_p0));
     469    }
     470    ((void)(((void)(_tmp_cp_ret2=___operator_bitor__A0_1_0_0___sepPrt__PFb_Pd0___sepReset__PF_Pd0___sepReset__PF_Pd0b___sepGetCur__PFPCc_Pd0___sepSetCur__PF_Pd0PCc___getNL__PFb_Pd0___setNL__PF_Pd0b___sepOn__PF_Pd0___sepOff__PF_Pd0___sepDisable__PFb_Pd0___sepEnable__PFb_Pd0___sepGet__PFPCc_Pd0___sepSet__PF_Pd0PCc___sepGetTuple__PFPCc_Pd0___sepSetTuple__PF_Pd0PCc___fail__PFi_Pd0___flush__PFi_Pd0___open__PF_Pd0PCcPCc___close__PF_Pd0___write__PFPd0_Pd0PCcUl___fmt__PFi_Pd0PCc__FPd0_Pd0PFPd0_Pd0___1(((_Bool (*)(void *__anonymous_object1348))__sepPrt__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1349))__sepReset__F_P9sofstream__1), ((void (*)(void *__anonymous_object1350, _Bool __anonymous_object1351))__sepReset__F_P9sofstreamb__1), ((const char *(*)(void *__anonymous_object1352))__sepGetCur__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1353, const char *__anonymous_object1354))__sepSetCur__F_P9sofstreamPCc__1), ((_Bool (*)(void *__anonymous_object1355))__getNL__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1356, _Bool __anonymous_object1357))__setNL__F_P9sofstreamb__1), ((void (*)(void *__anonymous_object1358))__sepOn__F_P9sofstream__1), ((void (*)(void *__anonymous_object1359))__sepOff__F_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1360))__sepDisable__Fb_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1361))__sepEnable__Fb_P9sofstream__1), ((const char *(*)(void *__anonymous_object1362))__sepGet__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1363, const char *__anonymous_object1364))__sepSet__F_P9sofstreamPCc__1), ((const char *(*)(void *__anonymous_object1365))__sepGetTuple__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1366, const char *__anonymous_object1367))__sepSetTuple__F_P9sofstreamPCc__1), ((signed int (*)(void *__anonymous_object1368))__fail__Fi_P9sofstream__1), ((signed int (*)(void *__anonymous_object1369))__flush__Fi_P9sofstream__1), ((void (*)(void *__os__P7tostype_1, const char *__name__PCc_1, const char *__mode__PCc_1))__open__F_P9sofstreamPCcPCc__1), ((void (*)(void *__os__P7tostype_1))__close__F_P9sofstream__1), ((void *(*)(void *__anonymous_object1370, const char *__anonymous_object1371, unsigned long int __anonymous_object1372))__write__FP9sofstream_P9sofstreamPCcUl__1), ((signed int (*)(void *__anonymous_object1373, const char *__fmt__PCc_1, ...))__fmt__Fi_P9sofstreamPCc__1), ((void *)(((void)(_tmp_cp_ret1=___operator_bitor__A0_1_0_0___sepPrt__PFb_Pd0___sepReset__PF_Pd0___sepReset__PF_Pd0b___sepGetCur__PFPCc_Pd0___sepSetCur__PF_Pd0PCc___getNL__PFb_Pd0___setNL__PF_Pd0b___sepOn__PF_Pd0___sepOff__PF_Pd0___sepDisable__PFb_Pd0___sepEnable__PFb_Pd0___sepGet__PFPCc_Pd0___sepSet__PF_Pd0PCc___sepGetTuple__PFPCc_Pd0___sepSetTuple__PF_Pd0PCc___fail__PFi_Pd0___flush__PFi_Pd0___open__PF_Pd0PCcPCc___close__PF_Pd0___write__PFPd0_Pd0PCcUl___fmt__PFi_Pd0PCc__FPd0_Pd0c__1(((_Bool (*)(void *__anonymous_object1374))__sepPrt__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1375))__sepReset__F_P9sofstream__1), ((void (*)(void *__anonymous_object1376, _Bool __anonymous_object1377))__sepReset__F_P9sofstreamb__1), ((const char *(*)(void *__anonymous_object1378))__sepGetCur__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1379, const char *__anonymous_object1380))__sepSetCur__F_P9sofstreamPCc__1), ((_Bool (*)(void *__anonymous_object1381))__getNL__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1382, _Bool __anonymous_object1383))__setNL__F_P9sofstreamb__1), ((void (*)(void 
*__anonymous_object1384))__sepOn__F_P9sofstream__1), ((void (*)(void *__anonymous_object1385))__sepOff__F_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1386))__sepDisable__Fb_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1387))__sepEnable__Fb_P9sofstream__1), ((const char *(*)(void *__anonymous_object1388))__sepGet__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1389, const char *__anonymous_object1390))__sepSet__F_P9sofstreamPCc__1), ((const char *(*)(void *__anonymous_object1391))__sepGetTuple__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1392, const char *__anonymous_object1393))__sepSetTuple__F_P9sofstreamPCc__1), ((signed int (*)(void *__anonymous_object1394))__fail__Fi_P9sofstream__1), ((signed int (*)(void *__anonymous_object1395))__flush__Fi_P9sofstream__1), ((void (*)(void *__os__P7tostype_1, const char *__name__PCc_1, const char *__mode__PCc_1))__open__F_P9sofstreamPCcPCc__1), ((void (*)(void *__os__P7tostype_1))__close__F_P9sofstream__1), ((void *(*)(void *__anonymous_object1396, const char *__anonymous_object1397, unsigned long int __anonymous_object1398))__write__FP9sofstream_P9sofstreamPCcUl__1), ((signed int (*)(void *__anonymous_object1399, const char *__fmt__PCc_1, ...))__fmt__Fi_P9sofstreamPCc__1), ((void *)(((void)(_tmp_cp_ret0=___operator_bitor__A0_1_0_0___sepPrt__PFb_Pd0___sepReset__PF_Pd0___sepReset__PF_Pd0b___sepGetCur__PFPCc_Pd0___sepSetCur__PF_Pd0PCc___getNL__PFb_Pd0___setNL__PF_Pd0b___sepOn__PF_Pd0___sepOff__PF_Pd0___sepDisable__PFb_Pd0___sepEnable__PFb_Pd0___sepGet__PFPCc_Pd0___sepSet__PF_Pd0PCc___sepGetTuple__PFPCc_Pd0___sepSetTuple__PF_Pd0PCc___fail__PFi_Pd0___flush__PFi_Pd0___open__PF_Pd0PCcPCc___close__PF_Pd0___write__PFPd0_Pd0PCcUl___fmt__PFi_Pd0PCc__FPd0_Pd0PCc__1(((_Bool (*)(void *__anonymous_object1400))__sepPrt__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1401))__sepReset__F_P9sofstream__1), ((void (*)(void *__anonymous_object1402, _Bool __anonymous_object1403))__sepReset__F_P9sofstreamb__1), ((const char *(*)(void *__anonymous_object1404))__sepGetCur__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1405, const char *__anonymous_object1406))__sepSetCur__F_P9sofstreamPCc__1), ((_Bool (*)(void *__anonymous_object1407))__getNL__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1408, _Bool __anonymous_object1409))__setNL__F_P9sofstreamb__1), ((void (*)(void *__anonymous_object1410))__sepOn__F_P9sofstream__1), ((void (*)(void *__anonymous_object1411))__sepOff__F_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1412))__sepDisable__Fb_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1413))__sepEnable__Fb_P9sofstream__1), ((const char *(*)(void *__anonymous_object1414))__sepGet__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1415, const char *__anonymous_object1416))__sepSet__F_P9sofstreamPCc__1), ((const char *(*)(void *__anonymous_object1417))__sepGetTuple__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1418, const char *__anonymous_object1419))__sepSetTuple__F_P9sofstreamPCc__1), ((signed int (*)(void *__anonymous_object1420))__fail__Fi_P9sofstream__1), ((signed int (*)(void *__anonymous_object1421))__flush__Fi_P9sofstream__1), ((void (*)(void *__os__P7tostype_1, const char *__name__PCc_1, const char *__mode__PCc_1))__open__F_P9sofstreamPCcPCc__1), ((void (*)(void *__os__P7tostype_1))__close__F_P9sofstream__1), ((void *(*)(void *__anonymous_object1422, const char *__anonymous_object1423, unsigned long int __anonymous_object1424))__write__FP9sofstream_P9sofstreamPCcUl__1), 
((signed int (*)(void *__anonymous_object1425, const char *__fmt__PCc_1, ...))__fmt__Fi_P9sofstreamPCc__1), ((void *)__sout__P9sofstream_1), "char "))) , _tmp_cp_ret0)), __v__c_1))) , _tmp_cp_ret1)), ((void *(*)(void *__anonymous_object1426))(&_thunk0))))) , _tmp_cp_ret2));
    471471    ((void)(_tmp_cp_ret0) /* ^?{} */);
    472472    ((void)(_tmp_cp_ret1) /* ^?{} */);
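The hunk above changes the generated thunk _thunk0 so that the stream pointer _p0 is forwarded to the mangled __endl__ routine through an explicit (void *) cast, matching the erased-pointer calling convention used by the other polymorphic output calls in this expected output. The following is only an illustrative, hand-written sketch of that adapter pattern, not the generated code: the names ofstream, endl_impl, endl_thunk and sout are stand-ins chosen for readability, and the real translator output additionally threads through all of the assertion parameters shown in the diff.

    /* Sketch only: simplified form of the thunk pattern in the hunk above.
     * The polymorphic implementation sees the stream as an erased void *,
     * and the thunk adapts the typed call site to that interface --
     * the (void *)_p0 cast added by this changeset plays the same role. */
    #include <stdio.h>

    struct ofstream { FILE *file; };            /* stand-in for the real stream type */

    /* manipulator operating on the erased pointer */
    static void *endl_impl(void *os) {
            fputc('\n', ((struct ofstream *)os)->file);
            return os;
    }

    /* thunk: typed interface at the call site, erased interface underneath */
    static struct ofstream *endl_thunk(struct ofstream *_p0) {
            return (struct ofstream *)endl_impl((void *)_p0);
    }

    int main(void) {
            struct ofstream sout = { stdout };
            fputs("char x", sout.file);
            endl_thunk(&sout);                  /* appends the newline */
            return 0;
    }

In the generated code the same cast is also applied to the intermediate _tmp_cp_ret results before they are passed on to the next ?|? call, which is why the added lines wrap each nested call in ((void *) ... ).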
     
    478478    struct ofstream *_tmp_cp_ret5;
    479479    __attribute__ ((unused)) struct ofstream *_thunk1(struct ofstream *_p0){
    480         return __endl__A0_1_0_0___sepPrt__PFb_Pd0___sepReset__PF_Pd0___sepReset__PF_Pd0b___sepGetCur__PFPCc_Pd0___sepSetCur__PF_Pd0PCc___getNL__PFb_Pd0___setNL__PF_Pd0b___sepOn__PF_Pd0___sepOff__PF_Pd0___sepDisable__PFb_Pd0___sepEnable__PFb_Pd0___sepGet__PFPCc_Pd0___sepSet__PF_Pd0PCc___sepGetTuple__PFPCc_Pd0___sepSetTuple__PF_Pd0PCc___fail__PFi_Pd0___flush__PFi_Pd0___open__PF_Pd0PCcPCc___close__PF_Pd0___write__PFPd0_Pd0PCcUl___fmt__PFi_Pd0PCc__FPd0_Pd0__1(((_Bool (*)(void *__anonymous_object1427))__sepPrt__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1428))__sepReset__F_P9sofstream__1), ((void (*)(void *__anonymous_object1429, _Bool __anonymous_object1430))__sepReset__F_P9sofstreamb__1), ((const char *(*)(void *__anonymous_object1431))__sepGetCur__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1432, const char *__anonymous_object1433))__sepSetCur__F_P9sofstreamPCc__1), ((_Bool (*)(void *__anonymous_object1434))__getNL__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1435, _Bool __anonymous_object1436))__setNL__F_P9sofstreamb__1), ((void (*)(void *__anonymous_object1437))__sepOn__F_P9sofstream__1), ((void (*)(void *__anonymous_object1438))__sepOff__F_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1439))__sepDisable__Fb_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1440))__sepEnable__Fb_P9sofstream__1), ((const char *(*)(void *__anonymous_object1441))__sepGet__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1442, const char *__anonymous_object1443))__sepSet__F_P9sofstreamPCc__1), ((const char *(*)(void *__anonymous_object1444))__sepGetTuple__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1445, const char *__anonymous_object1446))__sepSetTuple__F_P9sofstreamPCc__1), ((signed int (*)(void *__anonymous_object1447))__fail__Fi_P9sofstream__1), ((signed int (*)(void *__anonymous_object1448))__flush__Fi_P9sofstream__1), ((void (*)(void *__os__P7tostype_1, const char *__name__PCc_1, const char *__mode__PCc_1))__open__F_P9sofstreamPCcPCc__1), ((void (*)(void *__os__P7tostype_1))__close__F_P9sofstream__1), ((void *(*)(void *__anonymous_object1449, const char *__anonymous_object1450, unsigned long int __anonymous_object1451))__write__FP9sofstream_P9sofstreamPCcUl__1), ((signed int (*)(void *__anonymous_object1452, const char *__fmt__PCc_1, ...))__fmt__Fi_P9sofstreamPCc__1), _p0);
    481     }
    482     ((void)(((void)(_tmp_cp_ret5=___operator_bitor__A0_1_0_0___sepPrt__PFb_Pd0___sepReset__PF_Pd0___sepReset__PF_Pd0b___sepGetCur__PFPCc_Pd0___sepSetCur__PF_Pd0PCc___getNL__PFb_Pd0___setNL__PF_Pd0b___sepOn__PF_Pd0___sepOff__PF_Pd0___sepDisable__PFb_Pd0___sepEnable__PFb_Pd0___sepGet__PFPCc_Pd0___sepSet__PF_Pd0PCc___sepGetTuple__PFPCc_Pd0___sepSetTuple__PF_Pd0PCc___fail__PFi_Pd0___flush__PFi_Pd0___open__PF_Pd0PCcPCc___close__PF_Pd0___write__PFPd0_Pd0PCcUl___fmt__PFi_Pd0PCc__FPd0_Pd0PFPd0_Pd0___1(((_Bool (*)(void *__anonymous_object1453))__sepPrt__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1454))__sepReset__F_P9sofstream__1), ((void (*)(void *__anonymous_object1455, _Bool __anonymous_object1456))__sepReset__F_P9sofstreamb__1), ((const char *(*)(void *__anonymous_object1457))__sepGetCur__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1458, const char *__anonymous_object1459))__sepSetCur__F_P9sofstreamPCc__1), ((_Bool (*)(void *__anonymous_object1460))__getNL__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1461, _Bool __anonymous_object1462))__setNL__F_P9sofstreamb__1), ((void (*)(void *__anonymous_object1463))__sepOn__F_P9sofstream__1), ((void (*)(void *__anonymous_object1464))__sepOff__F_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1465))__sepDisable__Fb_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1466))__sepEnable__Fb_P9sofstream__1), ((const char *(*)(void *__anonymous_object1467))__sepGet__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1468, const char *__anonymous_object1469))__sepSet__F_P9sofstreamPCc__1), ((const char *(*)(void *__anonymous_object1470))__sepGetTuple__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1471, const char *__anonymous_object1472))__sepSetTuple__F_P9sofstreamPCc__1), ((signed int (*)(void *__anonymous_object1473))__fail__Fi_P9sofstream__1), ((signed int (*)(void *__anonymous_object1474))__flush__Fi_P9sofstream__1), ((void (*)(void *__os__P7tostype_1, const char *__name__PCc_1, const char *__mode__PCc_1))__open__F_P9sofstreamPCcPCc__1), ((void (*)(void *__os__P7tostype_1))__close__F_P9sofstream__1), ((void *(*)(void *__anonymous_object1475, const char *__anonymous_object1476, unsigned long int __anonymous_object1477))__write__FP9sofstream_P9sofstreamPCcUl__1), ((signed int (*)(void *__anonymous_object1478, const char *__fmt__PCc_1, ...))__fmt__Fi_P9sofstreamPCc__1), (((void)(_tmp_cp_ret4=___operator_bitor__A0_1_0_0___sepPrt__PFb_Pd0___sepReset__PF_Pd0___sepReset__PF_Pd0b___sepGetCur__PFPCc_Pd0___sepSetCur__PF_Pd0PCc___getNL__PFb_Pd0___setNL__PF_Pd0b___sepOn__PF_Pd0___sepOff__PF_Pd0___sepDisable__PFb_Pd0___sepEnable__PFb_Pd0___sepGet__PFPCc_Pd0___sepSet__PF_Pd0PCc___sepGetTuple__PFPCc_Pd0___sepSetTuple__PF_Pd0PCc___fail__PFi_Pd0___flush__PFi_Pd0___open__PF_Pd0PCcPCc___close__PF_Pd0___write__PFPd0_Pd0PCcUl___fmt__PFi_Pd0PCc__FPd0_Pd0Sc__1(((_Bool (*)(void *__anonymous_object1479))__sepPrt__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1480))__sepReset__F_P9sofstream__1), ((void (*)(void *__anonymous_object1481, _Bool __anonymous_object1482))__sepReset__F_P9sofstreamb__1), ((const char *(*)(void *__anonymous_object1483))__sepGetCur__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1484, const char *__anonymous_object1485))__sepSetCur__F_P9sofstreamPCc__1), ((_Bool (*)(void *__anonymous_object1486))__getNL__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1487, _Bool __anonymous_object1488))__setNL__F_P9sofstreamb__1), ((void (*)(void 
*__anonymous_object1489))__sepOn__F_P9sofstream__1), ((void (*)(void *__anonymous_object1490))__sepOff__F_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1491))__sepDisable__Fb_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1492))__sepEnable__Fb_P9sofstream__1), ((const char *(*)(void *__anonymous_object1493))__sepGet__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1494, const char *__anonymous_object1495))__sepSet__F_P9sofstreamPCc__1), ((const char *(*)(void *__anonymous_object1496))__sepGetTuple__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1497, const char *__anonymous_object1498))__sepSetTuple__F_P9sofstreamPCc__1), ((signed int (*)(void *__anonymous_object1499))__fail__Fi_P9sofstream__1), ((signed int (*)(void *__anonymous_object1500))__flush__Fi_P9sofstream__1), ((void (*)(void *__os__P7tostype_1, const char *__name__PCc_1, const char *__mode__PCc_1))__open__F_P9sofstreamPCcPCc__1), ((void (*)(void *__os__P7tostype_1))__close__F_P9sofstream__1), ((void *(*)(void *__anonymous_object1501, const char *__anonymous_object1502, unsigned long int __anonymous_object1503))__write__FP9sofstream_P9sofstreamPCcUl__1), ((signed int (*)(void *__anonymous_object1504, const char *__fmt__PCc_1, ...))__fmt__Fi_P9sofstreamPCc__1), (((void)(_tmp_cp_ret3=___operator_bitor__A0_1_0_0___sepPrt__PFb_Pd0___sepReset__PF_Pd0___sepReset__PF_Pd0b___sepGetCur__PFPCc_Pd0___sepSetCur__PF_Pd0PCc___getNL__PFb_Pd0___setNL__PF_Pd0b___sepOn__PF_Pd0___sepOff__PF_Pd0___sepDisable__PFb_Pd0___sepEnable__PFb_Pd0___sepGet__PFPCc_Pd0___sepSet__PF_Pd0PCc___sepGetTuple__PFPCc_Pd0___sepSetTuple__PF_Pd0PCc___fail__PFi_Pd0___flush__PFi_Pd0___open__PF_Pd0PCcPCc___close__PF_Pd0___write__PFPd0_Pd0PCcUl___fmt__PFi_Pd0PCc__FPd0_Pd0PCc__1(((_Bool (*)(void *__anonymous_object1505))__sepPrt__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1506))__sepReset__F_P9sofstream__1), ((void (*)(void *__anonymous_object1507, _Bool __anonymous_object1508))__sepReset__F_P9sofstreamb__1), ((const char *(*)(void *__anonymous_object1509))__sepGetCur__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1510, const char *__anonymous_object1511))__sepSetCur__F_P9sofstreamPCc__1), ((_Bool (*)(void *__anonymous_object1512))__getNL__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1513, _Bool __anonymous_object1514))__setNL__F_P9sofstreamb__1), ((void (*)(void *__anonymous_object1515))__sepOn__F_P9sofstream__1), ((void (*)(void *__anonymous_object1516))__sepOff__F_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1517))__sepDisable__Fb_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1518))__sepEnable__Fb_P9sofstream__1), ((const char *(*)(void *__anonymous_object1519))__sepGet__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1520, const char *__anonymous_object1521))__sepSet__F_P9sofstreamPCc__1), ((const char *(*)(void *__anonymous_object1522))__sepGetTuple__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1523, const char *__anonymous_object1524))__sepSetTuple__F_P9sofstreamPCc__1), ((signed int (*)(void *__anonymous_object1525))__fail__Fi_P9sofstream__1), ((signed int (*)(void *__anonymous_object1526))__flush__Fi_P9sofstream__1), ((void (*)(void *__os__P7tostype_1, const char *__name__PCc_1, const char *__mode__PCc_1))__open__F_P9sofstreamPCcPCc__1), ((void (*)(void *__os__P7tostype_1))__close__F_P9sofstream__1), ((void *(*)(void *__anonymous_object1527, const char *__anonymous_object1528, unsigned long int __anonymous_object1529))__write__FP9sofstream_P9sofstreamPCcUl__1), ((signed 
int (*)(void *__anonymous_object1530, const char *__fmt__PCc_1, ...))__fmt__Fi_P9sofstreamPCc__1), __sout__P9sofstream_1, "signed char "))) , _tmp_cp_ret3), __v__Sc_1))) , _tmp_cp_ret4), ((void *(*)(void *__anonymous_object1531))(&_thunk1))))) , _tmp_cp_ret5));
     480        return __endl__A0_1_0_0___sepPrt__PFb_Pd0___sepReset__PF_Pd0___sepReset__PF_Pd0b___sepGetCur__PFPCc_Pd0___sepSetCur__PF_Pd0PCc___getNL__PFb_Pd0___setNL__PF_Pd0b___sepOn__PF_Pd0___sepOff__PF_Pd0___sepDisable__PFb_Pd0___sepEnable__PFb_Pd0___sepGet__PFPCc_Pd0___sepSet__PF_Pd0PCc___sepGetTuple__PFPCc_Pd0___sepSetTuple__PF_Pd0PCc___fail__PFi_Pd0___flush__PFi_Pd0___open__PF_Pd0PCcPCc___close__PF_Pd0___write__PFPd0_Pd0PCcUl___fmt__PFi_Pd0PCc__FPd0_Pd0__1(((_Bool (*)(void *__anonymous_object1427))__sepPrt__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1428))__sepReset__F_P9sofstream__1), ((void (*)(void *__anonymous_object1429, _Bool __anonymous_object1430))__sepReset__F_P9sofstreamb__1), ((const char *(*)(void *__anonymous_object1431))__sepGetCur__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1432, const char *__anonymous_object1433))__sepSetCur__F_P9sofstreamPCc__1), ((_Bool (*)(void *__anonymous_object1434))__getNL__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1435, _Bool __anonymous_object1436))__setNL__F_P9sofstreamb__1), ((void (*)(void *__anonymous_object1437))__sepOn__F_P9sofstream__1), ((void (*)(void *__anonymous_object1438))__sepOff__F_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1439))__sepDisable__Fb_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1440))__sepEnable__Fb_P9sofstream__1), ((const char *(*)(void *__anonymous_object1441))__sepGet__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1442, const char *__anonymous_object1443))__sepSet__F_P9sofstreamPCc__1), ((const char *(*)(void *__anonymous_object1444))__sepGetTuple__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1445, const char *__anonymous_object1446))__sepSetTuple__F_P9sofstreamPCc__1), ((signed int (*)(void *__anonymous_object1447))__fail__Fi_P9sofstream__1), ((signed int (*)(void *__anonymous_object1448))__flush__Fi_P9sofstream__1), ((void (*)(void *__os__P7tostype_1, const char *__name__PCc_1, const char *__mode__PCc_1))__open__F_P9sofstreamPCcPCc__1), ((void (*)(void *__os__P7tostype_1))__close__F_P9sofstream__1), ((void *(*)(void *__anonymous_object1449, const char *__anonymous_object1450, unsigned long int __anonymous_object1451))__write__FP9sofstream_P9sofstreamPCcUl__1), ((signed int (*)(void *__anonymous_object1452, const char *__fmt__PCc_1, ...))__fmt__Fi_P9sofstreamPCc__1), ((void *)_p0));
     481    }
     482    ((void)(((void)(_tmp_cp_ret5=___operator_bitor__A0_1_0_0___sepPrt__PFb_Pd0___sepReset__PF_Pd0___sepReset__PF_Pd0b___sepGetCur__PFPCc_Pd0___sepSetCur__PF_Pd0PCc___getNL__PFb_Pd0___setNL__PF_Pd0b___sepOn__PF_Pd0___sepOff__PF_Pd0___sepDisable__PFb_Pd0___sepEnable__PFb_Pd0___sepGet__PFPCc_Pd0___sepSet__PF_Pd0PCc___sepGetTuple__PFPCc_Pd0___sepSetTuple__PF_Pd0PCc___fail__PFi_Pd0___flush__PFi_Pd0___open__PF_Pd0PCcPCc___close__PF_Pd0___write__PFPd0_Pd0PCcUl___fmt__PFi_Pd0PCc__FPd0_Pd0PFPd0_Pd0___1(((_Bool (*)(void *__anonymous_object1453))__sepPrt__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1454))__sepReset__F_P9sofstream__1), ((void (*)(void *__anonymous_object1455, _Bool __anonymous_object1456))__sepReset__F_P9sofstreamb__1), ((const char *(*)(void *__anonymous_object1457))__sepGetCur__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1458, const char *__anonymous_object1459))__sepSetCur__F_P9sofstreamPCc__1), ((_Bool (*)(void *__anonymous_object1460))__getNL__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1461, _Bool __anonymous_object1462))__setNL__F_P9sofstreamb__1), ((void (*)(void *__anonymous_object1463))__sepOn__F_P9sofstream__1), ((void (*)(void *__anonymous_object1464))__sepOff__F_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1465))__sepDisable__Fb_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1466))__sepEnable__Fb_P9sofstream__1), ((const char *(*)(void *__anonymous_object1467))__sepGet__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1468, const char *__anonymous_object1469))__sepSet__F_P9sofstreamPCc__1), ((const char *(*)(void *__anonymous_object1470))__sepGetTuple__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1471, const char *__anonymous_object1472))__sepSetTuple__F_P9sofstreamPCc__1), ((signed int (*)(void *__anonymous_object1473))__fail__Fi_P9sofstream__1), ((signed int (*)(void *__anonymous_object1474))__flush__Fi_P9sofstream__1), ((void (*)(void *__os__P7tostype_1, const char *__name__PCc_1, const char *__mode__PCc_1))__open__F_P9sofstreamPCcPCc__1), ((void (*)(void *__os__P7tostype_1))__close__F_P9sofstream__1), ((void *(*)(void *__anonymous_object1475, const char *__anonymous_object1476, unsigned long int __anonymous_object1477))__write__FP9sofstream_P9sofstreamPCcUl__1), ((signed int (*)(void *__anonymous_object1478, const char *__fmt__PCc_1, ...))__fmt__Fi_P9sofstreamPCc__1), ((void *)(((void)(_tmp_cp_ret4=___operator_bitor__A0_1_0_0___sepPrt__PFb_Pd0___sepReset__PF_Pd0___sepReset__PF_Pd0b___sepGetCur__PFPCc_Pd0___sepSetCur__PF_Pd0PCc___getNL__PFb_Pd0___setNL__PF_Pd0b___sepOn__PF_Pd0___sepOff__PF_Pd0___sepDisable__PFb_Pd0___sepEnable__PFb_Pd0___sepGet__PFPCc_Pd0___sepSet__PF_Pd0PCc___sepGetTuple__PFPCc_Pd0___sepSetTuple__PF_Pd0PCc___fail__PFi_Pd0___flush__PFi_Pd0___open__PF_Pd0PCcPCc___close__PF_Pd0___write__PFPd0_Pd0PCcUl___fmt__PFi_Pd0PCc__FPd0_Pd0Sc__1(((_Bool (*)(void *__anonymous_object1479))__sepPrt__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1480))__sepReset__F_P9sofstream__1), ((void (*)(void *__anonymous_object1481, _Bool __anonymous_object1482))__sepReset__F_P9sofstreamb__1), ((const char *(*)(void *__anonymous_object1483))__sepGetCur__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1484, const char *__anonymous_object1485))__sepSetCur__F_P9sofstreamPCc__1), ((_Bool (*)(void *__anonymous_object1486))__getNL__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1487, _Bool __anonymous_object1488))__setNL__F_P9sofstreamb__1), ((void (*)(void 
*__anonymous_object1489))__sepOn__F_P9sofstream__1), ((void (*)(void *__anonymous_object1490))__sepOff__F_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1491))__sepDisable__Fb_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1492))__sepEnable__Fb_P9sofstream__1), ((const char *(*)(void *__anonymous_object1493))__sepGet__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1494, const char *__anonymous_object1495))__sepSet__F_P9sofstreamPCc__1), ((const char *(*)(void *__anonymous_object1496))__sepGetTuple__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1497, const char *__anonymous_object1498))__sepSetTuple__F_P9sofstreamPCc__1), ((signed int (*)(void *__anonymous_object1499))__fail__Fi_P9sofstream__1), ((signed int (*)(void *__anonymous_object1500))__flush__Fi_P9sofstream__1), ((void (*)(void *__os__P7tostype_1, const char *__name__PCc_1, const char *__mode__PCc_1))__open__F_P9sofstreamPCcPCc__1), ((void (*)(void *__os__P7tostype_1))__close__F_P9sofstream__1), ((void *(*)(void *__anonymous_object1501, const char *__anonymous_object1502, unsigned long int __anonymous_object1503))__write__FP9sofstream_P9sofstreamPCcUl__1), ((signed int (*)(void *__anonymous_object1504, const char *__fmt__PCc_1, ...))__fmt__Fi_P9sofstreamPCc__1), ((void *)(((void)(_tmp_cp_ret3=___operator_bitor__A0_1_0_0___sepPrt__PFb_Pd0___sepReset__PF_Pd0___sepReset__PF_Pd0b___sepGetCur__PFPCc_Pd0___sepSetCur__PF_Pd0PCc___getNL__PFb_Pd0___setNL__PF_Pd0b___sepOn__PF_Pd0___sepOff__PF_Pd0___sepDisable__PFb_Pd0___sepEnable__PFb_Pd0___sepGet__PFPCc_Pd0___sepSet__PF_Pd0PCc___sepGetTuple__PFPCc_Pd0___sepSetTuple__PF_Pd0PCc___fail__PFi_Pd0___flush__PFi_Pd0___open__PF_Pd0PCcPCc___close__PF_Pd0___write__PFPd0_Pd0PCcUl___fmt__PFi_Pd0PCc__FPd0_Pd0PCc__1(((_Bool (*)(void *__anonymous_object1505))__sepPrt__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1506))__sepReset__F_P9sofstream__1), ((void (*)(void *__anonymous_object1507, _Bool __anonymous_object1508))__sepReset__F_P9sofstreamb__1), ((const char *(*)(void *__anonymous_object1509))__sepGetCur__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1510, const char *__anonymous_object1511))__sepSetCur__F_P9sofstreamPCc__1), ((_Bool (*)(void *__anonymous_object1512))__getNL__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1513, _Bool __anonymous_object1514))__setNL__F_P9sofstreamb__1), ((void (*)(void *__anonymous_object1515))__sepOn__F_P9sofstream__1), ((void (*)(void *__anonymous_object1516))__sepOff__F_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1517))__sepDisable__Fb_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1518))__sepEnable__Fb_P9sofstream__1), ((const char *(*)(void *__anonymous_object1519))__sepGet__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1520, const char *__anonymous_object1521))__sepSet__F_P9sofstreamPCc__1), ((const char *(*)(void *__anonymous_object1522))__sepGetTuple__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1523, const char *__anonymous_object1524))__sepSetTuple__F_P9sofstreamPCc__1), ((signed int (*)(void *__anonymous_object1525))__fail__Fi_P9sofstream__1), ((signed int (*)(void *__anonymous_object1526))__flush__Fi_P9sofstream__1), ((void (*)(void *__os__P7tostype_1, const char *__name__PCc_1, const char *__mode__PCc_1))__open__F_P9sofstreamPCcPCc__1), ((void (*)(void *__os__P7tostype_1))__close__F_P9sofstream__1), ((void *(*)(void *__anonymous_object1527, const char *__anonymous_object1528, unsigned long int __anonymous_object1529))__write__FP9sofstream_P9sofstreamPCcUl__1), 
((signed int (*)(void *__anonymous_object1530, const char *__fmt__PCc_1, ...))__fmt__Fi_P9sofstreamPCc__1), ((void *)__sout__P9sofstream_1), "signed char "))) , _tmp_cp_ret3)), __v__Sc_1))) , _tmp_cp_ret4)), ((void *(*)(void *__anonymous_object1531))(&_thunk1))))) , _tmp_cp_ret5));
    483483    ((void)(_tmp_cp_ret3) /* ^?{} */);
    484484    ((void)(_tmp_cp_ret4) /* ^?{} */);
     
    490490    struct ofstream *_tmp_cp_ret8;
    491491    __attribute__ ((unused)) struct ofstream *_thunk2(struct ofstream *_p0){
    492         return __endl__A0_1_0_0___sepPrt__PFb_Pd0___sepReset__PF_Pd0___sepReset__PF_Pd0b___sepGetCur__PFPCc_Pd0___sepSetCur__PF_Pd0PCc___getNL__PFb_Pd0___setNL__PF_Pd0b___sepOn__PF_Pd0___sepOff__PF_Pd0___sepDisable__PFb_Pd0___sepEnable__PFb_Pd0___sepGet__PFPCc_Pd0___sepSet__PF_Pd0PCc___sepGetTuple__PFPCc_Pd0___sepSetTuple__PF_Pd0PCc___fail__PFi_Pd0___flush__PFi_Pd0___open__PF_Pd0PCcPCc___close__PF_Pd0___write__PFPd0_Pd0PCcUl___fmt__PFi_Pd0PCc__FPd0_Pd0__1(((_Bool (*)(void *__anonymous_object1532))__sepPrt__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1533))__sepReset__F_P9sofstream__1), ((void (*)(void *__anonymous_object1534, _Bool __anonymous_object1535))__sepReset__F_P9sofstreamb__1), ((const char *(*)(void *__anonymous_object1536))__sepGetCur__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1537, const char *__anonymous_object1538))__sepSetCur__F_P9sofstreamPCc__1), ((_Bool (*)(void *__anonymous_object1539))__getNL__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1540, _Bool __anonymous_object1541))__setNL__F_P9sofstreamb__1), ((void (*)(void *__anonymous_object1542))__sepOn__F_P9sofstream__1), ((void (*)(void *__anonymous_object1543))__sepOff__F_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1544))__sepDisable__Fb_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1545))__sepEnable__Fb_P9sofstream__1), ((const char *(*)(void *__anonymous_object1546))__sepGet__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1547, const char *__anonymous_object1548))__sepSet__F_P9sofstreamPCc__1), ((const char *(*)(void *__anonymous_object1549))__sepGetTuple__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1550, const char *__anonymous_object1551))__sepSetTuple__F_P9sofstreamPCc__1), ((signed int (*)(void *__anonymous_object1552))__fail__Fi_P9sofstream__1), ((signed int (*)(void *__anonymous_object1553))__flush__Fi_P9sofstream__1), ((void (*)(void *__os__P7tostype_1, const char *__name__PCc_1, const char *__mode__PCc_1))__open__F_P9sofstreamPCcPCc__1), ((void (*)(void *__os__P7tostype_1))__close__F_P9sofstream__1), ((void *(*)(void *__anonymous_object1554, const char *__anonymous_object1555, unsigned long int __anonymous_object1556))__write__FP9sofstream_P9sofstreamPCcUl__1), ((signed int (*)(void *__anonymous_object1557, const char *__fmt__PCc_1, ...))__fmt__Fi_P9sofstreamPCc__1), _p0);
    493     }
    494     ((void)(((void)(_tmp_cp_ret8=___operator_bitor__A0_1_0_0___sepPrt__PFb_Pd0___sepReset__PF_Pd0___sepReset__PF_Pd0b___sepGetCur__PFPCc_Pd0___sepSetCur__PF_Pd0PCc___getNL__PFb_Pd0___setNL__PF_Pd0b___sepOn__PF_Pd0___sepOff__PF_Pd0___sepDisable__PFb_Pd0___sepEnable__PFb_Pd0___sepGet__PFPCc_Pd0___sepSet__PF_Pd0PCc___sepGetTuple__PFPCc_Pd0___sepSetTuple__PF_Pd0PCc___fail__PFi_Pd0___flush__PFi_Pd0___open__PF_Pd0PCcPCc___close__PF_Pd0___write__PFPd0_Pd0PCcUl___fmt__PFi_Pd0PCc__FPd0_Pd0PFPd0_Pd0___1(((_Bool (*)(void *__anonymous_object1558))__sepPrt__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1559))__sepReset__F_P9sofstream__1), ((void (*)(void *__anonymous_object1560, _Bool __anonymous_object1561))__sepReset__F_P9sofstreamb__1), ((const char *(*)(void *__anonymous_object1562))__sepGetCur__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1563, const char *__anonymous_object1564))__sepSetCur__F_P9sofstreamPCc__1), ((_Bool (*)(void *__anonymous_object1565))__getNL__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1566, _Bool __anonymous_object1567))__setNL__F_P9sofstreamb__1), ((void (*)(void *__anonymous_object1568))__sepOn__F_P9sofstream__1), ((void (*)(void *__anonymous_object1569))__sepOff__F_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1570))__sepDisable__Fb_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1571))__sepEnable__Fb_P9sofstream__1), ((const char *(*)(void *__anonymous_object1572))__sepGet__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1573, const char *__anonymous_object1574))__sepSet__F_P9sofstreamPCc__1), ((const char *(*)(void *__anonymous_object1575))__sepGetTuple__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1576, const char *__anonymous_object1577))__sepSetTuple__F_P9sofstreamPCc__1), ((signed int (*)(void *__anonymous_object1578))__fail__Fi_P9sofstream__1), ((signed int (*)(void *__anonymous_object1579))__flush__Fi_P9sofstream__1), ((void (*)(void *__os__P7tostype_1, const char *__name__PCc_1, const char *__mode__PCc_1))__open__F_P9sofstreamPCcPCc__1), ((void (*)(void *__os__P7tostype_1))__close__F_P9sofstream__1), ((void *(*)(void *__anonymous_object1580, const char *__anonymous_object1581, unsigned long int __anonymous_object1582))__write__FP9sofstream_P9sofstreamPCcUl__1), ((signed int (*)(void *__anonymous_object1583, const char *__fmt__PCc_1, ...))__fmt__Fi_P9sofstreamPCc__1), (((void)(_tmp_cp_ret7=___operator_bitor__A0_1_0_0___sepPrt__PFb_Pd0___sepReset__PF_Pd0___sepReset__PF_Pd0b___sepGetCur__PFPCc_Pd0___sepSetCur__PF_Pd0PCc___getNL__PFb_Pd0___setNL__PF_Pd0b___sepOn__PF_Pd0___sepOff__PF_Pd0___sepDisable__PFb_Pd0___sepEnable__PFb_Pd0___sepGet__PFPCc_Pd0___sepSet__PF_Pd0PCc___sepGetTuple__PFPCc_Pd0___sepSetTuple__PF_Pd0PCc___fail__PFi_Pd0___flush__PFi_Pd0___open__PF_Pd0PCcPCc___close__PF_Pd0___write__PFPd0_Pd0PCcUl___fmt__PFi_Pd0PCc__FPd0_Pd0Uc__1(((_Bool (*)(void *__anonymous_object1584))__sepPrt__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1585))__sepReset__F_P9sofstream__1), ((void (*)(void *__anonymous_object1586, _Bool __anonymous_object1587))__sepReset__F_P9sofstreamb__1), ((const char *(*)(void *__anonymous_object1588))__sepGetCur__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1589, const char *__anonymous_object1590))__sepSetCur__F_P9sofstreamPCc__1), ((_Bool (*)(void *__anonymous_object1591))__getNL__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1592, _Bool __anonymous_object1593))__setNL__F_P9sofstreamb__1), ((void (*)(void 
*__anonymous_object1594))__sepOn__F_P9sofstream__1), ((void (*)(void *__anonymous_object1595))__sepOff__F_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1596))__sepDisable__Fb_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1597))__sepEnable__Fb_P9sofstream__1), ((const char *(*)(void *__anonymous_object1598))__sepGet__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1599, const char *__anonymous_object1600))__sepSet__F_P9sofstreamPCc__1), ((const char *(*)(void *__anonymous_object1601))__sepGetTuple__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1602, const char *__anonymous_object1603))__sepSetTuple__F_P9sofstreamPCc__1), ((signed int (*)(void *__anonymous_object1604))__fail__Fi_P9sofstream__1), ((signed int (*)(void *__anonymous_object1605))__flush__Fi_P9sofstream__1), ((void (*)(void *__os__P7tostype_1, const char *__name__PCc_1, const char *__mode__PCc_1))__open__F_P9sofstreamPCcPCc__1), ((void (*)(void *__os__P7tostype_1))__close__F_P9sofstream__1), ((void *(*)(void *__anonymous_object1606, const char *__anonymous_object1607, unsigned long int __anonymous_object1608))__write__FP9sofstream_P9sofstreamPCcUl__1), ((signed int (*)(void *__anonymous_object1609, const char *__fmt__PCc_1, ...))__fmt__Fi_P9sofstreamPCc__1), (((void)(_tmp_cp_ret6=___operator_bitor__A0_1_0_0___sepPrt__PFb_Pd0___sepReset__PF_Pd0___sepReset__PF_Pd0b___sepGetCur__PFPCc_Pd0___sepSetCur__PF_Pd0PCc___getNL__PFb_Pd0___setNL__PF_Pd0b___sepOn__PF_Pd0___sepOff__PF_Pd0___sepDisable__PFb_Pd0___sepEnable__PFb_Pd0___sepGet__PFPCc_Pd0___sepSet__PF_Pd0PCc___sepGetTuple__PFPCc_Pd0___sepSetTuple__PF_Pd0PCc___fail__PFi_Pd0___flush__PFi_Pd0___open__PF_Pd0PCcPCc___close__PF_Pd0___write__PFPd0_Pd0PCcUl___fmt__PFi_Pd0PCc__FPd0_Pd0PCc__1(((_Bool (*)(void *__anonymous_object1610))__sepPrt__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1611))__sepReset__F_P9sofstream__1), ((void (*)(void *__anonymous_object1612, _Bool __anonymous_object1613))__sepReset__F_P9sofstreamb__1), ((const char *(*)(void *__anonymous_object1614))__sepGetCur__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1615, const char *__anonymous_object1616))__sepSetCur__F_P9sofstreamPCc__1), ((_Bool (*)(void *__anonymous_object1617))__getNL__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1618, _Bool __anonymous_object1619))__setNL__F_P9sofstreamb__1), ((void (*)(void *__anonymous_object1620))__sepOn__F_P9sofstream__1), ((void (*)(void *__anonymous_object1621))__sepOff__F_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1622))__sepDisable__Fb_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1623))__sepEnable__Fb_P9sofstream__1), ((const char *(*)(void *__anonymous_object1624))__sepGet__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1625, const char *__anonymous_object1626))__sepSet__F_P9sofstreamPCc__1), ((const char *(*)(void *__anonymous_object1627))__sepGetTuple__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1628, const char *__anonymous_object1629))__sepSetTuple__F_P9sofstreamPCc__1), ((signed int (*)(void *__anonymous_object1630))__fail__Fi_P9sofstream__1), ((signed int (*)(void *__anonymous_object1631))__flush__Fi_P9sofstream__1), ((void (*)(void *__os__P7tostype_1, const char *__name__PCc_1, const char *__mode__PCc_1))__open__F_P9sofstreamPCcPCc__1), ((void (*)(void *__os__P7tostype_1))__close__F_P9sofstream__1), ((void *(*)(void *__anonymous_object1632, const char *__anonymous_object1633, unsigned long int __anonymous_object1634))__write__FP9sofstream_P9sofstreamPCcUl__1), ((signed 
int (*)(void *__anonymous_object1635, const char *__fmt__PCc_1, ...))__fmt__Fi_P9sofstreamPCc__1), __sout__P9sofstream_1, "unsigned char "))) , _tmp_cp_ret6), __v__Uc_1))) , _tmp_cp_ret7), ((void *(*)(void *__anonymous_object1636))(&_thunk2))))) , _tmp_cp_ret8));
     492        return __endl__A0_1_0_0___sepPrt__PFb_Pd0___sepReset__PF_Pd0___sepReset__PF_Pd0b___sepGetCur__PFPCc_Pd0___sepSetCur__PF_Pd0PCc___getNL__PFb_Pd0___setNL__PF_Pd0b___sepOn__PF_Pd0___sepOff__PF_Pd0___sepDisable__PFb_Pd0___sepEnable__PFb_Pd0___sepGet__PFPCc_Pd0___sepSet__PF_Pd0PCc___sepGetTuple__PFPCc_Pd0___sepSetTuple__PF_Pd0PCc___fail__PFi_Pd0___flush__PFi_Pd0___open__PF_Pd0PCcPCc___close__PF_Pd0___write__PFPd0_Pd0PCcUl___fmt__PFi_Pd0PCc__FPd0_Pd0__1(((_Bool (*)(void *__anonymous_object1532))__sepPrt__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1533))__sepReset__F_P9sofstream__1), ((void (*)(void *__anonymous_object1534, _Bool __anonymous_object1535))__sepReset__F_P9sofstreamb__1), ((const char *(*)(void *__anonymous_object1536))__sepGetCur__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1537, const char *__anonymous_object1538))__sepSetCur__F_P9sofstreamPCc__1), ((_Bool (*)(void *__anonymous_object1539))__getNL__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1540, _Bool __anonymous_object1541))__setNL__F_P9sofstreamb__1), ((void (*)(void *__anonymous_object1542))__sepOn__F_P9sofstream__1), ((void (*)(void *__anonymous_object1543))__sepOff__F_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1544))__sepDisable__Fb_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1545))__sepEnable__Fb_P9sofstream__1), ((const char *(*)(void *__anonymous_object1546))__sepGet__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1547, const char *__anonymous_object1548))__sepSet__F_P9sofstreamPCc__1), ((const char *(*)(void *__anonymous_object1549))__sepGetTuple__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1550, const char *__anonymous_object1551))__sepSetTuple__F_P9sofstreamPCc__1), ((signed int (*)(void *__anonymous_object1552))__fail__Fi_P9sofstream__1), ((signed int (*)(void *__anonymous_object1553))__flush__Fi_P9sofstream__1), ((void (*)(void *__os__P7tostype_1, const char *__name__PCc_1, const char *__mode__PCc_1))__open__F_P9sofstreamPCcPCc__1), ((void (*)(void *__os__P7tostype_1))__close__F_P9sofstream__1), ((void *(*)(void *__anonymous_object1554, const char *__anonymous_object1555, unsigned long int __anonymous_object1556))__write__FP9sofstream_P9sofstreamPCcUl__1), ((signed int (*)(void *__anonymous_object1557, const char *__fmt__PCc_1, ...))__fmt__Fi_P9sofstreamPCc__1), ((void *)_p0));
     493    }
     494    ((void)(((void)(_tmp_cp_ret8=___operator_bitor__A0_1_0_0___sepPrt__PFb_Pd0___sepReset__PF_Pd0___sepReset__PF_Pd0b___sepGetCur__PFPCc_Pd0___sepSetCur__PF_Pd0PCc___getNL__PFb_Pd0___setNL__PF_Pd0b___sepOn__PF_Pd0___sepOff__PF_Pd0___sepDisable__PFb_Pd0___sepEnable__PFb_Pd0___sepGet__PFPCc_Pd0___sepSet__PF_Pd0PCc___sepGetTuple__PFPCc_Pd0___sepSetTuple__PF_Pd0PCc___fail__PFi_Pd0___flush__PFi_Pd0___open__PF_Pd0PCcPCc___close__PF_Pd0___write__PFPd0_Pd0PCcUl___fmt__PFi_Pd0PCc__FPd0_Pd0PFPd0_Pd0___1(((_Bool (*)(void *__anonymous_object1558))__sepPrt__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1559))__sepReset__F_P9sofstream__1), ((void (*)(void *__anonymous_object1560, _Bool __anonymous_object1561))__sepReset__F_P9sofstreamb__1), ((const char *(*)(void *__anonymous_object1562))__sepGetCur__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1563, const char *__anonymous_object1564))__sepSetCur__F_P9sofstreamPCc__1), ((_Bool (*)(void *__anonymous_object1565))__getNL__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1566, _Bool __anonymous_object1567))__setNL__F_P9sofstreamb__1), ((void (*)(void *__anonymous_object1568))__sepOn__F_P9sofstream__1), ((void (*)(void *__anonymous_object1569))__sepOff__F_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1570))__sepDisable__Fb_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1571))__sepEnable__Fb_P9sofstream__1), ((const char *(*)(void *__anonymous_object1572))__sepGet__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1573, const char *__anonymous_object1574))__sepSet__F_P9sofstreamPCc__1), ((const char *(*)(void *__anonymous_object1575))__sepGetTuple__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1576, const char *__anonymous_object1577))__sepSetTuple__F_P9sofstreamPCc__1), ((signed int (*)(void *__anonymous_object1578))__fail__Fi_P9sofstream__1), ((signed int (*)(void *__anonymous_object1579))__flush__Fi_P9sofstream__1), ((void (*)(void *__os__P7tostype_1, const char *__name__PCc_1, const char *__mode__PCc_1))__open__F_P9sofstreamPCcPCc__1), ((void (*)(void *__os__P7tostype_1))__close__F_P9sofstream__1), ((void *(*)(void *__anonymous_object1580, const char *__anonymous_object1581, unsigned long int __anonymous_object1582))__write__FP9sofstream_P9sofstreamPCcUl__1), ((signed int (*)(void *__anonymous_object1583, const char *__fmt__PCc_1, ...))__fmt__Fi_P9sofstreamPCc__1), ((void *)(((void)(_tmp_cp_ret7=___operator_bitor__A0_1_0_0___sepPrt__PFb_Pd0___sepReset__PF_Pd0___sepReset__PF_Pd0b___sepGetCur__PFPCc_Pd0___sepSetCur__PF_Pd0PCc___getNL__PFb_Pd0___setNL__PF_Pd0b___sepOn__PF_Pd0___sepOff__PF_Pd0___sepDisable__PFb_Pd0___sepEnable__PFb_Pd0___sepGet__PFPCc_Pd0___sepSet__PF_Pd0PCc___sepGetTuple__PFPCc_Pd0___sepSetTuple__PF_Pd0PCc___fail__PFi_Pd0___flush__PFi_Pd0___open__PF_Pd0PCcPCc___close__PF_Pd0___write__PFPd0_Pd0PCcUl___fmt__PFi_Pd0PCc__FPd0_Pd0Uc__1(((_Bool (*)(void *__anonymous_object1584))__sepPrt__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1585))__sepReset__F_P9sofstream__1), ((void (*)(void *__anonymous_object1586, _Bool __anonymous_object1587))__sepReset__F_P9sofstreamb__1), ((const char *(*)(void *__anonymous_object1588))__sepGetCur__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1589, const char *__anonymous_object1590))__sepSetCur__F_P9sofstreamPCc__1), ((_Bool (*)(void *__anonymous_object1591))__getNL__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1592, _Bool __anonymous_object1593))__setNL__F_P9sofstreamb__1), ((void (*)(void 
*__anonymous_object1594))__sepOn__F_P9sofstream__1), ((void (*)(void *__anonymous_object1595))__sepOff__F_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1596))__sepDisable__Fb_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1597))__sepEnable__Fb_P9sofstream__1), ((const char *(*)(void *__anonymous_object1598))__sepGet__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1599, const char *__anonymous_object1600))__sepSet__F_P9sofstreamPCc__1), ((const char *(*)(void *__anonymous_object1601))__sepGetTuple__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1602, const char *__anonymous_object1603))__sepSetTuple__F_P9sofstreamPCc__1), ((signed int (*)(void *__anonymous_object1604))__fail__Fi_P9sofstream__1), ((signed int (*)(void *__anonymous_object1605))__flush__Fi_P9sofstream__1), ((void (*)(void *__os__P7tostype_1, const char *__name__PCc_1, const char *__mode__PCc_1))__open__F_P9sofstreamPCcPCc__1), ((void (*)(void *__os__P7tostype_1))__close__F_P9sofstream__1), ((void *(*)(void *__anonymous_object1606, const char *__anonymous_object1607, unsigned long int __anonymous_object1608))__write__FP9sofstream_P9sofstreamPCcUl__1), ((signed int (*)(void *__anonymous_object1609, const char *__fmt__PCc_1, ...))__fmt__Fi_P9sofstreamPCc__1), ((void *)(((void)(_tmp_cp_ret6=___operator_bitor__A0_1_0_0___sepPrt__PFb_Pd0___sepReset__PF_Pd0___sepReset__PF_Pd0b___sepGetCur__PFPCc_Pd0___sepSetCur__PF_Pd0PCc___getNL__PFb_Pd0___setNL__PF_Pd0b___sepOn__PF_Pd0___sepOff__PF_Pd0___sepDisable__PFb_Pd0___sepEnable__PFb_Pd0___sepGet__PFPCc_Pd0___sepSet__PF_Pd0PCc___sepGetTuple__PFPCc_Pd0___sepSetTuple__PF_Pd0PCc___fail__PFi_Pd0___flush__PFi_Pd0___open__PF_Pd0PCcPCc___close__PF_Pd0___write__PFPd0_Pd0PCcUl___fmt__PFi_Pd0PCc__FPd0_Pd0PCc__1(((_Bool (*)(void *__anonymous_object1610))__sepPrt__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1611))__sepReset__F_P9sofstream__1), ((void (*)(void *__anonymous_object1612, _Bool __anonymous_object1613))__sepReset__F_P9sofstreamb__1), ((const char *(*)(void *__anonymous_object1614))__sepGetCur__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1615, const char *__anonymous_object1616))__sepSetCur__F_P9sofstreamPCc__1), ((_Bool (*)(void *__anonymous_object1617))__getNL__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1618, _Bool __anonymous_object1619))__setNL__F_P9sofstreamb__1), ((void (*)(void *__anonymous_object1620))__sepOn__F_P9sofstream__1), ((void (*)(void *__anonymous_object1621))__sepOff__F_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1622))__sepDisable__Fb_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1623))__sepEnable__Fb_P9sofstream__1), ((const char *(*)(void *__anonymous_object1624))__sepGet__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1625, const char *__anonymous_object1626))__sepSet__F_P9sofstreamPCc__1), ((const char *(*)(void *__anonymous_object1627))__sepGetTuple__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1628, const char *__anonymous_object1629))__sepSetTuple__F_P9sofstreamPCc__1), ((signed int (*)(void *__anonymous_object1630))__fail__Fi_P9sofstream__1), ((signed int (*)(void *__anonymous_object1631))__flush__Fi_P9sofstream__1), ((void (*)(void *__os__P7tostype_1, const char *__name__PCc_1, const char *__mode__PCc_1))__open__F_P9sofstreamPCcPCc__1), ((void (*)(void *__os__P7tostype_1))__close__F_P9sofstream__1), ((void *(*)(void *__anonymous_object1632, const char *__anonymous_object1633, unsigned long int __anonymous_object1634))__write__FP9sofstream_P9sofstreamPCcUl__1), 
((signed int (*)(void *__anonymous_object1635, const char *__fmt__PCc_1, ...))__fmt__Fi_P9sofstreamPCc__1), ((void *)__sout__P9sofstream_1), "unsigned char "))) , _tmp_cp_ret6)), __v__Uc_1))) , _tmp_cp_ret7)), ((void *(*)(void *__anonymous_object1636))(&_thunk2))))) , _tmp_cp_ret8));
    495495    ((void)(_tmp_cp_ret6) /* ^?{} */);
    496496    ((void)(_tmp_cp_ret7) /* ^?{} */);
     
    502502    struct ofstream *_tmp_cp_ret11;
    503503    __attribute__ ((unused)) struct ofstream *_thunk3(struct ofstream *_p0){
    504         return __endl__A0_1_0_0___sepPrt__PFb_Pd0___sepReset__PF_Pd0___sepReset__PF_Pd0b___sepGetCur__PFPCc_Pd0___sepSetCur__PF_Pd0PCc___getNL__PFb_Pd0___setNL__PF_Pd0b___sepOn__PF_Pd0___sepOff__PF_Pd0___sepDisable__PFb_Pd0___sepEnable__PFb_Pd0___sepGet__PFPCc_Pd0___sepSet__PF_Pd0PCc___sepGetTuple__PFPCc_Pd0___sepSetTuple__PF_Pd0PCc___fail__PFi_Pd0___flush__PFi_Pd0___open__PF_Pd0PCcPCc___close__PF_Pd0___write__PFPd0_Pd0PCcUl___fmt__PFi_Pd0PCc__FPd0_Pd0__1(((_Bool (*)(void *__anonymous_object1637))__sepPrt__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1638))__sepReset__F_P9sofstream__1), ((void (*)(void *__anonymous_object1639, _Bool __anonymous_object1640))__sepReset__F_P9sofstreamb__1), ((const char *(*)(void *__anonymous_object1641))__sepGetCur__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1642, const char *__anonymous_object1643))__sepSetCur__F_P9sofstreamPCc__1), ((_Bool (*)(void *__anonymous_object1644))__getNL__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1645, _Bool __anonymous_object1646))__setNL__F_P9sofstreamb__1), ((void (*)(void *__anonymous_object1647))__sepOn__F_P9sofstream__1), ((void (*)(void *__anonymous_object1648))__sepOff__F_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1649))__sepDisable__Fb_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1650))__sepEnable__Fb_P9sofstream__1), ((const char *(*)(void *__anonymous_object1651))__sepGet__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1652, const char *__anonymous_object1653))__sepSet__F_P9sofstreamPCc__1), ((const char *(*)(void *__anonymous_object1654))__sepGetTuple__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1655, const char *__anonymous_object1656))__sepSetTuple__F_P9sofstreamPCc__1), ((signed int (*)(void *__anonymous_object1657))__fail__Fi_P9sofstream__1), ((signed int (*)(void *__anonymous_object1658))__flush__Fi_P9sofstream__1), ((void (*)(void *__os__P7tostype_1, const char *__name__PCc_1, const char *__mode__PCc_1))__open__F_P9sofstreamPCcPCc__1), ((void (*)(void *__os__P7tostype_1))__close__F_P9sofstream__1), ((void *(*)(void *__anonymous_object1659, const char *__anonymous_object1660, unsigned long int __anonymous_object1661))__write__FP9sofstream_P9sofstreamPCcUl__1), ((signed int (*)(void *__anonymous_object1662, const char *__fmt__PCc_1, ...))__fmt__Fi_P9sofstreamPCc__1), _p0);
    505     }
    506     ((void)(((void)(_tmp_cp_ret11=___operator_bitor__A0_1_0_0___sepPrt__PFb_Pd0___sepReset__PF_Pd0___sepReset__PF_Pd0b___sepGetCur__PFPCc_Pd0___sepSetCur__PF_Pd0PCc___getNL__PFb_Pd0___setNL__PF_Pd0b___sepOn__PF_Pd0___sepOff__PF_Pd0___sepDisable__PFb_Pd0___sepEnable__PFb_Pd0___sepGet__PFPCc_Pd0___sepSet__PF_Pd0PCc___sepGetTuple__PFPCc_Pd0___sepSetTuple__PF_Pd0PCc___fail__PFi_Pd0___flush__PFi_Pd0___open__PF_Pd0PCcPCc___close__PF_Pd0___write__PFPd0_Pd0PCcUl___fmt__PFi_Pd0PCc__FPd0_Pd0PFPd0_Pd0___1(((_Bool (*)(void *__anonymous_object1663))__sepPrt__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1664))__sepReset__F_P9sofstream__1), ((void (*)(void *__anonymous_object1665, _Bool __anonymous_object1666))__sepReset__F_P9sofstreamb__1), ((const char *(*)(void *__anonymous_object1667))__sepGetCur__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1668, const char *__anonymous_object1669))__sepSetCur__F_P9sofstreamPCc__1), ((_Bool (*)(void *__anonymous_object1670))__getNL__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1671, _Bool __anonymous_object1672))__setNL__F_P9sofstreamb__1), ((void (*)(void *__anonymous_object1673))__sepOn__F_P9sofstream__1), ((void (*)(void *__anonymous_object1674))__sepOff__F_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1675))__sepDisable__Fb_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1676))__sepEnable__Fb_P9sofstream__1), ((const char *(*)(void *__anonymous_object1677))__sepGet__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1678, const char *__anonymous_object1679))__sepSet__F_P9sofstreamPCc__1), ((const char *(*)(void *__anonymous_object1680))__sepGetTuple__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1681, const char *__anonymous_object1682))__sepSetTuple__F_P9sofstreamPCc__1), ((signed int (*)(void *__anonymous_object1683))__fail__Fi_P9sofstream__1), ((signed int (*)(void *__anonymous_object1684))__flush__Fi_P9sofstream__1), ((void (*)(void *__os__P7tostype_1, const char *__name__PCc_1, const char *__mode__PCc_1))__open__F_P9sofstreamPCcPCc__1), ((void (*)(void *__os__P7tostype_1))__close__F_P9sofstream__1), ((void *(*)(void *__anonymous_object1685, const char *__anonymous_object1686, unsigned long int __anonymous_object1687))__write__FP9sofstream_P9sofstreamPCcUl__1), ((signed int (*)(void *__anonymous_object1688, const char *__fmt__PCc_1, ...))__fmt__Fi_P9sofstreamPCc__1), (((void)(_tmp_cp_ret10=___operator_bitor__A0_1_0_0___sepPrt__PFb_Pd0___sepReset__PF_Pd0___sepReset__PF_Pd0b___sepGetCur__PFPCc_Pd0___sepSetCur__PF_Pd0PCc___getNL__PFb_Pd0___setNL__PF_Pd0b___sepOn__PF_Pd0___sepOff__PF_Pd0___sepDisable__PFb_Pd0___sepEnable__PFb_Pd0___sepGet__PFPCc_Pd0___sepSet__PF_Pd0PCc___sepGetTuple__PFPCc_Pd0___sepSetTuple__PF_Pd0PCc___fail__PFi_Pd0___flush__PFi_Pd0___open__PF_Pd0PCcPCc___close__PF_Pd0___write__PFPd0_Pd0PCcUl___fmt__PFi_Pd0PCc__FPd0_Pd0s__1(((_Bool (*)(void *__anonymous_object1689))__sepPrt__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1690))__sepReset__F_P9sofstream__1), ((void (*)(void *__anonymous_object1691, _Bool __anonymous_object1692))__sepReset__F_P9sofstreamb__1), ((const char *(*)(void *__anonymous_object1693))__sepGetCur__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1694, const char *__anonymous_object1695))__sepSetCur__F_P9sofstreamPCc__1), ((_Bool (*)(void *__anonymous_object1696))__getNL__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1697, _Bool __anonymous_object1698))__setNL__F_P9sofstreamb__1), ((void (*)(void 
*__anonymous_object1699))__sepOn__F_P9sofstream__1), ((void (*)(void *__anonymous_object1700))__sepOff__F_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1701))__sepDisable__Fb_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1702))__sepEnable__Fb_P9sofstream__1), ((const char *(*)(void *__anonymous_object1703))__sepGet__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1704, const char *__anonymous_object1705))__sepSet__F_P9sofstreamPCc__1), ((const char *(*)(void *__anonymous_object1706))__sepGetTuple__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1707, const char *__anonymous_object1708))__sepSetTuple__F_P9sofstreamPCc__1), ((signed int (*)(void *__anonymous_object1709))__fail__Fi_P9sofstream__1), ((signed int (*)(void *__anonymous_object1710))__flush__Fi_P9sofstream__1), ((void (*)(void *__os__P7tostype_1, const char *__name__PCc_1, const char *__mode__PCc_1))__open__F_P9sofstreamPCcPCc__1), ((void (*)(void *__os__P7tostype_1))__close__F_P9sofstream__1), ((void *(*)(void *__anonymous_object1711, const char *__anonymous_object1712, unsigned long int __anonymous_object1713))__write__FP9sofstream_P9sofstreamPCcUl__1), ((signed int (*)(void *__anonymous_object1714, const char *__fmt__PCc_1, ...))__fmt__Fi_P9sofstreamPCc__1), (((void)(_tmp_cp_ret9=___operator_bitor__A0_1_0_0___sepPrt__PFb_Pd0___sepReset__PF_Pd0___sepReset__PF_Pd0b___sepGetCur__PFPCc_Pd0___sepSetCur__PF_Pd0PCc___getNL__PFb_Pd0___setNL__PF_Pd0b___sepOn__PF_Pd0___sepOff__PF_Pd0___sepDisable__PFb_Pd0___sepEnable__PFb_Pd0___sepGet__PFPCc_Pd0___sepSet__PF_Pd0PCc___sepGetTuple__PFPCc_Pd0___sepSetTuple__PF_Pd0PCc___fail__PFi_Pd0___flush__PFi_Pd0___open__PF_Pd0PCcPCc___close__PF_Pd0___write__PFPd0_Pd0PCcUl___fmt__PFi_Pd0PCc__FPd0_Pd0PCc__1(((_Bool (*)(void *__anonymous_object1715))__sepPrt__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1716))__sepReset__F_P9sofstream__1), ((void (*)(void *__anonymous_object1717, _Bool __anonymous_object1718))__sepReset__F_P9sofstreamb__1), ((const char *(*)(void *__anonymous_object1719))__sepGetCur__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1720, const char *__anonymous_object1721))__sepSetCur__F_P9sofstreamPCc__1), ((_Bool (*)(void *__anonymous_object1722))__getNL__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1723, _Bool __anonymous_object1724))__setNL__F_P9sofstreamb__1), ((void (*)(void *__anonymous_object1725))__sepOn__F_P9sofstream__1), ((void (*)(void *__anonymous_object1726))__sepOff__F_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1727))__sepDisable__Fb_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1728))__sepEnable__Fb_P9sofstream__1), ((const char *(*)(void *__anonymous_object1729))__sepGet__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1730, const char *__anonymous_object1731))__sepSet__F_P9sofstreamPCc__1), ((const char *(*)(void *__anonymous_object1732))__sepGetTuple__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1733, const char *__anonymous_object1734))__sepSetTuple__F_P9sofstreamPCc__1), ((signed int (*)(void *__anonymous_object1735))__fail__Fi_P9sofstream__1), ((signed int (*)(void *__anonymous_object1736))__flush__Fi_P9sofstream__1), ((void (*)(void *__os__P7tostype_1, const char *__name__PCc_1, const char *__mode__PCc_1))__open__F_P9sofstreamPCcPCc__1), ((void (*)(void *__os__P7tostype_1))__close__F_P9sofstream__1), ((void *(*)(void *__anonymous_object1737, const char *__anonymous_object1738, unsigned long int __anonymous_object1739))__write__FP9sofstream_P9sofstreamPCcUl__1), ((signed 
int (*)(void *__anonymous_object1740, const char *__fmt__PCc_1, ...))__fmt__Fi_P9sofstreamPCc__1), __sout__P9sofstream_1, "signed short int"))) , _tmp_cp_ret9), __v__s_1))) , _tmp_cp_ret10), ((void *(*)(void *__anonymous_object1741))(&_thunk3))))) , _tmp_cp_ret11));
     504        return __endl__A0_1_0_0___sepPrt__PFb_Pd0___sepReset__PF_Pd0___sepReset__PF_Pd0b___sepGetCur__PFPCc_Pd0___sepSetCur__PF_Pd0PCc___getNL__PFb_Pd0___setNL__PF_Pd0b___sepOn__PF_Pd0___sepOff__PF_Pd0___sepDisable__PFb_Pd0___sepEnable__PFb_Pd0___sepGet__PFPCc_Pd0___sepSet__PF_Pd0PCc___sepGetTuple__PFPCc_Pd0___sepSetTuple__PF_Pd0PCc___fail__PFi_Pd0___flush__PFi_Pd0___open__PF_Pd0PCcPCc___close__PF_Pd0___write__PFPd0_Pd0PCcUl___fmt__PFi_Pd0PCc__FPd0_Pd0__1(((_Bool (*)(void *__anonymous_object1637))__sepPrt__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1638))__sepReset__F_P9sofstream__1), ((void (*)(void *__anonymous_object1639, _Bool __anonymous_object1640))__sepReset__F_P9sofstreamb__1), ((const char *(*)(void *__anonymous_object1641))__sepGetCur__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1642, const char *__anonymous_object1643))__sepSetCur__F_P9sofstreamPCc__1), ((_Bool (*)(void *__anonymous_object1644))__getNL__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1645, _Bool __anonymous_object1646))__setNL__F_P9sofstreamb__1), ((void (*)(void *__anonymous_object1647))__sepOn__F_P9sofstream__1), ((void (*)(void *__anonymous_object1648))__sepOff__F_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1649))__sepDisable__Fb_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1650))__sepEnable__Fb_P9sofstream__1), ((const char *(*)(void *__anonymous_object1651))__sepGet__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1652, const char *__anonymous_object1653))__sepSet__F_P9sofstreamPCc__1), ((const char *(*)(void *__anonymous_object1654))__sepGetTuple__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1655, const char *__anonymous_object1656))__sepSetTuple__F_P9sofstreamPCc__1), ((signed int (*)(void *__anonymous_object1657))__fail__Fi_P9sofstream__1), ((signed int (*)(void *__anonymous_object1658))__flush__Fi_P9sofstream__1), ((void (*)(void *__os__P7tostype_1, const char *__name__PCc_1, const char *__mode__PCc_1))__open__F_P9sofstreamPCcPCc__1), ((void (*)(void *__os__P7tostype_1))__close__F_P9sofstream__1), ((void *(*)(void *__anonymous_object1659, const char *__anonymous_object1660, unsigned long int __anonymous_object1661))__write__FP9sofstream_P9sofstreamPCcUl__1), ((signed int (*)(void *__anonymous_object1662, const char *__fmt__PCc_1, ...))__fmt__Fi_P9sofstreamPCc__1), ((void *)_p0));
     505    }
     506    ((void)(((void)(_tmp_cp_ret11=___operator_bitor__A0_1_0_0___sepPrt__PFb_Pd0___sepReset__PF_Pd0___sepReset__PF_Pd0b___sepGetCur__PFPCc_Pd0___sepSetCur__PF_Pd0PCc___getNL__PFb_Pd0___setNL__PF_Pd0b___sepOn__PF_Pd0___sepOff__PF_Pd0___sepDisable__PFb_Pd0___sepEnable__PFb_Pd0___sepGet__PFPCc_Pd0___sepSet__PF_Pd0PCc___sepGetTuple__PFPCc_Pd0___sepSetTuple__PF_Pd0PCc___fail__PFi_Pd0___flush__PFi_Pd0___open__PF_Pd0PCcPCc___close__PF_Pd0___write__PFPd0_Pd0PCcUl___fmt__PFi_Pd0PCc__FPd0_Pd0PFPd0_Pd0___1(((_Bool (*)(void *__anonymous_object1663))__sepPrt__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1664))__sepReset__F_P9sofstream__1), ((void (*)(void *__anonymous_object1665, _Bool __anonymous_object1666))__sepReset__F_P9sofstreamb__1), ((const char *(*)(void *__anonymous_object1667))__sepGetCur__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1668, const char *__anonymous_object1669))__sepSetCur__F_P9sofstreamPCc__1), ((_Bool (*)(void *__anonymous_object1670))__getNL__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1671, _Bool __anonymous_object1672))__setNL__F_P9sofstreamb__1), ((void (*)(void *__anonymous_object1673))__sepOn__F_P9sofstream__1), ((void (*)(void *__anonymous_object1674))__sepOff__F_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1675))__sepDisable__Fb_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1676))__sepEnable__Fb_P9sofstream__1), ((const char *(*)(void *__anonymous_object1677))__sepGet__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1678, const char *__anonymous_object1679))__sepSet__F_P9sofstreamPCc__1), ((const char *(*)(void *__anonymous_object1680))__sepGetTuple__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1681, const char *__anonymous_object1682))__sepSetTuple__F_P9sofstreamPCc__1), ((signed int (*)(void *__anonymous_object1683))__fail__Fi_P9sofstream__1), ((signed int (*)(void *__anonymous_object1684))__flush__Fi_P9sofstream__1), ((void (*)(void *__os__P7tostype_1, const char *__name__PCc_1, const char *__mode__PCc_1))__open__F_P9sofstreamPCcPCc__1), ((void (*)(void *__os__P7tostype_1))__close__F_P9sofstream__1), ((void *(*)(void *__anonymous_object1685, const char *__anonymous_object1686, unsigned long int __anonymous_object1687))__write__FP9sofstream_P9sofstreamPCcUl__1), ((signed int (*)(void *__anonymous_object1688, const char *__fmt__PCc_1, ...))__fmt__Fi_P9sofstreamPCc__1), ((void *)(((void)(_tmp_cp_ret10=___operator_bitor__A0_1_0_0___sepPrt__PFb_Pd0___sepReset__PF_Pd0___sepReset__PF_Pd0b___sepGetCur__PFPCc_Pd0___sepSetCur__PF_Pd0PCc___getNL__PFb_Pd0___setNL__PF_Pd0b___sepOn__PF_Pd0___sepOff__PF_Pd0___sepDisable__PFb_Pd0___sepEnable__PFb_Pd0___sepGet__PFPCc_Pd0___sepSet__PF_Pd0PCc___sepGetTuple__PFPCc_Pd0___sepSetTuple__PF_Pd0PCc___fail__PFi_Pd0___flush__PFi_Pd0___open__PF_Pd0PCcPCc___close__PF_Pd0___write__PFPd0_Pd0PCcUl___fmt__PFi_Pd0PCc__FPd0_Pd0s__1(((_Bool (*)(void *__anonymous_object1689))__sepPrt__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1690))__sepReset__F_P9sofstream__1), ((void (*)(void *__anonymous_object1691, _Bool __anonymous_object1692))__sepReset__F_P9sofstreamb__1), ((const char *(*)(void *__anonymous_object1693))__sepGetCur__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1694, const char *__anonymous_object1695))__sepSetCur__F_P9sofstreamPCc__1), ((_Bool (*)(void *__anonymous_object1696))__getNL__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1697, _Bool __anonymous_object1698))__setNL__F_P9sofstreamb__1), ((void (*)(void 
*__anonymous_object1699))__sepOn__F_P9sofstream__1), ((void (*)(void *__anonymous_object1700))__sepOff__F_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1701))__sepDisable__Fb_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1702))__sepEnable__Fb_P9sofstream__1), ((const char *(*)(void *__anonymous_object1703))__sepGet__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1704, const char *__anonymous_object1705))__sepSet__F_P9sofstreamPCc__1), ((const char *(*)(void *__anonymous_object1706))__sepGetTuple__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1707, const char *__anonymous_object1708))__sepSetTuple__F_P9sofstreamPCc__1), ((signed int (*)(void *__anonymous_object1709))__fail__Fi_P9sofstream__1), ((signed int (*)(void *__anonymous_object1710))__flush__Fi_P9sofstream__1), ((void (*)(void *__os__P7tostype_1, const char *__name__PCc_1, const char *__mode__PCc_1))__open__F_P9sofstreamPCcPCc__1), ((void (*)(void *__os__P7tostype_1))__close__F_P9sofstream__1), ((void *(*)(void *__anonymous_object1711, const char *__anonymous_object1712, unsigned long int __anonymous_object1713))__write__FP9sofstream_P9sofstreamPCcUl__1), ((signed int (*)(void *__anonymous_object1714, const char *__fmt__PCc_1, ...))__fmt__Fi_P9sofstreamPCc__1), ((void *)(((void)(_tmp_cp_ret9=___operator_bitor__A0_1_0_0___sepPrt__PFb_Pd0___sepReset__PF_Pd0___sepReset__PF_Pd0b___sepGetCur__PFPCc_Pd0___sepSetCur__PF_Pd0PCc___getNL__PFb_Pd0___setNL__PF_Pd0b___sepOn__PF_Pd0___sepOff__PF_Pd0___sepDisable__PFb_Pd0___sepEnable__PFb_Pd0___sepGet__PFPCc_Pd0___sepSet__PF_Pd0PCc___sepGetTuple__PFPCc_Pd0___sepSetTuple__PF_Pd0PCc___fail__PFi_Pd0___flush__PFi_Pd0___open__PF_Pd0PCcPCc___close__PF_Pd0___write__PFPd0_Pd0PCcUl___fmt__PFi_Pd0PCc__FPd0_Pd0PCc__1(((_Bool (*)(void *__anonymous_object1715))__sepPrt__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1716))__sepReset__F_P9sofstream__1), ((void (*)(void *__anonymous_object1717, _Bool __anonymous_object1718))__sepReset__F_P9sofstreamb__1), ((const char *(*)(void *__anonymous_object1719))__sepGetCur__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1720, const char *__anonymous_object1721))__sepSetCur__F_P9sofstreamPCc__1), ((_Bool (*)(void *__anonymous_object1722))__getNL__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1723, _Bool __anonymous_object1724))__setNL__F_P9sofstreamb__1), ((void (*)(void *__anonymous_object1725))__sepOn__F_P9sofstream__1), ((void (*)(void *__anonymous_object1726))__sepOff__F_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1727))__sepDisable__Fb_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1728))__sepEnable__Fb_P9sofstream__1), ((const char *(*)(void *__anonymous_object1729))__sepGet__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1730, const char *__anonymous_object1731))__sepSet__F_P9sofstreamPCc__1), ((const char *(*)(void *__anonymous_object1732))__sepGetTuple__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1733, const char *__anonymous_object1734))__sepSetTuple__F_P9sofstreamPCc__1), ((signed int (*)(void *__anonymous_object1735))__fail__Fi_P9sofstream__1), ((signed int (*)(void *__anonymous_object1736))__flush__Fi_P9sofstream__1), ((void (*)(void *__os__P7tostype_1, const char *__name__PCc_1, const char *__mode__PCc_1))__open__F_P9sofstreamPCcPCc__1), ((void (*)(void *__os__P7tostype_1))__close__F_P9sofstream__1), ((void *(*)(void *__anonymous_object1737, const char *__anonymous_object1738, unsigned long int __anonymous_object1739))__write__FP9sofstream_P9sofstreamPCcUl__1), 
((signed int (*)(void *__anonymous_object1740, const char *__fmt__PCc_1, ...))__fmt__Fi_P9sofstreamPCc__1), ((void *)__sout__P9sofstream_1), "signed short int"))) , _tmp_cp_ret9)), __v__s_1))) , _tmp_cp_ret10)), ((void *(*)(void *__anonymous_object1741))(&_thunk3))))) , _tmp_cp_ret11));
    507507    ((void)(_tmp_cp_ret9) /* ^?{} */);
    508508    ((void)(_tmp_cp_ret10) /* ^?{} */);
     
    514514    struct ofstream *_tmp_cp_ret14;
    515515    __attribute__ ((unused)) struct ofstream *_thunk4(struct ofstream *_p0){
    516         return __endl__A0_1_0_0___sepPrt__PFb_Pd0___sepReset__PF_Pd0___sepReset__PF_Pd0b___sepGetCur__PFPCc_Pd0___sepSetCur__PF_Pd0PCc___getNL__PFb_Pd0___setNL__PF_Pd0b___sepOn__PF_Pd0___sepOff__PF_Pd0___sepDisable__PFb_Pd0___sepEnable__PFb_Pd0___sepGet__PFPCc_Pd0___sepSet__PF_Pd0PCc___sepGetTuple__PFPCc_Pd0___sepSetTuple__PF_Pd0PCc___fail__PFi_Pd0___flush__PFi_Pd0___open__PF_Pd0PCcPCc___close__PF_Pd0___write__PFPd0_Pd0PCcUl___fmt__PFi_Pd0PCc__FPd0_Pd0__1(((_Bool (*)(void *__anonymous_object1742))__sepPrt__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1743))__sepReset__F_P9sofstream__1), ((void (*)(void *__anonymous_object1744, _Bool __anonymous_object1745))__sepReset__F_P9sofstreamb__1), ((const char *(*)(void *__anonymous_object1746))__sepGetCur__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1747, const char *__anonymous_object1748))__sepSetCur__F_P9sofstreamPCc__1), ((_Bool (*)(void *__anonymous_object1749))__getNL__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1750, _Bool __anonymous_object1751))__setNL__F_P9sofstreamb__1), ((void (*)(void *__anonymous_object1752))__sepOn__F_P9sofstream__1), ((void (*)(void *__anonymous_object1753))__sepOff__F_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1754))__sepDisable__Fb_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1755))__sepEnable__Fb_P9sofstream__1), ((const char *(*)(void *__anonymous_object1756))__sepGet__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1757, const char *__anonymous_object1758))__sepSet__F_P9sofstreamPCc__1), ((const char *(*)(void *__anonymous_object1759))__sepGetTuple__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1760, const char *__anonymous_object1761))__sepSetTuple__F_P9sofstreamPCc__1), ((signed int (*)(void *__anonymous_object1762))__fail__Fi_P9sofstream__1), ((signed int (*)(void *__anonymous_object1763))__flush__Fi_P9sofstream__1), ((void (*)(void *__os__P7tostype_1, const char *__name__PCc_1, const char *__mode__PCc_1))__open__F_P9sofstreamPCcPCc__1), ((void (*)(void *__os__P7tostype_1))__close__F_P9sofstream__1), ((void *(*)(void *__anonymous_object1764, const char *__anonymous_object1765, unsigned long int __anonymous_object1766))__write__FP9sofstream_P9sofstreamPCcUl__1), ((signed int (*)(void *__anonymous_object1767, const char *__fmt__PCc_1, ...))__fmt__Fi_P9sofstreamPCc__1), _p0);
    517     }
    518     ((void)(((void)(_tmp_cp_ret14=___operator_bitor__A0_1_0_0___sepPrt__PFb_Pd0___sepReset__PF_Pd0___sepReset__PF_Pd0b___sepGetCur__PFPCc_Pd0___sepSetCur__PF_Pd0PCc___getNL__PFb_Pd0___setNL__PF_Pd0b___sepOn__PF_Pd0___sepOff__PF_Pd0___sepDisable__PFb_Pd0___sepEnable__PFb_Pd0___sepGet__PFPCc_Pd0___sepSet__PF_Pd0PCc___sepGetTuple__PFPCc_Pd0___sepSetTuple__PF_Pd0PCc___fail__PFi_Pd0___flush__PFi_Pd0___open__PF_Pd0PCcPCc___close__PF_Pd0___write__PFPd0_Pd0PCcUl___fmt__PFi_Pd0PCc__FPd0_Pd0PFPd0_Pd0___1(((_Bool (*)(void *__anonymous_object1768))__sepPrt__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1769))__sepReset__F_P9sofstream__1), ((void (*)(void *__anonymous_object1770, _Bool __anonymous_object1771))__sepReset__F_P9sofstreamb__1), ((const char *(*)(void *__anonymous_object1772))__sepGetCur__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1773, const char *__anonymous_object1774))__sepSetCur__F_P9sofstreamPCc__1), ((_Bool (*)(void *__anonymous_object1775))__getNL__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1776, _Bool __anonymous_object1777))__setNL__F_P9sofstreamb__1), ((void (*)(void *__anonymous_object1778))__sepOn__F_P9sofstream__1), ((void (*)(void *__anonymous_object1779))__sepOff__F_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1780))__sepDisable__Fb_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1781))__sepEnable__Fb_P9sofstream__1), ((const char *(*)(void *__anonymous_object1782))__sepGet__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1783, const char *__anonymous_object1784))__sepSet__F_P9sofstreamPCc__1), ((const char *(*)(void *__anonymous_object1785))__sepGetTuple__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1786, const char *__anonymous_object1787))__sepSetTuple__F_P9sofstreamPCc__1), ((signed int (*)(void *__anonymous_object1788))__fail__Fi_P9sofstream__1), ((signed int (*)(void *__anonymous_object1789))__flush__Fi_P9sofstream__1), ((void (*)(void *__os__P7tostype_1, const char *__name__PCc_1, const char *__mode__PCc_1))__open__F_P9sofstreamPCcPCc__1), ((void (*)(void *__os__P7tostype_1))__close__F_P9sofstream__1), ((void *(*)(void *__anonymous_object1790, const char *__anonymous_object1791, unsigned long int __anonymous_object1792))__write__FP9sofstream_P9sofstreamPCcUl__1), ((signed int (*)(void *__anonymous_object1793, const char *__fmt__PCc_1, ...))__fmt__Fi_P9sofstreamPCc__1), (((void)(_tmp_cp_ret13=___operator_bitor__A0_1_0_0___sepPrt__PFb_Pd0___sepReset__PF_Pd0___sepReset__PF_Pd0b___sepGetCur__PFPCc_Pd0___sepSetCur__PF_Pd0PCc___getNL__PFb_Pd0___setNL__PF_Pd0b___sepOn__PF_Pd0___sepOff__PF_Pd0___sepDisable__PFb_Pd0___sepEnable__PFb_Pd0___sepGet__PFPCc_Pd0___sepSet__PF_Pd0PCc___sepGetTuple__PFPCc_Pd0___sepSetTuple__PF_Pd0PCc___fail__PFi_Pd0___flush__PFi_Pd0___open__PF_Pd0PCcPCc___close__PF_Pd0___write__PFPd0_Pd0PCcUl___fmt__PFi_Pd0PCc__FPd0_Pd0Us__1(((_Bool (*)(void *__anonymous_object1794))__sepPrt__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1795))__sepReset__F_P9sofstream__1), ((void (*)(void *__anonymous_object1796, _Bool __anonymous_object1797))__sepReset__F_P9sofstreamb__1), ((const char *(*)(void *__anonymous_object1798))__sepGetCur__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1799, const char *__anonymous_object1800))__sepSetCur__F_P9sofstreamPCc__1), ((_Bool (*)(void *__anonymous_object1801))__getNL__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1802, _Bool __anonymous_object1803))__setNL__F_P9sofstreamb__1), ((void (*)(void 
*__anonymous_object1804))__sepOn__F_P9sofstream__1), ((void (*)(void *__anonymous_object1805))__sepOff__F_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1806))__sepDisable__Fb_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1807))__sepEnable__Fb_P9sofstream__1), ((const char *(*)(void *__anonymous_object1808))__sepGet__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1809, const char *__anonymous_object1810))__sepSet__F_P9sofstreamPCc__1), ((const char *(*)(void *__anonymous_object1811))__sepGetTuple__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1812, const char *__anonymous_object1813))__sepSetTuple__F_P9sofstreamPCc__1), ((signed int (*)(void *__anonymous_object1814))__fail__Fi_P9sofstream__1), ((signed int (*)(void *__anonymous_object1815))__flush__Fi_P9sofstream__1), ((void (*)(void *__os__P7tostype_1, const char *__name__PCc_1, const char *__mode__PCc_1))__open__F_P9sofstreamPCcPCc__1), ((void (*)(void *__os__P7tostype_1))__close__F_P9sofstream__1), ((void *(*)(void *__anonymous_object1816, const char *__anonymous_object1817, unsigned long int __anonymous_object1818))__write__FP9sofstream_P9sofstreamPCcUl__1), ((signed int (*)(void *__anonymous_object1819, const char *__fmt__PCc_1, ...))__fmt__Fi_P9sofstreamPCc__1), (((void)(_tmp_cp_ret12=___operator_bitor__A0_1_0_0___sepPrt__PFb_Pd0___sepReset__PF_Pd0___sepReset__PF_Pd0b___sepGetCur__PFPCc_Pd0___sepSetCur__PF_Pd0PCc___getNL__PFb_Pd0___setNL__PF_Pd0b___sepOn__PF_Pd0___sepOff__PF_Pd0___sepDisable__PFb_Pd0___sepEnable__PFb_Pd0___sepGet__PFPCc_Pd0___sepSet__PF_Pd0PCc___sepGetTuple__PFPCc_Pd0___sepSetTuple__PF_Pd0PCc___fail__PFi_Pd0___flush__PFi_Pd0___open__PF_Pd0PCcPCc___close__PF_Pd0___write__PFPd0_Pd0PCcUl___fmt__PFi_Pd0PCc__FPd0_Pd0PCc__1(((_Bool (*)(void *__anonymous_object1820))__sepPrt__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1821))__sepReset__F_P9sofstream__1), ((void (*)(void *__anonymous_object1822, _Bool __anonymous_object1823))__sepReset__F_P9sofstreamb__1), ((const char *(*)(void *__anonymous_object1824))__sepGetCur__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1825, const char *__anonymous_object1826))__sepSetCur__F_P9sofstreamPCc__1), ((_Bool (*)(void *__anonymous_object1827))__getNL__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1828, _Bool __anonymous_object1829))__setNL__F_P9sofstreamb__1), ((void (*)(void *__anonymous_object1830))__sepOn__F_P9sofstream__1), ((void (*)(void *__anonymous_object1831))__sepOff__F_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1832))__sepDisable__Fb_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1833))__sepEnable__Fb_P9sofstream__1), ((const char *(*)(void *__anonymous_object1834))__sepGet__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1835, const char *__anonymous_object1836))__sepSet__F_P9sofstreamPCc__1), ((const char *(*)(void *__anonymous_object1837))__sepGetTuple__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1838, const char *__anonymous_object1839))__sepSetTuple__F_P9sofstreamPCc__1), ((signed int (*)(void *__anonymous_object1840))__fail__Fi_P9sofstream__1), ((signed int (*)(void *__anonymous_object1841))__flush__Fi_P9sofstream__1), ((void (*)(void *__os__P7tostype_1, const char *__name__PCc_1, const char *__mode__PCc_1))__open__F_P9sofstreamPCcPCc__1), ((void (*)(void *__os__P7tostype_1))__close__F_P9sofstream__1), ((void *(*)(void *__anonymous_object1842, const char *__anonymous_object1843, unsigned long int __anonymous_object1844))__write__FP9sofstream_P9sofstreamPCcUl__1), 
((signed int (*)(void *__anonymous_object1845, const char *__fmt__PCc_1, ...))__fmt__Fi_P9sofstreamPCc__1), __sout__P9sofstream_1, "unsigned short int"))) , _tmp_cp_ret12), __v__Us_1))) , _tmp_cp_ret13), ((void *(*)(void *__anonymous_object1846))(&_thunk4))))) , _tmp_cp_ret14));
     516        return __endl__A0_1_0_0___sepPrt__PFb_Pd0___sepReset__PF_Pd0___sepReset__PF_Pd0b___sepGetCur__PFPCc_Pd0___sepSetCur__PF_Pd0PCc___getNL__PFb_Pd0___setNL__PF_Pd0b___sepOn__PF_Pd0___sepOff__PF_Pd0___sepDisable__PFb_Pd0___sepEnable__PFb_Pd0___sepGet__PFPCc_Pd0___sepSet__PF_Pd0PCc___sepGetTuple__PFPCc_Pd0___sepSetTuple__PF_Pd0PCc___fail__PFi_Pd0___flush__PFi_Pd0___open__PF_Pd0PCcPCc___close__PF_Pd0___write__PFPd0_Pd0PCcUl___fmt__PFi_Pd0PCc__FPd0_Pd0__1(((_Bool (*)(void *__anonymous_object1742))__sepPrt__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1743))__sepReset__F_P9sofstream__1), ((void (*)(void *__anonymous_object1744, _Bool __anonymous_object1745))__sepReset__F_P9sofstreamb__1), ((const char *(*)(void *__anonymous_object1746))__sepGetCur__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1747, const char *__anonymous_object1748))__sepSetCur__F_P9sofstreamPCc__1), ((_Bool (*)(void *__anonymous_object1749))__getNL__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1750, _Bool __anonymous_object1751))__setNL__F_P9sofstreamb__1), ((void (*)(void *__anonymous_object1752))__sepOn__F_P9sofstream__1), ((void (*)(void *__anonymous_object1753))__sepOff__F_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1754))__sepDisable__Fb_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1755))__sepEnable__Fb_P9sofstream__1), ((const char *(*)(void *__anonymous_object1756))__sepGet__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1757, const char *__anonymous_object1758))__sepSet__F_P9sofstreamPCc__1), ((const char *(*)(void *__anonymous_object1759))__sepGetTuple__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1760, const char *__anonymous_object1761))__sepSetTuple__F_P9sofstreamPCc__1), ((signed int (*)(void *__anonymous_object1762))__fail__Fi_P9sofstream__1), ((signed int (*)(void *__anonymous_object1763))__flush__Fi_P9sofstream__1), ((void (*)(void *__os__P7tostype_1, const char *__name__PCc_1, const char *__mode__PCc_1))__open__F_P9sofstreamPCcPCc__1), ((void (*)(void *__os__P7tostype_1))__close__F_P9sofstream__1), ((void *(*)(void *__anonymous_object1764, const char *__anonymous_object1765, unsigned long int __anonymous_object1766))__write__FP9sofstream_P9sofstreamPCcUl__1), ((signed int (*)(void *__anonymous_object1767, const char *__fmt__PCc_1, ...))__fmt__Fi_P9sofstreamPCc__1), ((void *)_p0));
     517    }
     518    ((void)(((void)(_tmp_cp_ret14=___operator_bitor__A0_1_0_0___sepPrt__PFb_Pd0___sepReset__PF_Pd0___sepReset__PF_Pd0b___sepGetCur__PFPCc_Pd0___sepSetCur__PF_Pd0PCc___getNL__PFb_Pd0___setNL__PF_Pd0b___sepOn__PF_Pd0___sepOff__PF_Pd0___sepDisable__PFb_Pd0___sepEnable__PFb_Pd0___sepGet__PFPCc_Pd0___sepSet__PF_Pd0PCc___sepGetTuple__PFPCc_Pd0___sepSetTuple__PF_Pd0PCc___fail__PFi_Pd0___flush__PFi_Pd0___open__PF_Pd0PCcPCc___close__PF_Pd0___write__PFPd0_Pd0PCcUl___fmt__PFi_Pd0PCc__FPd0_Pd0PFPd0_Pd0___1(((_Bool (*)(void *__anonymous_object1768))__sepPrt__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1769))__sepReset__F_P9sofstream__1), ((void (*)(void *__anonymous_object1770, _Bool __anonymous_object1771))__sepReset__F_P9sofstreamb__1), ((const char *(*)(void *__anonymous_object1772))__sepGetCur__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1773, const char *__anonymous_object1774))__sepSetCur__F_P9sofstreamPCc__1), ((_Bool (*)(void *__anonymous_object1775))__getNL__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1776, _Bool __anonymous_object1777))__setNL__F_P9sofstreamb__1), ((void (*)(void *__anonymous_object1778))__sepOn__F_P9sofstream__1), ((void (*)(void *__anonymous_object1779))__sepOff__F_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1780))__sepDisable__Fb_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1781))__sepEnable__Fb_P9sofstream__1), ((const char *(*)(void *__anonymous_object1782))__sepGet__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1783, const char *__anonymous_object1784))__sepSet__F_P9sofstreamPCc__1), ((const char *(*)(void *__anonymous_object1785))__sepGetTuple__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1786, const char *__anonymous_object1787))__sepSetTuple__F_P9sofstreamPCc__1), ((signed int (*)(void *__anonymous_object1788))__fail__Fi_P9sofstream__1), ((signed int (*)(void *__anonymous_object1789))__flush__Fi_P9sofstream__1), ((void (*)(void *__os__P7tostype_1, const char *__name__PCc_1, const char *__mode__PCc_1))__open__F_P9sofstreamPCcPCc__1), ((void (*)(void *__os__P7tostype_1))__close__F_P9sofstream__1), ((void *(*)(void *__anonymous_object1790, const char *__anonymous_object1791, unsigned long int __anonymous_object1792))__write__FP9sofstream_P9sofstreamPCcUl__1), ((signed int (*)(void *__anonymous_object1793, const char *__fmt__PCc_1, ...))__fmt__Fi_P9sofstreamPCc__1), ((void *)(((void)(_tmp_cp_ret13=___operator_bitor__A0_1_0_0___sepPrt__PFb_Pd0___sepReset__PF_Pd0___sepReset__PF_Pd0b___sepGetCur__PFPCc_Pd0___sepSetCur__PF_Pd0PCc___getNL__PFb_Pd0___setNL__PF_Pd0b___sepOn__PF_Pd0___sepOff__PF_Pd0___sepDisable__PFb_Pd0___sepEnable__PFb_Pd0___sepGet__PFPCc_Pd0___sepSet__PF_Pd0PCc___sepGetTuple__PFPCc_Pd0___sepSetTuple__PF_Pd0PCc___fail__PFi_Pd0___flush__PFi_Pd0___open__PF_Pd0PCcPCc___close__PF_Pd0___write__PFPd0_Pd0PCcUl___fmt__PFi_Pd0PCc__FPd0_Pd0Us__1(((_Bool (*)(void *__anonymous_object1794))__sepPrt__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1795))__sepReset__F_P9sofstream__1), ((void (*)(void *__anonymous_object1796, _Bool __anonymous_object1797))__sepReset__F_P9sofstreamb__1), ((const char *(*)(void *__anonymous_object1798))__sepGetCur__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1799, const char *__anonymous_object1800))__sepSetCur__F_P9sofstreamPCc__1), ((_Bool (*)(void *__anonymous_object1801))__getNL__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1802, _Bool __anonymous_object1803))__setNL__F_P9sofstreamb__1), ((void (*)(void 
*__anonymous_object1804))__sepOn__F_P9sofstream__1), ((void (*)(void *__anonymous_object1805))__sepOff__F_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1806))__sepDisable__Fb_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1807))__sepEnable__Fb_P9sofstream__1), ((const char *(*)(void *__anonymous_object1808))__sepGet__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1809, const char *__anonymous_object1810))__sepSet__F_P9sofstreamPCc__1), ((const char *(*)(void *__anonymous_object1811))__sepGetTuple__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1812, const char *__anonymous_object1813))__sepSetTuple__F_P9sofstreamPCc__1), ((signed int (*)(void *__anonymous_object1814))__fail__Fi_P9sofstream__1), ((signed int (*)(void *__anonymous_object1815))__flush__Fi_P9sofstream__1), ((void (*)(void *__os__P7tostype_1, const char *__name__PCc_1, const char *__mode__PCc_1))__open__F_P9sofstreamPCcPCc__1), ((void (*)(void *__os__P7tostype_1))__close__F_P9sofstream__1), ((void *(*)(void *__anonymous_object1816, const char *__anonymous_object1817, unsigned long int __anonymous_object1818))__write__FP9sofstream_P9sofstreamPCcUl__1), ((signed int (*)(void *__anonymous_object1819, const char *__fmt__PCc_1, ...))__fmt__Fi_P9sofstreamPCc__1), ((void *)(((void)(_tmp_cp_ret12=___operator_bitor__A0_1_0_0___sepPrt__PFb_Pd0___sepReset__PF_Pd0___sepReset__PF_Pd0b___sepGetCur__PFPCc_Pd0___sepSetCur__PF_Pd0PCc___getNL__PFb_Pd0___setNL__PF_Pd0b___sepOn__PF_Pd0___sepOff__PF_Pd0___sepDisable__PFb_Pd0___sepEnable__PFb_Pd0___sepGet__PFPCc_Pd0___sepSet__PF_Pd0PCc___sepGetTuple__PFPCc_Pd0___sepSetTuple__PF_Pd0PCc___fail__PFi_Pd0___flush__PFi_Pd0___open__PF_Pd0PCcPCc___close__PF_Pd0___write__PFPd0_Pd0PCcUl___fmt__PFi_Pd0PCc__FPd0_Pd0PCc__1(((_Bool (*)(void *__anonymous_object1820))__sepPrt__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1821))__sepReset__F_P9sofstream__1), ((void (*)(void *__anonymous_object1822, _Bool __anonymous_object1823))__sepReset__F_P9sofstreamb__1), ((const char *(*)(void *__anonymous_object1824))__sepGetCur__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1825, const char *__anonymous_object1826))__sepSetCur__F_P9sofstreamPCc__1), ((_Bool (*)(void *__anonymous_object1827))__getNL__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1828, _Bool __anonymous_object1829))__setNL__F_P9sofstreamb__1), ((void (*)(void *__anonymous_object1830))__sepOn__F_P9sofstream__1), ((void (*)(void *__anonymous_object1831))__sepOff__F_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1832))__sepDisable__Fb_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1833))__sepEnable__Fb_P9sofstream__1), ((const char *(*)(void *__anonymous_object1834))__sepGet__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1835, const char *__anonymous_object1836))__sepSet__F_P9sofstreamPCc__1), ((const char *(*)(void *__anonymous_object1837))__sepGetTuple__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1838, const char *__anonymous_object1839))__sepSetTuple__F_P9sofstreamPCc__1), ((signed int (*)(void *__anonymous_object1840))__fail__Fi_P9sofstream__1), ((signed int (*)(void *__anonymous_object1841))__flush__Fi_P9sofstream__1), ((void (*)(void *__os__P7tostype_1, const char *__name__PCc_1, const char *__mode__PCc_1))__open__F_P9sofstreamPCcPCc__1), ((void (*)(void *__os__P7tostype_1))__close__F_P9sofstream__1), ((void *(*)(void *__anonymous_object1842, const char *__anonymous_object1843, unsigned long int 
__anonymous_object1844))__write__FP9sofstream_P9sofstreamPCcUl__1), ((signed int (*)(void *__anonymous_object1845, const char *__fmt__PCc_1, ...))__fmt__Fi_P9sofstreamPCc__1), ((void *)__sout__P9sofstream_1), "unsigned short int"))) , _tmp_cp_ret12)), __v__Us_1))) , _tmp_cp_ret13)), ((void *(*)(void *__anonymous_object1846))(&_thunk4))))) , _tmp_cp_ret14));
    519519    ((void)(_tmp_cp_ret12) /* ^?{} */);
    520520    ((void)(_tmp_cp_ret13) /* ^?{} */);
     
    526526    struct ofstream *_tmp_cp_ret17;
    527527    __attribute__ ((unused)) struct ofstream *_thunk5(struct ofstream *_p0){
    528         return __endl__A0_1_0_0___sepPrt__PFb_Pd0___sepReset__PF_Pd0___sepReset__PF_Pd0b___sepGetCur__PFPCc_Pd0___sepSetCur__PF_Pd0PCc___getNL__PFb_Pd0___setNL__PF_Pd0b___sepOn__PF_Pd0___sepOff__PF_Pd0___sepDisable__PFb_Pd0___sepEnable__PFb_Pd0___sepGet__PFPCc_Pd0___sepSet__PF_Pd0PCc___sepGetTuple__PFPCc_Pd0___sepSetTuple__PF_Pd0PCc___fail__PFi_Pd0___flush__PFi_Pd0___open__PF_Pd0PCcPCc___close__PF_Pd0___write__PFPd0_Pd0PCcUl___fmt__PFi_Pd0PCc__FPd0_Pd0__1(((_Bool (*)(void *__anonymous_object1847))__sepPrt__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1848))__sepReset__F_P9sofstream__1), ((void (*)(void *__anonymous_object1849, _Bool __anonymous_object1850))__sepReset__F_P9sofstreamb__1), ((const char *(*)(void *__anonymous_object1851))__sepGetCur__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1852, const char *__anonymous_object1853))__sepSetCur__F_P9sofstreamPCc__1), ((_Bool (*)(void *__anonymous_object1854))__getNL__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1855, _Bool __anonymous_object1856))__setNL__F_P9sofstreamb__1), ((void (*)(void *__anonymous_object1857))__sepOn__F_P9sofstream__1), ((void (*)(void *__anonymous_object1858))__sepOff__F_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1859))__sepDisable__Fb_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1860))__sepEnable__Fb_P9sofstream__1), ((const char *(*)(void *__anonymous_object1861))__sepGet__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1862, const char *__anonymous_object1863))__sepSet__F_P9sofstreamPCc__1), ((const char *(*)(void *__anonymous_object1864))__sepGetTuple__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1865, const char *__anonymous_object1866))__sepSetTuple__F_P9sofstreamPCc__1), ((signed int (*)(void *__anonymous_object1867))__fail__Fi_P9sofstream__1), ((signed int (*)(void *__anonymous_object1868))__flush__Fi_P9sofstream__1), ((void (*)(void *__os__P7tostype_1, const char *__name__PCc_1, const char *__mode__PCc_1))__open__F_P9sofstreamPCcPCc__1), ((void (*)(void *__os__P7tostype_1))__close__F_P9sofstream__1), ((void *(*)(void *__anonymous_object1869, const char *__anonymous_object1870, unsigned long int __anonymous_object1871))__write__FP9sofstream_P9sofstreamPCcUl__1), ((signed int (*)(void *__anonymous_object1872, const char *__fmt__PCc_1, ...))__fmt__Fi_P9sofstreamPCc__1), _p0);
    529     }
    530     ((void)(((void)(_tmp_cp_ret17=___operator_bitor__A0_1_0_0___sepPrt__PFb_Pd0___sepReset__PF_Pd0___sepReset__PF_Pd0b___sepGetCur__PFPCc_Pd0___sepSetCur__PF_Pd0PCc___getNL__PFb_Pd0___setNL__PF_Pd0b___sepOn__PF_Pd0___sepOff__PF_Pd0___sepDisable__PFb_Pd0___sepEnable__PFb_Pd0___sepGet__PFPCc_Pd0___sepSet__PF_Pd0PCc___sepGetTuple__PFPCc_Pd0___sepSetTuple__PF_Pd0PCc___fail__PFi_Pd0___flush__PFi_Pd0___open__PF_Pd0PCcPCc___close__PF_Pd0___write__PFPd0_Pd0PCcUl___fmt__PFi_Pd0PCc__FPd0_Pd0PFPd0_Pd0___1(((_Bool (*)(void *__anonymous_object1873))__sepPrt__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1874))__sepReset__F_P9sofstream__1), ((void (*)(void *__anonymous_object1875, _Bool __anonymous_object1876))__sepReset__F_P9sofstreamb__1), ((const char *(*)(void *__anonymous_object1877))__sepGetCur__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1878, const char *__anonymous_object1879))__sepSetCur__F_P9sofstreamPCc__1), ((_Bool (*)(void *__anonymous_object1880))__getNL__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1881, _Bool __anonymous_object1882))__setNL__F_P9sofstreamb__1), ((void (*)(void *__anonymous_object1883))__sepOn__F_P9sofstream__1), ((void (*)(void *__anonymous_object1884))__sepOff__F_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1885))__sepDisable__Fb_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1886))__sepEnable__Fb_P9sofstream__1), ((const char *(*)(void *__anonymous_object1887))__sepGet__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1888, const char *__anonymous_object1889))__sepSet__F_P9sofstreamPCc__1), ((const char *(*)(void *__anonymous_object1890))__sepGetTuple__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1891, const char *__anonymous_object1892))__sepSetTuple__F_P9sofstreamPCc__1), ((signed int (*)(void *__anonymous_object1893))__fail__Fi_P9sofstream__1), ((signed int (*)(void *__anonymous_object1894))__flush__Fi_P9sofstream__1), ((void (*)(void *__os__P7tostype_1, const char *__name__PCc_1, const char *__mode__PCc_1))__open__F_P9sofstreamPCcPCc__1), ((void (*)(void *__os__P7tostype_1))__close__F_P9sofstream__1), ((void *(*)(void *__anonymous_object1895, const char *__anonymous_object1896, unsigned long int __anonymous_object1897))__write__FP9sofstream_P9sofstreamPCcUl__1), ((signed int (*)(void *__anonymous_object1898, const char *__fmt__PCc_1, ...))__fmt__Fi_P9sofstreamPCc__1), (((void)(_tmp_cp_ret16=___operator_bitor__A0_1_0_0___sepPrt__PFb_Pd0___sepReset__PF_Pd0___sepReset__PF_Pd0b___sepGetCur__PFPCc_Pd0___sepSetCur__PF_Pd0PCc___getNL__PFb_Pd0___setNL__PF_Pd0b___sepOn__PF_Pd0___sepOff__PF_Pd0___sepDisable__PFb_Pd0___sepEnable__PFb_Pd0___sepGet__PFPCc_Pd0___sepSet__PF_Pd0PCc___sepGetTuple__PFPCc_Pd0___sepSetTuple__PF_Pd0PCc___fail__PFi_Pd0___flush__PFi_Pd0___open__PF_Pd0PCcPCc___close__PF_Pd0___write__PFPd0_Pd0PCcUl___fmt__PFi_Pd0PCc__FPd0_Pd0Ui__1(((_Bool (*)(void *__anonymous_object1899))__sepPrt__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1900))__sepReset__F_P9sofstream__1), ((void (*)(void *__anonymous_object1901, _Bool __anonymous_object1902))__sepReset__F_P9sofstreamb__1), ((const char *(*)(void *__anonymous_object1903))__sepGetCur__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1904, const char *__anonymous_object1905))__sepSetCur__F_P9sofstreamPCc__1), ((_Bool (*)(void *__anonymous_object1906))__getNL__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1907, _Bool __anonymous_object1908))__setNL__F_P9sofstreamb__1), ((void (*)(void 
*__anonymous_object1909))__sepOn__F_P9sofstream__1), ((void (*)(void *__anonymous_object1910))__sepOff__F_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1911))__sepDisable__Fb_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1912))__sepEnable__Fb_P9sofstream__1), ((const char *(*)(void *__anonymous_object1913))__sepGet__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1914, const char *__anonymous_object1915))__sepSet__F_P9sofstreamPCc__1), ((const char *(*)(void *__anonymous_object1916))__sepGetTuple__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1917, const char *__anonymous_object1918))__sepSetTuple__F_P9sofstreamPCc__1), ((signed int (*)(void *__anonymous_object1919))__fail__Fi_P9sofstream__1), ((signed int (*)(void *__anonymous_object1920))__flush__Fi_P9sofstream__1), ((void (*)(void *__os__P7tostype_1, const char *__name__PCc_1, const char *__mode__PCc_1))__open__F_P9sofstreamPCcPCc__1), ((void (*)(void *__os__P7tostype_1))__close__F_P9sofstream__1), ((void *(*)(void *__anonymous_object1921, const char *__anonymous_object1922, unsigned long int __anonymous_object1923))__write__FP9sofstream_P9sofstreamPCcUl__1), ((signed int (*)(void *__anonymous_object1924, const char *__fmt__PCc_1, ...))__fmt__Fi_P9sofstreamPCc__1), (((void)(_tmp_cp_ret15=___operator_bitor__A0_1_0_0___sepPrt__PFb_Pd0___sepReset__PF_Pd0___sepReset__PF_Pd0b___sepGetCur__PFPCc_Pd0___sepSetCur__PF_Pd0PCc___getNL__PFb_Pd0___setNL__PF_Pd0b___sepOn__PF_Pd0___sepOff__PF_Pd0___sepDisable__PFb_Pd0___sepEnable__PFb_Pd0___sepGet__PFPCc_Pd0___sepSet__PF_Pd0PCc___sepGetTuple__PFPCc_Pd0___sepSetTuple__PF_Pd0PCc___fail__PFi_Pd0___flush__PFi_Pd0___open__PF_Pd0PCcPCc___close__PF_Pd0___write__PFPd0_Pd0PCcUl___fmt__PFi_Pd0PCc__FPd0_Pd0PCc__1(((_Bool (*)(void *__anonymous_object1925))__sepPrt__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1926))__sepReset__F_P9sofstream__1), ((void (*)(void *__anonymous_object1927, _Bool __anonymous_object1928))__sepReset__F_P9sofstreamb__1), ((const char *(*)(void *__anonymous_object1929))__sepGetCur__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1930, const char *__anonymous_object1931))__sepSetCur__F_P9sofstreamPCc__1), ((_Bool (*)(void *__anonymous_object1932))__getNL__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1933, _Bool __anonymous_object1934))__setNL__F_P9sofstreamb__1), ((void (*)(void *__anonymous_object1935))__sepOn__F_P9sofstream__1), ((void (*)(void *__anonymous_object1936))__sepOff__F_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1937))__sepDisable__Fb_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1938))__sepEnable__Fb_P9sofstream__1), ((const char *(*)(void *__anonymous_object1939))__sepGet__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1940, const char *__anonymous_object1941))__sepSet__F_P9sofstreamPCc__1), ((const char *(*)(void *__anonymous_object1942))__sepGetTuple__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1943, const char *__anonymous_object1944))__sepSetTuple__F_P9sofstreamPCc__1), ((signed int (*)(void *__anonymous_object1945))__fail__Fi_P9sofstream__1), ((signed int (*)(void *__anonymous_object1946))__flush__Fi_P9sofstream__1), ((void (*)(void *__os__P7tostype_1, const char *__name__PCc_1, const char *__mode__PCc_1))__open__F_P9sofstreamPCcPCc__1), ((void (*)(void *__os__P7tostype_1))__close__F_P9sofstream__1), ((void *(*)(void *__anonymous_object1947, const char *__anonymous_object1948, unsigned long int __anonymous_object1949))__write__FP9sofstream_P9sofstreamPCcUl__1), 
((signed int (*)(void *__anonymous_object1950, const char *__fmt__PCc_1, ...))__fmt__Fi_P9sofstreamPCc__1), __sout__P9sofstream_1, "size_t"))) , _tmp_cp_ret15), __v__Ui_1))) , _tmp_cp_ret16), ((void *(*)(void *__anonymous_object1951))(&_thunk5))))) , _tmp_cp_ret17));
     528        return __endl__A0_1_0_0___sepPrt__PFb_Pd0___sepReset__PF_Pd0___sepReset__PF_Pd0b___sepGetCur__PFPCc_Pd0___sepSetCur__PF_Pd0PCc___getNL__PFb_Pd0___setNL__PF_Pd0b___sepOn__PF_Pd0___sepOff__PF_Pd0___sepDisable__PFb_Pd0___sepEnable__PFb_Pd0___sepGet__PFPCc_Pd0___sepSet__PF_Pd0PCc___sepGetTuple__PFPCc_Pd0___sepSetTuple__PF_Pd0PCc___fail__PFi_Pd0___flush__PFi_Pd0___open__PF_Pd0PCcPCc___close__PF_Pd0___write__PFPd0_Pd0PCcUl___fmt__PFi_Pd0PCc__FPd0_Pd0__1(((_Bool (*)(void *__anonymous_object1847))__sepPrt__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1848))__sepReset__F_P9sofstream__1), ((void (*)(void *__anonymous_object1849, _Bool __anonymous_object1850))__sepReset__F_P9sofstreamb__1), ((const char *(*)(void *__anonymous_object1851))__sepGetCur__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1852, const char *__anonymous_object1853))__sepSetCur__F_P9sofstreamPCc__1), ((_Bool (*)(void *__anonymous_object1854))__getNL__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1855, _Bool __anonymous_object1856))__setNL__F_P9sofstreamb__1), ((void (*)(void *__anonymous_object1857))__sepOn__F_P9sofstream__1), ((void (*)(void *__anonymous_object1858))__sepOff__F_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1859))__sepDisable__Fb_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1860))__sepEnable__Fb_P9sofstream__1), ((const char *(*)(void *__anonymous_object1861))__sepGet__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1862, const char *__anonymous_object1863))__sepSet__F_P9sofstreamPCc__1), ((const char *(*)(void *__anonymous_object1864))__sepGetTuple__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1865, const char *__anonymous_object1866))__sepSetTuple__F_P9sofstreamPCc__1), ((signed int (*)(void *__anonymous_object1867))__fail__Fi_P9sofstream__1), ((signed int (*)(void *__anonymous_object1868))__flush__Fi_P9sofstream__1), ((void (*)(void *__os__P7tostype_1, const char *__name__PCc_1, const char *__mode__PCc_1))__open__F_P9sofstreamPCcPCc__1), ((void (*)(void *__os__P7tostype_1))__close__F_P9sofstream__1), ((void *(*)(void *__anonymous_object1869, const char *__anonymous_object1870, unsigned long int __anonymous_object1871))__write__FP9sofstream_P9sofstreamPCcUl__1), ((signed int (*)(void *__anonymous_object1872, const char *__fmt__PCc_1, ...))__fmt__Fi_P9sofstreamPCc__1), ((void *)_p0));
     529    }
     530    ((void)(((void)(_tmp_cp_ret17=___operator_bitor__A0_1_0_0___sepPrt__PFb_Pd0___sepReset__PF_Pd0___sepReset__PF_Pd0b___sepGetCur__PFPCc_Pd0___sepSetCur__PF_Pd0PCc___getNL__PFb_Pd0___setNL__PF_Pd0b___sepOn__PF_Pd0___sepOff__PF_Pd0___sepDisable__PFb_Pd0___sepEnable__PFb_Pd0___sepGet__PFPCc_Pd0___sepSet__PF_Pd0PCc___sepGetTuple__PFPCc_Pd0___sepSetTuple__PF_Pd0PCc___fail__PFi_Pd0___flush__PFi_Pd0___open__PF_Pd0PCcPCc___close__PF_Pd0___write__PFPd0_Pd0PCcUl___fmt__PFi_Pd0PCc__FPd0_Pd0PFPd0_Pd0___1(((_Bool (*)(void *__anonymous_object1873))__sepPrt__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1874))__sepReset__F_P9sofstream__1), ((void (*)(void *__anonymous_object1875, _Bool __anonymous_object1876))__sepReset__F_P9sofstreamb__1), ((const char *(*)(void *__anonymous_object1877))__sepGetCur__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1878, const char *__anonymous_object1879))__sepSetCur__F_P9sofstreamPCc__1), ((_Bool (*)(void *__anonymous_object1880))__getNL__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1881, _Bool __anonymous_object1882))__setNL__F_P9sofstreamb__1), ((void (*)(void *__anonymous_object1883))__sepOn__F_P9sofstream__1), ((void (*)(void *__anonymous_object1884))__sepOff__F_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1885))__sepDisable__Fb_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1886))__sepEnable__Fb_P9sofstream__1), ((const char *(*)(void *__anonymous_object1887))__sepGet__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1888, const char *__anonymous_object1889))__sepSet__F_P9sofstreamPCc__1), ((const char *(*)(void *__anonymous_object1890))__sepGetTuple__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1891, const char *__anonymous_object1892))__sepSetTuple__F_P9sofstreamPCc__1), ((signed int (*)(void *__anonymous_object1893))__fail__Fi_P9sofstream__1), ((signed int (*)(void *__anonymous_object1894))__flush__Fi_P9sofstream__1), ((void (*)(void *__os__P7tostype_1, const char *__name__PCc_1, const char *__mode__PCc_1))__open__F_P9sofstreamPCcPCc__1), ((void (*)(void *__os__P7tostype_1))__close__F_P9sofstream__1), ((void *(*)(void *__anonymous_object1895, const char *__anonymous_object1896, unsigned long int __anonymous_object1897))__write__FP9sofstream_P9sofstreamPCcUl__1), ((signed int (*)(void *__anonymous_object1898, const char *__fmt__PCc_1, ...))__fmt__Fi_P9sofstreamPCc__1), ((void *)(((void)(_tmp_cp_ret16=___operator_bitor__A0_1_0_0___sepPrt__PFb_Pd0___sepReset__PF_Pd0___sepReset__PF_Pd0b___sepGetCur__PFPCc_Pd0___sepSetCur__PF_Pd0PCc___getNL__PFb_Pd0___setNL__PF_Pd0b___sepOn__PF_Pd0___sepOff__PF_Pd0___sepDisable__PFb_Pd0___sepEnable__PFb_Pd0___sepGet__PFPCc_Pd0___sepSet__PF_Pd0PCc___sepGetTuple__PFPCc_Pd0___sepSetTuple__PF_Pd0PCc___fail__PFi_Pd0___flush__PFi_Pd0___open__PF_Pd0PCcPCc___close__PF_Pd0___write__PFPd0_Pd0PCcUl___fmt__PFi_Pd0PCc__FPd0_Pd0Ui__1(((_Bool (*)(void *__anonymous_object1899))__sepPrt__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1900))__sepReset__F_P9sofstream__1), ((void (*)(void *__anonymous_object1901, _Bool __anonymous_object1902))__sepReset__F_P9sofstreamb__1), ((const char *(*)(void *__anonymous_object1903))__sepGetCur__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1904, const char *__anonymous_object1905))__sepSetCur__F_P9sofstreamPCc__1), ((_Bool (*)(void *__anonymous_object1906))__getNL__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1907, _Bool __anonymous_object1908))__setNL__F_P9sofstreamb__1), ((void (*)(void 
*__anonymous_object1909))__sepOn__F_P9sofstream__1), ((void (*)(void *__anonymous_object1910))__sepOff__F_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1911))__sepDisable__Fb_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1912))__sepEnable__Fb_P9sofstream__1), ((const char *(*)(void *__anonymous_object1913))__sepGet__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1914, const char *__anonymous_object1915))__sepSet__F_P9sofstreamPCc__1), ((const char *(*)(void *__anonymous_object1916))__sepGetTuple__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1917, const char *__anonymous_object1918))__sepSetTuple__F_P9sofstreamPCc__1), ((signed int (*)(void *__anonymous_object1919))__fail__Fi_P9sofstream__1), ((signed int (*)(void *__anonymous_object1920))__flush__Fi_P9sofstream__1), ((void (*)(void *__os__P7tostype_1, const char *__name__PCc_1, const char *__mode__PCc_1))__open__F_P9sofstreamPCcPCc__1), ((void (*)(void *__os__P7tostype_1))__close__F_P9sofstream__1), ((void *(*)(void *__anonymous_object1921, const char *__anonymous_object1922, unsigned long int __anonymous_object1923))__write__FP9sofstream_P9sofstreamPCcUl__1), ((signed int (*)(void *__anonymous_object1924, const char *__fmt__PCc_1, ...))__fmt__Fi_P9sofstreamPCc__1), ((void *)(((void)(_tmp_cp_ret15=___operator_bitor__A0_1_0_0___sepPrt__PFb_Pd0___sepReset__PF_Pd0___sepReset__PF_Pd0b___sepGetCur__PFPCc_Pd0___sepSetCur__PF_Pd0PCc___getNL__PFb_Pd0___setNL__PF_Pd0b___sepOn__PF_Pd0___sepOff__PF_Pd0___sepDisable__PFb_Pd0___sepEnable__PFb_Pd0___sepGet__PFPCc_Pd0___sepSet__PF_Pd0PCc___sepGetTuple__PFPCc_Pd0___sepSetTuple__PF_Pd0PCc___fail__PFi_Pd0___flush__PFi_Pd0___open__PF_Pd0PCcPCc___close__PF_Pd0___write__PFPd0_Pd0PCcUl___fmt__PFi_Pd0PCc__FPd0_Pd0PCc__1(((_Bool (*)(void *__anonymous_object1925))__sepPrt__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1926))__sepReset__F_P9sofstream__1), ((void (*)(void *__anonymous_object1927, _Bool __anonymous_object1928))__sepReset__F_P9sofstreamb__1), ((const char *(*)(void *__anonymous_object1929))__sepGetCur__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1930, const char *__anonymous_object1931))__sepSetCur__F_P9sofstreamPCc__1), ((_Bool (*)(void *__anonymous_object1932))__getNL__Fb_P9sofstream__1), ((void (*)(void *__anonymous_object1933, _Bool __anonymous_object1934))__setNL__F_P9sofstreamb__1), ((void (*)(void *__anonymous_object1935))__sepOn__F_P9sofstream__1), ((void (*)(void *__anonymous_object1936))__sepOff__F_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1937))__sepDisable__Fb_P9sofstream__1), ((_Bool (*)(void *__anonymous_object1938))__sepEnable__Fb_P9sofstream__1), ((const char *(*)(void *__anonymous_object1939))__sepGet__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1940, const char *__anonymous_object1941))__sepSet__F_P9sofstreamPCc__1), ((const char *(*)(void *__anonymous_object1942))__sepGetTuple__FPCc_P9sofstream__1), ((void (*)(void *__anonymous_object1943, const char *__anonymous_object1944))__sepSetTuple__F_P9sofstreamPCc__1), ((signed int (*)(void *__anonymous_object1945))__fail__Fi_P9sofstream__1), ((signed int (*)(void *__anonymous_object1946))__flush__Fi_P9sofstream__1), ((void (*)(void *__os__P7tostype_1, const char *__name__PCc_1, const char *__mode__PCc_1))__open__F_P9sofstreamPCcPCc__1), ((void (*)(void *__os__P7tostype_1))__close__F_P9sofstream__1), ((void *(*)(void *__anonymous_object1947, const char *__anonymous_object1948, unsigned long int 
__anonymous_object1949))__write__FP9sofstream_P9sofstreamPCcUl__1), ((signed int (*)(void *__anonymous_object1950, const char *__fmt__PCc_1, ...))__fmt__Fi_P9sofstreamPCc__1), ((void *)__sout__P9sofstream_1), "size_t"))) , _tmp_cp_ret15)), __v__Ui_1))) , _tmp_cp_ret16)), ((void *(*)(void *__anonymous_object1951))(&_thunk5))))) , _tmp_cp_ret17));
    531531    ((void)(_tmp_cp_ret15) /* ^?{} */);
    532532    ((void)(_tmp_cp_ret16) /* ^?{} */);
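
The hunk above is generated Cforall-to-C translator output (visible in the mangled names and the generated _thunk* adapter functions); the change in this revision is that arguments bound to polymorphic parameters -- the stream object, the intermediate _tmp_cp_ret* results, and the thunk's _p0 -- are now passed through an explicit (void *) cast. As a rough, hypothetical illustration of that general pattern only (every name below is invented for the sketch; none of it is taken from the generated code), a generic routine that works on untyped pointers plus a typed thunk that adapts to it could look like this in plain C:

    #include <stdio.h>

    /* Polymorphic "endl"-style manipulator: polymorphic parameters are
       lowered to void *, so the generic entry point sees only untyped
       pointers plus the operations it needs, passed in explicitly. */
    typedef void *(*write_fn)( void *os, const char *s, unsigned long n );

    void *generic_endl( write_fn write, void *os ) {
        return write( os, "\n", 1 );                /* works for any stream type */
    }

    /* One concrete stream type and its write operation. */
    typedef struct { FILE *file; } ofstream;

    void *ofstream_write( void *os, const char *s, unsigned long n ) {
        fwrite( s, 1, n, ((ofstream *)os)->file );
        return os;
    }

    /* Generated-style thunk: adapts the typed interface the caller expects
       to the void *-based generic routine, mirroring the _thunk* functions
       and the (void *) casts in the hunk above. */
    ofstream *endl_thunk( ofstream *os ) {
        return (ofstream *)generic_endl( ofstream_write, (void *)os );
    }

    int main( void ) {
        ofstream out = { stdout };
        endl_thunk( &out );                          /* prints a newline */
        return 0;
    }

The generated code follows the same idea at a much larger scale: every operation a polymorphic routine needs is passed as a function pointer, and typed values cross the polymorphic boundary as void *.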
  • src/tests/boundedBuffer.c

    r136ccd7 r4ee36bf0  
    1 // 
     1//
    22// The contents of this file are covered under the licence agreement in the
    33// file "LICENCE" distributed with Cforall.
    4 // 
    5 // boundedBuffer.c -- 
    6 // 
     4//
     5// boundedBuffer.c --
     6//
    77// Author           : Peter A. Buhr
    88// Created On       : Mon Oct 30 12:45:13 2017
     
    1010// Last Modified On : Mon Oct 30 23:02:46 2017
    1111// Update Count     : 9
    12 // 
     12//
    1313
    1414#include <stdlib>
     
    3131
    3232void insert( Buffer & mutex buffer, int elem ) {
    33         if ( buffer.count == 20 ) wait( &buffer.empty );
     33        if ( buffer.count == 20 ) wait( buffer.empty );
    3434        buffer.elements[buffer.back] = elem;
    3535        buffer.back = ( buffer.back + 1 ) % 20;
    3636        buffer.count += 1;
    37         signal( &buffer.full );
     37        signal( buffer.full );
    3838}
    3939int remove( Buffer & mutex buffer ) {
    40         if ( buffer.count == 0 ) wait( &buffer.full );
     40        if ( buffer.count == 0 ) wait( buffer.full );
    4141        int elem = buffer.elements[buffer.front];
    4242        buffer.front = ( buffer.front + 1 ) % 20;
    4343        buffer.count -= 1;
    44         signal( &buffer.empty );
     44        signal( buffer.empty );
    4545        return elem;
    4646}
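
The boundedBuffer.c change above switches wait and signal from taking a pointer to a condition ( &buffer.empty ) to taking the condition by reference ( buffer.empty ). For comparison, a minimal sketch of the same producer/consumer discipline written directly against POSIX threads is given below; the Buffer layout, the condition names, and the fixed size 20 mirror the test, the pthread calls are standard, and everything else (the explicit lock, the while loops, the renamed remove_elem) is illustrative rather than part of the changeset:

    #include <pthread.h>

    #define SIZE 20

    typedef struct {
        pthread_mutex_t lock;
        pthread_cond_t  empty, full;    /* empty: a slot freed; full: an item available */
        int front, back, count;
        int elements[SIZE];
    } Buffer;

    /* static initialisation, e.g.:
       Buffer buf = { PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER,
                      PTHREAD_COND_INITIALIZER, 0, 0, 0, { 0 } };           */

    void insert( Buffer * buffer, int elem ) {
        pthread_mutex_lock( &buffer->lock );        /* implicit via mutex parameter in CFA */
        while ( buffer->count == SIZE )             /* wait for an empty slot */
            pthread_cond_wait( &buffer->empty, &buffer->lock );
        buffer->elements[buffer->back] = elem;
        buffer->back = ( buffer->back + 1 ) % SIZE;
        buffer->count += 1;
        pthread_cond_signal( &buffer->full );       /* an item is now available */
        pthread_mutex_unlock( &buffer->lock );
    }

    int remove_elem( Buffer * buffer ) {            /* renamed to avoid clashing with stdio remove */
        pthread_mutex_lock( &buffer->lock );
        while ( buffer->count == 0 )                /* wait for an item */
            pthread_cond_wait( &buffer->full, &buffer->lock );
        int elem = buffer->elements[buffer->front];
        buffer->front = ( buffer->front + 1 ) % SIZE;
        buffer->count -= 1;
        pthread_cond_signal( &buffer->empty );      /* a slot is now free */
        pthread_mutex_unlock( &buffer->lock );
        return elem;
    }

Unlike the CFA monitor version in the test, the lock must be acquired and released explicitly, and the wait condition is re-checked in a while loop because POSIX allows spurious wakeups; the CFA version relies on the mutex parameter and signalling semantics to provide those guarantees.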
  • src/tests/datingService.c

    r136ccd7 r4ee36bf0  
    1 //                               -*- Mode: C -*- 
    2 // 
     1//                               -*- Mode: C -*-
     2//
    33// The contents of this file are covered under the licence agreement in the
    44// file "LICENCE" distributed with Cforall.
    5 // 
    6 // datingService.c -- 
    7 // 
     5//
     6// datingService.c --
     7//
    88// Author           : Peter A. Buhr
    99// Created On       : Mon Oct 30 12:56:20 2017
     
    1111// Last Modified On : Mon Oct 30 23:02:11 2017
    1212// Update Count     : 15
    13 // 
     13//
    1414
    1515#include <stdlib>                                                                               // random
     
    1818#include <thread>
    1919#include <unistd.h>                                                                             // getpid
    20