Changeset 1bc9dcb


Timestamp:
Jun 16, 2017, 3:34:55 PM (7 years ago)
Author:
Thierry Delisle <tdelisle@…>
Branches:
ADT, aaron-thesis, arm-eh, ast-experimental, cleanup-dtors, deferred_resn, demangler, enum, forall-pointer-decay, jacob/cs343-translation, jenkins-sandbox, master, new-ast, new-ast-unique-expr, new-env, no_list, persistent-indexer, pthread-emulation, qualifiedEnum, resolv-new, with_gc
Children:
9d85038
Parents:
a724ac1 (diff), d33bc7c (diff)
Note: this is a merge changeset; the changes displayed below correspond to the merge itself.
Use the (diff) links above to see all the changes relative to each parent.
Message:

Merge branch 'master' of plg.uwaterloo.ca:software/cfa/cfa-cc

Files:
13 added
5 deleted
15 edited

  • .gitignore

    ra724ac1 r1bc9dcb  
    1313libcfa/Makefile
    1414src/Makefile
    15 version
     15/version
    1616
    1717# genereted by premake
  • configure

    ra724ac1 r1bc9dcb  
    62516251
    62526252
    6253 ac_config_files="$ac_config_files Makefile src/driver/Makefile src/Makefile src/benchmark/Makefile src/examples/Makefile src/tests/Makefile src/prelude/Makefile src/libcfa/Makefile"
     6253ac_config_files="$ac_config_files Makefile src/driver/Makefile src/Makefile src/benchmark/Makefile src/examples/Makefile src/tests/Makefile src/tests/preempt_longrun/Makefile src/prelude/Makefile src/libcfa/Makefile"
    62546254
    62556255
     
    70197019    "src/examples/Makefile") CONFIG_FILES="$CONFIG_FILES src/examples/Makefile" ;;
    70207020    "src/tests/Makefile") CONFIG_FILES="$CONFIG_FILES src/tests/Makefile" ;;
     7021    "src/tests/preempt_longrun/Makefile") CONFIG_FILES="$CONFIG_FILES src/tests/preempt_longrun/Makefile" ;;
    70217022    "src/prelude/Makefile") CONFIG_FILES="$CONFIG_FILES src/prelude/Makefile" ;;
    70227023    "src/libcfa/Makefile") CONFIG_FILES="$CONFIG_FILES src/libcfa/Makefile" ;;
  • configure.ac

    ra724ac1 r1bc9dcb  
    235235        src/examples/Makefile
    236236        src/tests/Makefile
     237        src/tests/preempt_longrun/Makefile
    237238        src/prelude/Makefile
    238239        src/libcfa/Makefile
  • doc/proposals/concurrency/Makefile

    ra724ac1 r1bc9dcb  
    1313annex/glossary \
    1414text/intro \
     15text/cforall \
    1516text/basics \
    1617text/concurrency \
  • doc/proposals/concurrency/build/bump_ver.sh

    ra724ac1 r1bc9dcb  
    11#!/bin/bash
    2 if [ ! -f build/version ]; then
    3     echo "0.0.0" > build/version
     2if [ ! -f version ]; then
     3    echo "0.0.0" > version
    44fi
    55
    6 sed -r 's/([0-9]+\.[0-9]+.)([0-9]+)/echo "\1\$((\2+1))" > version/ge' build/version > /dev/null
     6sed -r 's/([0-9]+\.[0-9]+.)([0-9]+)/echo "\1\$((\2+1))" > version/ge' version > /dev/null
  • doc/proposals/concurrency/text/basics.tex

    ra724ac1 r1bc9dcb  
    77
    88\section{Basics of concurrency}
    9 At its core, concurrency is based on having multiple call stacks and potentially multiple threads of execution for these stacks. Concurrency alone without parallelism only requires having multiple call stacks (or contexts) for a single thread of execution and switching between these call stacks on a regular basis. A minimal concurrency product can be achieved by creating coroutines which instead of context switching between each other, always ask an oracle where to context switch next. While coroutines do not technically require a stack, stackfull coroutines are the closest abstraction to a practical "naked"" call stack. When writing concurrency in terms of coroutines, the oracle effectively becomes a scheduler and the whole system now follows a cooperative threading model \cit. The oracle/scheduler can either be a stackless or stackfull entity and correspondingly require one or two context switches to run a different coroutine but in any case a subset of concurrency related challenges start to appear. For the complete set of concurrency challenges to be present, the only feature missing is preemption. Indeed, concurrency challenges appear with the lack of determinism. Guaranteeing mutual-exclusion or synchronisation are simply ways of limiting the lack of determinism in the system. A scheduler introduces order of execution uncertainty while preemption introduces incertainty about when context-switches occur. Now it is important to understand that uncertainty is not necessarily undesireable, uncertainty can often be used by systems to significantly increase performance and is often the basis of giving the user the illusion that hundred of tasks are running in parallel. Optimal performance in concurrent applications is often obtained by having as little determinism as correctness will allow\cit.
     9At its core, concurrency is based on having call-stacks and potentially multiple threads of execution for these stacks. Concurrency without parallelism only requires having multiple call stacks (or contexts) for a single thread of execution, and switching between these call stacks on a regular basis. A minimal concurrency product can be achieved by creating coroutines, which instead of context switching between each other, always ask an oracle where to context switch next. While coroutines do not technically require a stack, stackful coroutines are the closest abstraction to a practical "naked" call stack. When writing concurrency in terms of coroutines, the oracle effectively becomes a scheduler and the whole system now follows a cooperative threading-model \cit. The oracle/scheduler can either be a stackless or stackful entity and correspondingly require one or two context switches to run a different coroutine. In any case, a subset of concurrency-related challenges starts to appear. For the complete set of concurrency challenges to occur, the only feature missing is preemption. Indeed, concurrency challenges appear with non-determinism. Guaranteeing mutual-exclusion or synchronisation is simply a way of limiting the lack of determinism in a system. A scheduler introduces uncertainty about the order of execution, while preemption introduces uncertainty about where context-switches occur. Now it is important to understand that uncertainty is not necessarily undesirable; uncertainty can often be used by systems to significantly increase performance and is often the basis of giving a user the illusion that tasks are running in parallel. Optimal performance in concurrent applications is often obtained by having as much non-determinism as correctness allows\cit.
    1010
    1111\section{\protect\CFA 's Thread Building Blocks}
    12 % As a system-level language, \CFA should offer both performance and flexibilty as its primary goals, simplicity and user-friendliness being a secondary concern. Therefore, the core of parallelism in \CFA should prioritize power and efficiency. With this said, deconstructing popular paradigms in order to get simple building blocks yields \glspl{uthread} as the core parallelism block. \Glspl{pool} and other parallelism paradigms can then be built on top of the underlying threading model.
    13 One of the important features that is missing to C is threading. On modern architectures, the lack of threading is becoming less and less forgivable\cite{Sutter05, Sutter05b} and therefore any modern programming language should have the proper tools to allow users to write performant concurrent and/or parallel programs. As an extension of C, \CFA needs to express these concepts an a way that is as natural as possible to programmers used to imperative languages. And being a system level language means programmers will expect to be able to choose precisely which features they need and which cost they are willing to pay.
    14 
    15 \section{Coroutines A stepping stone}\label{coroutine}
    16 While the main focus of this proposal is concurrency and parallelism, as mentionned above it is important to adress coroutines which are actually a significant underlying aspect of the concurrency system. Indeed, while having nothing todo with parallelism and arguably little to do with concurrency, coroutines need to deal with context-switchs and and other context management operations. Therefore, this proposal includes coroutines both as an intermediate step for the implementation of threads and a first class feature of \CFA. Furthermore, many design challenges of threads are at least partially present in designing coroutines, which makes the design effort that much more relevant. The core API of coroutines revolve around two features independent call stacks and \code{suspend}/\code{resume}.
      12One of the important features missing in C is threading. On modern architectures, a lack of threading is becoming less and less forgivable\cite{Sutter05, Sutter05b}, and therefore modern programming languages must have the proper tools to allow users to write performant concurrent and/or parallel programs. As an extension of C, \CFA needs to express these concepts in a way that is as natural as possible to programmers used to imperative languages. Being a system-level language also means programmers expect to choose precisely which features they need and what cost they are willing to pay.
     13
     14\section{Coroutines: A stepping stone}\label{coroutine}
      15While the main focus of this proposal is concurrency and parallelism, as mentioned above it is important to address coroutines, which are actually a significant underlying aspect of a concurrency system. Indeed, while having nothing to do with parallelism and arguably little to do with concurrency, coroutines need to deal with context switches and other context-management operations. Therefore, this proposal includes coroutines both as an intermediate step for the implementation of threads, and as a first-class feature of \CFA. Furthermore, many design challenges of threads are at least partially present in designing coroutines, which makes the design effort that much more relevant. The core API of coroutines revolves around two features: independent call stacks and \code{suspend}/\code{resume}.
    1716
    1817Here is an example of a solution to the Fibonacci problem using \CFA coroutines:
     
    2625        }
    2726
      27        // main automatically called on first resume
    2828        void main(Fibonacci* this) {
    2929                int fn1, fn2;           // retained between resumes
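     For readers without the full listing, the Fibonacci example this hunk excerpts looks roughly like the condensed sketch below. The constructor, the exact \code{suspend}/\code{resume} signatures and the \code{next} driver are reconstructed from the fragments shown here and from the trait discussed later in this file, so they are assumptions rather than the thesis text.

     \begin{cfacode}
     coroutine Fibonacci {
     	int fn;                             // used for communication
     };

     void ?{}(Fibonacci * this) {            // constructor
     	this->fn = 0;
     }

     // main automatically called on first resume
     void main(Fibonacci * this) {
     	int fn1, fn2;                       // retained between resumes
     	this->fn = 0; fn1 = this->fn;       // 0th case
     	suspend();                          // return control to the last resume
     	this->fn = 1; fn2 = fn1; fn1 = this->fn;   // 1st case
     	suspend();
     	for ( ;; ) {                        // general case
     		this->fn = fn1 + fn2;
     		fn2 = fn1; fn1 = this->fn;
     		suspend();
     	}
     }

     int next(Fibonacci * this) {
     	resume(this);                       // transfer control to the coroutine main
     	return this->fn;                    // value computed before the last suspend
     }
     \end{cfacode}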
     
    5959
    6060\subsection{Construction}
    61 One important design challenge for coroutines and threads (shown in section \ref{threads}) is that the runtime system needs to run some code after the user-constructor runs. In the case of the coroutines this challenge is simpler since there is no loss of determinism brough by preemption or scheduling, however, the underlying challenge remains the same for coroutines and threads.
    62 
    63 The runtime system needs to create the coroutine's stack and more importantly prepare it for the first resumption. The timing of the creation is non trivial since users both expect to have fully constructed objects once execution enters the coroutine main and to be able to resume the coroutine from the constructor (Obviously we only solve cases where these two statements don't conflict). There are several solutions to this problem but the chosen options effectively forces the design of the coroutine.
    64 
    65 Furthermore, \CFA faces an extra challenge which is that polymorphique routines rely on invisible thunks when casted to non-polymorphic routines and these thunks have function scope. For example, the following code, while looking benign, can run into undefined behaviour because of thunks:
     61One important design challenge for coroutines and threads (shown in section \ref{threads}) is that the runtime system needs to run code after the user-constructor runs. In the case of coroutines, this challenge is simpler since there is no non-determinism from preemption or scheduling. However, the underlying challenge remains the same for coroutines and threads.
     62
      63The runtime system needs to create the coroutine's stack and, more importantly, prepare it for the first resumption. The timing of the creation is non-trivial since users both expect to have fully constructed objects once execution enters the coroutine main and to be able to resume the coroutine from the constructor. As with regular objects, constructors can still leak coroutines before they are ready. There are several solutions to this problem, but the chosen option effectively forces the design of the coroutine.
     64
      65Furthermore, \CFA faces an extra challenge as polymorphic routines create invisible thunks when cast to non-polymorphic routines, and these thunks have function scope. For example, the following code, while looking benign, can run into undefined behaviour because of thunks:
    6666
    6767\begin{cfacode}
     
    7878}
    7979\end{cfacode}
    80 Indeed, the generated C code\footnote{Code trimmed down for brevity} shows that a local thunk is created in order to hold type information:
     80The generated C code\footnote{Code trimmed down for brevity} creates a local thunk to hold type information:
    8181
    8282\begin{ccode}
     
    9595}
    9696\end{ccode}
    97 The problem in the this example is that there is a race condition between the start of the execution of \code{noop} on the other thread and the stack frame of \code{bar} being destroyed. This extra challenge limits which solutions are viable because storing the function pointer for too long only increases the chances that the race will end in undefined behavior; i.e. the stack based thunk being destroyed before it was used.
      97The problem in this example is a race condition between the start of the execution of \code{noop} on the other thread and the destruction of the stack frame of \code{bar}. This extra challenge limits which solutions are viable because storing the function pointer for too long only increases the chances that the race ends in undefined behavior, i.e., the stack-based thunk being destroyed before it is used. This challenge is an extension of challenges that come with second-class routines. Indeed, GCC nested routines have the same limitation: they cannot be passed outside the scope of the functions in which they are declared. The case of coroutines and threads is simply an extension of this problem to multiple call-stacks.
    9898
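     To make the race concrete without the elided listings, the pattern being described is roughly the following sketch, where \code{async} is a hypothetical helper that runs a routine on another thread and the thunk generated for \code{noop} lives in \code{bar}'s stack frame:

     \begin{cfacode}
     // hypothetical helper: schedules func(obj) to run on another thread at some later time
     forall(otype T)
     extern void async( void (*func)(T *), T * obj );

     forall(otype T)
     void noop( T * ) {}

     void bar() {
     	int a;
     	// the compiler generates a stack-local thunk to adapt the polymorphic noop
     	// to the non-polymorphic function pointer expected by async
     	async( noop, &a );
     }	// bar returns here: the thunk (and a) may be destroyed before the other
     	// thread ever invokes it, which is the race described above
     \end{cfacode}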
    9999\subsection{Alternative: Composition}
    100 One solution to this challenge would be to use inheritence,
      100One solution to this challenge would be to use composition/containment:
    101101
    102102\begin{cfacode}
    103103        struct Fibonacci {
    104104              int fn; // used for communication
    105               coroutine c;
     105              coroutine c; //composition
    106106        };
    107107
     
    111111        }
    112112\end{cfacode}
    113 
    114 There are two downsides to this approach. The first, which is relatively minor, is that the base class needs to be made aware of the main routine pointer, regardless of whether we use a parameter or a virtual pointer, this means the coroutine data must be made larger to store a value that is actually a compile time constant (The address of the main routine). The second problem which is both subtle but significant, is that now users can get the initialisation order of there coroutines wrong. Indeed, every field of a \CFA struct will be constructed but in the order of declaration, unless users explicitly write otherwise. This means that users who forget to initialize a the coroutine at the right time may resume the coroutine with an uninitilized object. For coroutines, this is unlikely to be a problem, for threads however, this is a significant problem.
      113There are two downsides to this approach. The first, which is relatively minor, is that the base class needs to be made aware of the main routine pointer, regardless of whether a parameter or a virtual pointer is used; this means the coroutine data must be made larger to store a value that is actually a compile-time constant (the address of the main routine). The second problem, which is both subtle and significant, is that users can now get the initialisation order of their coroutines wrong. Indeed, every field of a \CFA struct is constructed, but in declaration order, unless users explicitly write otherwise. This semantics means that users who forget to initialize the coroutine at the right time may resume it with an uninitialized object. For coroutines, this is unlikely to be a problem; for threads, however, it is a significant problem.
    115114
    116115\subsection{Alternative: Reserved keyword}
     
    122121        };
    123122\end{cfacode}
    124 
    125123This means the compiler can solve problems by injecting code where needed. The downside of this approach is that it makes coroutines a special case in the language. Users who want to extend coroutines or build their own for various reasons can only do so in ways offered by the language. Furthermore, implementing coroutines without language support also displays the power of \CFA.
    126124While this is ultimately the option used for idiomatic \CFA code, coroutines and threads can both be constructed by users without using the language support. The reserved keywords are only present to improve ease of use for the common cases.
     
    128126\subsection{Alternative: Lambda Objects}
    129127
    130 For coroutines as for threads, many implementations are based on routine pointers or function objects\cit. For example, Boost implements coroutines in terms of four functor object types \code{asymmetric_coroutine<>::pull_type}, \code{asymmetric_coroutine<>::push_type}, \code{symmetric_coroutine<>::call_type}, \code{symmetric_coroutine<>::yield_type}. Often, the canonical threading paradigm in languages is based on function pointers, pthread being one of the most well known example. The main problem of these approach is that the thread usage is limited to a generic handle that must otherwise be wrapped in a custom type. Since the custom type is simple to write and \CFA and solves several issues, added support for routine/lambda based coroutines adds very little.
    131 
    132 \subsection{Trait based coroutines}
    133 
    134 Finally the underlying approach, which is the one closest to \CFA idioms, is to use trait-based lazy coroutines. This approach defines a coroutine as \say{anything that \say{satisfies the trait \code{is_coroutine} and is used as a coroutine} is a coroutine}.
     128For coroutines as for threads, many implementations are based on routine pointers or function objects\cit. For example, Boost implements coroutines in terms of four functor object types:
     129\begin{cfacode}
     130asymmetric_coroutine<>::pull_type
     131asymmetric_coroutine<>::push_type
     132symmetric_coroutine<>::call_type
     133symmetric_coroutine<>::yield_type
     134\end{cfacode}
      135Often, the canonical threading paradigm in languages is based on function pointers, pthread being one of the most well-known examples. The main problem with this approach is that the thread usage is limited to a generic handle that must otherwise be wrapped in a custom type. Since the custom type is simple to write in \CFA and solves several issues, added support for routine/lambda-based coroutines adds very little.
     136
      137A variation of this would be to use a simple function pointer in the same way pthread does for threads:
     138\begin{cfacode}
     139void foo( coroutine_t cid, void * arg ) {
     140        int * value = (int *)arg;
     141        //Coroutine body
     142}
     143
     144int main() {
     145        int value = 0;
     146        coroutine_t cid = coroutine_create( &foo, (void*)&value );
     147        coroutine_resume( &cid );
     148}
     149\end{cfacode}
      150This semantic is more common for thread interfaces than coroutines but would work equally well. As discussed in section \ref{threads}, this approach is superseded by static approaches in terms of expressivity.
     151
     152\subsection{Alternative: Trait-based coroutines}
     153
      154Finally, the underlying approach, which is the one closest to \CFA idioms, is to use trait-based lazy coroutines. This approach defines a coroutine as any object that satisfies the trait \code{is_coroutine} and is used as a coroutine.
    135155
    136156\begin{cfacode}
     
    140160};
    141161\end{cfacode}
    142 
    143 This entails that an object is not a coroutine until \code{resume} (or \code{prime}) is called on the object. Correspondingly, any object that is passed to \code{resume} is a coroutine since it must satisfy the \code{is_coroutine} trait to compile. The advantage of this approach is that users can easily create different types of coroutines, for example, changing the memory foot print of a coroutine is trivial when implementing the \code{get_coroutine} routine. The \CFA keyword \code{coroutine} only has the effect of implementing the getter and forward declarations required for users to only have to implement the main routine.
      162This ensures an object is not a coroutine until \code{resume} (or \code{prime}) is called on the object. Correspondingly, any object that is passed to \code{resume} is a coroutine since it must satisfy the \code{is_coroutine} trait to compile. The advantage of this approach is that users can easily create different types of coroutines; for example, changing the memory footprint of a coroutine is trivial when implementing the \code{get_coroutine} routine. The \CFA keyword \code{coroutine} only has the effect of implementing the getter and forward declarations required for users to only have to implement the main routine.
     163
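     The elided trait presumably reads along the following lines, reconstructed from the expansion shown in the table below; the exact member signatures are assumptions:

     \begin{cfacode}
     trait is_coroutine(dtype T) {
     	void main(T * this);                       // coroutine body supplied by the user
     	coroutine_desc * get_coroutine(T * this);  // access to the coroutine descriptor
     };
     \end{cfacode}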
     164\begin{center}
     165\begin{tabular}{c c c}
     166\begin{cfacode}[tabsize=3]
     167coroutine MyCoroutine {
     168        int someValue;
     169};
     170\end{cfacode} & == & \begin{cfacode}[tabsize=3]
     171struct MyCoroutine {
     172        int someValue;
     173        coroutine_desc __cor;
     174};
     175
     176static inline
     177coroutine_desc * get_coroutine(
     178        struct MyCoroutine * this
     179) {
     180        return &this->__cor;
     181}
     182
     183void main(struct MyCoroutine * this);
     184\end{cfacode}
     185\end{tabular}
     186\end{center}
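     Given the expansion above, driving such a coroutine might look like the following sketch; the exact \code{suspend}/\code{resume} signatures are assumed to match the Fibonacci fragments earlier in this file:

     \begin{cfacode}
     void main(MyCoroutine * this) {
     	this->someValue = 42;     // runs on the coroutine's own stack
     	suspend();                // hand control back to the resumer
     	this->someValue += 1;
     }

     void test() {
     	MyCoroutine c;            // no coroutine stack exists yet
     	resume(&c);               // first resume starts main; c.someValue == 42 afterwards
     	resume(&c);               // continue after the suspend; c.someValue == 43
     }
     \end{cfacode}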
     187
    144188
    145189
    146190\section{Thread Interface}\label{threads}
    147 The basic building blocks of multi-threading in \CFA are \glspl{cfathread}. By default these are implemented as \glspl{uthread}, and as such, offer a flexible and lightweight threading interface (lightweight compared to \glspl{kthread}). A thread can be declared using a SUE declaration \code{thread} as follows:
      191The basic building blocks of multi-threading in \CFA are \glspl{cfathread}. Both user and kernel threads are supported, where user threads are the concurrency mechanism and kernel threads are the parallelism mechanism. User threads offer a flexible and lightweight interface. A thread can be declared using a struct declaration \code{thread} as follows:
    148192
    149193\begin{cfacode}
     
    151195\end{cfacode}
    152196
    153 Like for coroutines, the keyword is a thin wrapper arount a \CFA trait:
      197As for coroutines, the keyword is a thin wrapper around a \CFA trait:
    154198
    155199\begin{cfacode}
     
    170214\end{cfacode}
    171215
    172 In this example, threads of type \code{foo} will start there execution in the \code{void main(foo*)} routine which in this case prints \code{"Hello World!"}. While this proposoal encourages this approach which enforces strongly type programming, users may prefer to use the routine based thread semantics for the sake of simplicity. With these semantics it is trivial to write a thread type that takes a function pointer as parameter and executes it on its stack asynchronously
      216In this example, threads of type \code{foo} start execution in the \code{void main(foo*)} routine, which prints \code{"Hello World!"}. While this proposal encourages this approach to enforce strongly-typed programming, users may prefer to use the routine-based thread semantics for the sake of simplicity. With these semantics it is trivial to write a thread type that takes a function pointer as parameter and executes it on its stack asynchronously:
    173217\begin{cfacode}
    174218        typedef void (*voidFunc)(void);
     
    201245void main() {
    202246        World w;
    203         //Thread run forks here
    204 
    205         //Printing "Hello " and "World!" will be run concurrently
     247        //Thread forks here
     248
      249        //Printing "Hello " and "World!" run concurrently
    206250        sout | "Hello " | endl;
    207251
     
    210254\end{cfacode}
    211255
    212 This semantic has several advantages over explicit semantics typesafety is guaranteed, a thread is always started and stopped exaclty once and users cannot make any progamming errors. However, one of the apparent drawbacks of this system is that threads now always form a lattice, that is they are always destroyed in opposite order of construction. While this seems like a significant limitation, existing \CFA semantics can solve this problem. Indeed, using dynamic allocation to create threads will naturally let threads outlive the scope in which the thread was created much like dynamically allocating memory will let objects outlive the scope in which thy were created
      256This semantic has several advantages over explicit semantics: type safety is guaranteed, a thread is always started and stopped exactly once, and users cannot make any programming errors. Another advantage of this semantic is that it naturally scales to multiple threads, meaning basic synchronisation is very simple:
     257
     258\begin{cfacode}
     259        thread MyThread {
     260                //...
     261        };
     262
     263        //main
     264        void main(MyThread* this) {
     265                //...
     266        }
     267
     268        void foo() {
     269                MyThread thrds[10];
     270                //Start 10 threads at the beginning of the scope
     271
     272                DoStuff();
     273
     274                //Wait for the 10 threads to finish
     275        }
     276\end{cfacode}
     277
      278However, one of the apparent drawbacks of this system is that threads always form a lattice, that is, they are always destroyed in the opposite order of construction because of block structure. Nevertheless, storage allocation is not limited to blocks; dynamic allocation can create threads that outlive the scope in which the thread is created, much like dynamically allocating memory lets objects outlive the scope in which they are created:
    213279
    214280\begin{cfacode}
     
    241307        }
    242308\end{cfacode}
    243 
    244 Another advantage of this semantic is that it naturally scale to multiple threads meaning basic synchronisation is very simple
    245 
    246 \begin{cfacode}
    247         thread MyThread {
    248                 //...
    249         };
    250 
    251         //main
    252         void main(MyThread* this) {
    253                 //...
    254         }
    255 
    256         void foo() {
    257                 MyThread thrds[10];
    258                 //Start 10 threads at the beginning of the scope
    259 
    260                 DoStuff();
    261 
    262                 //Wait for the 10 threads to finish
    263         }
    264 \end{cfacode}
  • doc/proposals/concurrency/text/concurrency.tex

    ra724ac1 r1bc9dcb  
    44% ======================================================================
    55% ======================================================================
    6 Several tool can be used to solve concurrency challenges. Since many of these challenges appear with the use of mutable shared-state, some languages and libraries simply disallow mutable shared-state (Erlang~\cite{Erlang}, Haskell~\cite{Haskell}, Akka (Scala)~\cite{Akka}). In these paradigms, interaction among concurrent objects relies on message passing~\cite{Thoth,Harmony,V-Kernel} or other paradigms that closely relate to networking concepts (channels\cit for example). However, in languages that use routine calls as their core abstraction-mechanism, these approaches force a clear distinction between concurrent and non-concurrent paradigms (i.e., message passing versus routine call). This distinction in turn means that, in order to be effective, programmers need to learn two sets of designs patterns. This distinction can be hidden away in library code, effective use of the librairy still has to take both paradigms into account. Approaches based on shared memory are more closely related to non-concurrent paradigms since they often rely on basic constructs like routine calls and shared objects. At a lower level, non-concurrent paradigms are often implemented as locks and atomic operations. Many such mechanisms have been proposed, including semaphores~\cite{Dijkstra68b} and path expressions~\cite{Campbell74}. However, for productivity reasons it is desireable to have a higher-level construct be the core concurrency paradigm~\cite{HPP:Study}. An approach that is worth mentionning because it is gaining in popularity is transactionnal memory~\cite{Dice10}[Check citation]. While this approach is even pursued by system languages like \CC\cit, the performance and feature set is currently too restrictive to add such a paradigm to a language like C or \CC\cit, which is why it was rejected as the core paradigm for concurrency in \CFA. One of the most natural, elegant, and efficient mechanisms for synchronization and communication, especially for shared memory systems, is the \emph{monitor}. Monitors were first proposed by Brinch Hansen~\cite{Hansen73} and later described and extended by C.A.R.~Hoare~\cite{Hoare74}. Many programming languages---e.g., Concurrent Pascal~\cite{ConcurrentPascal}, Mesa~\cite{Mesa}, Modula~\cite{Modula-2}, Turing~\cite{Turing:old}, Modula-3~\cite{Modula-3}, NeWS~\cite{NeWS}, Emerald~\cite{Emerald}, \uC~\cite{Buhr92a} and Java~\cite{Java}---provide monitors as explicit language constructs. In addition, operating-system kernels and device drivers have a monitor-like structure, although they often use lower-level primitives such as semaphores or locks to simulate monitors. For these reasons, this project proposes monitors as the core concurrency-construct.
      6Several tools can be used to solve concurrency challenges. Since many of these challenges appear with the use of mutable shared-state, some languages and libraries simply disallow mutable shared-state (Erlang~\cite{Erlang}, Haskell~\cite{Haskell}, Akka (Scala)~\cite{Akka}). In these paradigms, interaction among concurrent objects relies on message passing~\cite{Thoth,Harmony,V-Kernel} or other paradigms that closely relate to networking concepts (channels\cit for example). However, in languages that use routine calls as their core abstraction-mechanism, these approaches force a clear distinction between concurrent and non-concurrent paradigms (i.e., message passing versus routine call). This distinction in turn means that, in order to be effective, programmers need to learn two sets of design patterns. While this distinction can be hidden away in library code, effective use of the library still has to take both paradigms into account.
     7
      8Approaches based on shared memory are more closely related to non-concurrent paradigms since they often rely on basic constructs like routine calls and shared objects. At the lowest level, concurrent paradigms are implemented as atomic operations and locks. Many such mechanisms have been proposed, including semaphores~\cite{Dijkstra68b} and path expressions~\cite{Campbell74}. However, for productivity reasons it is desirable to have a higher-level construct be the core concurrency paradigm~\cite{HPP:Study}.
     9
      10An approach that is worth mentioning because it is gaining in popularity is transactional memory~\cite{Dice10}[Check citation]. While this approach is even pursued by system languages like \CC\cit, the performance and feature set is currently too restrictive to be the main concurrency paradigm for a general-purpose language, which is why it was rejected as the core paradigm for concurrency in \CFA.
     11
     12One of the most natural, elegant, and efficient mechanisms for synchronization and communication, especially for shared memory systems, is the \emph{monitor}. Monitors were first proposed by Brinch Hansen~\cite{Hansen73} and later described and extended by C.A.R.~Hoare~\cite{Hoare74}. Many programming languages---e.g., Concurrent Pascal~\cite{ConcurrentPascal}, Mesa~\cite{Mesa}, Modula~\cite{Modula-2}, Turing~\cite{Turing:old}, Modula-3~\cite{Modula-3}, NeWS~\cite{NeWS}, Emerald~\cite{Emerald}, \uC~\cite{Buhr92a} and Java~\cite{Java}---provide monitors as explicit language constructs. In addition, operating-system kernels and device drivers have a monitor-like structure, although they often use lower-level primitives such as semaphores or locks to simulate monitors. For these reasons, this project proposes monitors as the core concurrency-construct.
    713
    814\section{Basics}
    9 The basic features that concurrency tools neet to offer is support for mutual-exclusion and synchronisation. Mutual-exclusion is the concept that only a fixed number of threads can access a critical section at any given time, where a critical section is the group of instructions on an associated portion of data that requires the limited access. On the other hand, synchronization enforces relative ordering of execution and synchronization tools are used to guarantee that event \textit{X} always happens before \textit{Y}.
      15Non-determinism requires concurrent systems to offer support for mutual-exclusion and synchronisation. Mutual-exclusion is the concept that only a fixed number of threads can access a critical section at any given time, where a critical section is a group of instructions on an associated portion of data that requires restricted access. On the other hand, synchronization enforces relative ordering of execution, and synchronization tools provide numerous mechanisms to establish timing relationships among threads.
    1016
    1117\subsection{Mutual-Exclusion}
    12 As mentionned above, mutual-exclusion is the guarantee that only a fix number of threads can enter a critical section at once. However, many solution exists for mutual exclusion which vary in terms of performance, flexibility and ease of use. Methods range from low level locks, which are fast and flexible but require significant attention to be correct, to  higher level mutual-exclusion methods, which sacrifice some performance in order to improve ease of use. Often by either guaranteeing some problems cannot occur (e.g. being deadlock free) or by offering a more explicit coupling between data and corresponding critical section. For example, the \CC \code{std::atomic<T>} which offer an easy way to express mutual-exclusion on a restricted set of features (.e.g: reading/writing large types atomically). Another challenge with low level locks is composability. Locks are said to not be composable because it takes careful organising for multiple locks to be used and once while preventing deadlocks. Easing composability is another feature higher-level mutual-exclusion mechanisms often offer.
      18As mentioned above, mutual-exclusion is the guarantee that only a fixed number of threads can enter a critical section at once. However, many solutions exist for mutual exclusion, which vary in terms of performance, flexibility and ease of use. Methods range from low-level locks, which are fast and flexible but require significant attention to be correct, to higher-level mutual-exclusion methods, which sacrifice some performance in order to improve ease of use. Ease of use comes by either guaranteeing some problems cannot occur (e.g., being deadlock free) or by offering a more explicit coupling between data and the corresponding critical section. For example, the \CC \code{std::atomic<T>} offers an easy way to express mutual-exclusion on a restricted set of operations (e.g., reading/writing large types atomically). Another challenge with low-level locks is composability. Locks are not composable because it takes careful organising for multiple locks to be used while preventing deadlocks. Easing composability is another feature higher-level mutual-exclusion mechanisms often offer.
    1319
    1420\subsection{Synchronization}
    15 As for mutual-exclusion, low level synchronisation primitive often offer great performance and good flexibility at the cost of ease of use. Again, higher-level mechanism often simplify usage by adding better coupling between synchronization and data, for example message passing, or offering simple solution to otherwise involved challenges. An example of this is barging. As mentionned above synchronization can be expressed as guaranteeing that event \textit{X} always happens before \textit{Y}. Most of the time synchronisation happens around a critical section, where threads most acquire said critical section in a certain order. However, it may also be desired to be able to guarantee that event \textit{Z} does not occur between \textit{X} and \textit{Y}. This is called barging, where event \textit{X} tries to effect event \textit{Y} but anoter thread races to grab the critical section and emits \textit{Z} before \textit{Y}. Preventing or detecting barging is an involved challenge with low-level locks, which can be made much easier by higher-level constructs.
      21As for mutual-exclusion, low-level synchronisation primitives often offer good performance and good flexibility at the cost of ease of use. Again, higher-level mechanisms often simplify usage by adding better coupling between synchronization and data, e.g., message passing, or by offering simple solutions to otherwise involved challenges. An example of this is barging. As mentioned above, synchronization can be expressed as guaranteeing that event \textit{X} always happens before \textit{Y}. Most of the time, synchronisation happens around a critical section, where threads must acquire said critical section in a certain order. However, it may also be desirable to guarantee that event \textit{Z} does not occur between \textit{X} and \textit{Y}. This is called barging, where event \textit{X} tries to effect event \textit{Y} but another thread races to grab the critical section and emits \textit{Z} before \textit{Y}. Preventing or detecting barging is an involved challenge with low-level locks, which can be made much easier by higher-level constructs.
    1622
    1723% ======================================================================
     
    2026% ======================================================================
    2127% ======================================================================
    22 A monitor is a set of routines that ensure mutual exclusion when accessing shared state. This concept is generally associated with Object-Oriented Languages like Java~\cite{Java} or \uC~\cite{uC++book} but does not strictly require OOP semantics. The only requirements is the ability to declare a handle to a shared object and a set of routines that act on it :
      28A monitor is a set of routines that ensure mutual exclusion when accessing shared state. This concept is generally associated with Object-Oriented Languages like Java~\cite{Java} or \uC~\cite{uC++book} but does not strictly require OO semantics. The only requirement is the ability to declare a handle to a shared object and a set of routines that act on it:
    2329\begin{cfacode}
    2430        typedef /*some monitor type*/ monitor;
     
    3642% ======================================================================
    3743% ======================================================================
    38 The above monitor example displays some of the intrinsic characteristics. Indeed, it is necessary to use pass-by-reference over pass-by-value for monitor routines. This semantics is important because at their core, monitors are implicit mutual-exclusion objects (locks), and these objects cannot be copied. Therefore, monitors are implicitly non-copyable.
    39 
    40 Another aspect to consider is when a monitor acquires its mutual exclusion. For example, a monitor may need to be passed through multiple helper routines that do not acquire the monitor mutual-exclusion on entry. Pass through can be both generic helper routines (\code{swap}, \code{sort}, etc.) or specific helper routines like the following to implement an atomic counter :
     44The above monitor example displays some of the intrinsic characteristics. First, it is necessary to use pass-by-reference over pass-by-value for monitor routines. This semantics is important because at their core, monitors are implicit mutual-exclusion objects (locks), and these objects cannot be copied. Therefore, monitors are implicitly non-copyable objects.
     45
      46Another aspect to consider is when a monitor acquires its mutual exclusion. For example, a monitor may need to be passed through multiple helper routines that do not acquire the monitor mutual-exclusion on entry. Pass through can occur for generic helper routines (\code{swap}, \code{sort}, etc.) or specific helper routines like the following to implement an atomic counter:
    4147
    4248\begin{cfacode}
     
    4652        size_t ++?(counter_t & mutex this); //increment
    4753
    48         //need for mutex is platform dependent here
     54        //need for mutex is platform dependent
    4955        void ?{}(size_t * this, counter_t & mutex cnt); //conversion
    5056\end{cfacode}
     
    5258Here, the constructor (\code{?\{\}}) uses the \code{nomutex} keyword to signify that it does not acquire the monitor mutual-exclusion when constructing. This semantics is because an object not yet constructed should never be shared and therefore does not require mutual exclusion. The prefix increment operator uses \code{mutex} to protect the incrementing process from race conditions. Finally, there is a conversion operator from \code{counter_t} to \code{size_t}. This conversion may or may not require the \code{mutex} keyword depending on whether or not reading a \code{size_t} is an atomic operation.
    5359
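     For readers without the elided lines, one plausible completion of this counter is the following sketch; the monitor declaration and the routine bodies are reconstructions rather than the thesis text:

     \begin{cfacode}
     monitor counter_t {
     	size_t value;
     };

     void ?{}(counter_t & nomutex this) {   // no lock: the object is not yet shared
     	this.value = 0;
     }

     size_t ++?(counter_t & mutex this) {   // increment while holding the monitor lock
     	return ++this.value;
     }

     //need for mutex is platform dependent
     void ?{}(size_t * this, counter_t & mutex cnt) {
     	*this = cnt.value;
     }
     \end{cfacode}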
    54 Having both \code{mutex} and \code{nomutex} keywords could be argued to be redundant based on the meaning of a routine having neither of these keywords. For example, given a routine without qualifiers \code{void foo(counter_t & this)} then it is reasonable that it should default to the safest option \code{mutex}. On the other hand, the option of having routine \code{void foo(counter_t & this)} mean \code{nomutex} is unsafe by default and may easily cause subtle errors. In fact \code{nomutex} is the "normal" parameter behaviour, with the \code{nomutex} keyword effectively stating explicitly that "this routine is not special". Another alternative is to make having exactly one of these keywords mandatory, which would provide the same semantics but without the ambiguity of supporting routines neither keyword. Mandatory keywords would also have the added benefit of being self-documented but at the cost of extra typing. While there are several benefits to mandatory keywords, they do bring a few challenges. Mandatory keywords in \CFA would imply that the compiler must know without a doubt wheter or not a parameter is a monitor or not. Since \CFA relies heavily on traits as an abstraction mechanism, the distinction between a type that is a monitor and a type that looks like a monitor can become blurred. For this reason, \CFA only has the \code{mutex} keyword.
      60Having both \code{mutex} and \code{nomutex} keywords is potentially redundant, depending on the meaning of a routine having neither of these keywords. For example, given a routine without qualifiers, \code{void foo(counter_t & this)}, it is reasonable that it default to the safest option, \code{mutex}, whereas defaulting to \code{nomutex} is unsafe and may cause subtle errors. In fact, \code{nomutex} is the "normal" parameter behaviour, with the \code{nomutex} keyword effectively stating explicitly that "this routine is not special". Another alternative is to make having exactly one of these keywords mandatory, which would provide the same semantics but without the ambiguity of supporting routines with neither keyword. Mandatory keywords would also have the added benefit of being self-documenting, but at the cost of extra typing. While there are several benefits to mandatory keywords, they do bring a few challenges. Mandatory keywords in \CFA would imply that the compiler must know without a doubt whether or not a parameter is a monitor. Since \CFA relies heavily on traits as an abstraction mechanism, the distinction between a type that is a monitor and a type that looks like a monitor can become blurred. For this reason, \CFA only has the \code{mutex} keyword.
    5561
    5662
     
    6066int f2(const monitor & mutex m);
    6167int f3(monitor ** mutex m);
    62 int f4(monitor *[] mutex m);
     68int f4(monitor * mutex m []);
    6369int f5(graph(monitor*) & mutex m);
    6470\end{cfacode}
     
    6874int f1(monitor & mutex m);   //Okay : recommanded case
    6975int f2(monitor * mutex m);   //Okay : could be an array but probably not
    70 int f3(monitor [] mutex m);  //Not Okay : Array of unkown length
      76int f3(monitor mutex m []);  //Not Okay : Array of unknown length
    7177int f4(monitor ** mutex m);  //Not Okay : Could be an array
    72 int f5(monitor *[] mutex m); //Not Okay : Array of unkown length
      78int f5(monitor * mutex m []); //Not Okay : Array of unknown length
    7379\end{cfacode}
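     As a usage note on the recommended form, a call through a \code{mutex} parameter implicitly acquires the monitor's mutual exclusion for the duration of the call; a minimal sketch, reusing the placeholder \code{monitor} type from above:

     \begin{cfacode}
     monitor m;

     void user() {
     	f1( m );   // mutual exclusion on m is acquired when the call enters f1 and released when it returns
     }
     \end{cfacode}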
    7480
  • doc/proposals/concurrency/text/intro.tex

    ra724ac1 r1bc9dcb  
    33% ======================================================================
    44
    5 This proposal provides a minimal concurrency API that is simple, efficient and can be reused to build higher-level features. The simplest possible concurrency core is a thread and a lock but this low-level approach is hard to master. An easier approach for users is to support higher-level constructs as the basis of the concurrency in \CFA. Indeed, for highly productive parallel programming, high-level approaches are much more popular~\cite{HPP:Study}. Examples are task based, message passing and implicit threading.
      5This proposal provides a minimal concurrency API that is simple, efficient and can be reused to build higher-level features. The simplest possible concurrency system is a thread and a lock, but this low-level approach is hard to master. An easier approach for users is to support higher-level constructs as the basis of concurrency in \CFA. Indeed, for highly productive parallel programming, high-level approaches are much more popular~\cite{HPP:Study}. Examples are task-based, message-passing and implicit-threading models. Therefore, a high-level approach is adopted in \CFA.
    66
    7 There are actually two problems that need to be solved in the design of concurrency for a programming language: which concurrency tools are available to the users and which parallelism tools are available. While these two concepts are often seen together, they are in fact distinct concepts that require different sorts of tools~\cite{Buhr05a}. Concurrency tools need to handle mutual exclusion and synchronization, while parallelism tools are about performance, cost and resource utilization.
     7There are actually two problems that need to be solved in the design of concurrency for a programming language: which concurrency and which parallelism tools are available to the users. While these two concepts are often combined, they are in fact distinct concepts that require different tools~\cite{Buhr05a}. Concurrency tools need to handle mutual exclusion and synchronization, while parallelism tools are about performance, cost and resource utilization.
  • doc/proposals/concurrency/thesis.tex

    ra724ac1 r1bc9dcb  
    7777\fancyhf{}
    7878\cfoot{\thepage}
    79 \rfoot{v\input{build/version}}
     79\rfoot{v\input{version}}
    8080
    8181%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
     
    9494
    9595\input{intro}
     96
     97\input{cforall}
    9698
    9799\input{basics}
  • doc/user/user.tex

    ra724ac1 r1bc9dcb  
    1111%% Created On       : Wed Apr  6 14:53:29 2016
    1212%% Last Modified By : Peter A. Buhr
    13 %% Last Modified On : Fri Jun  2 10:07:51 2017
    14 %% Update Count     : 2128
     13%% Last Modified On : Fri Jun 16 12:00:01 2017
     14%% Update Count     : 2433
    1515%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
    1616
     
    4343\usepackage[pagewise]{lineno}
    4444\renewcommand{\linenumberfont}{\scriptsize\sffamily}
    45 \input{common}                                          % bespoke macros used in the document
     45\input{common}                                          % common CFA document macros
    4646\usepackage[dvips,plainpages=false,pdfpagelabels,pdfpagemode=UseNone,colorlinks=true,pagebackref=true,linkcolor=blue,citecolor=blue,urlcolor=blue,pagebackref=true,breaklinks=true]{hyperref}
    4747\usepackage{breakurl}
     
    110110\renewcommand{\subsectionmark}[1]{\markboth{\thesubsection\quad #1}{\thesubsection\quad #1}}
    111111\pagenumbering{roman}
    112 \linenumbers                                            % comment out to turn off line numbering
     112%\linenumbers                                            % comment out to turn off line numbering
    113113
    114114\maketitle
     
    454454the type suffixes ©U©, ©L©, etc. may start with an underscore ©1_U©, ©1_ll© or ©1.0E10_f©.
    455455\end{enumerate}
    456 It is significantly easier to read and enter long constants when they are broken up into smaller groupings (most cultures use comma or period among digits for the same purpose).
     456It is significantly easier to read and enter long constants when they are broken up into smaller groupings (many cultures use comma and/or period among digits for the same purpose).
    457457This extension is backwards compatible, matches with the use of underscore in variable names, and appears in \Index*{Ada} and \Index*{Java} 8.
    458458
     
    464464\begin{cfa}
    465465int ®`®otype®`® = 3;                    §\C{// make keyword an identifier}§
    466 double ®`®choose®`® = 3.5;
    467 \end{cfa}
    468 Programs can be converted easily by enclosing keyword identifiers in backquotes, and the backquotes can be removed later when the identifier name is changed to a non-keyword name.
     466double ®`®forall®`® = 3.5;
     467\end{cfa}
     468Existing C programs with keyword clashes can be converted by enclosing keyword identifiers in backquotes, and eventually the identifier name can be changed to a non-keyword name.
    469469\VRef[Figure]{f:InterpositionHeaderFile} shows how clashes in C header files (see~\VRef{s:StandardHeaders}) can be handled using preprocessor \newterm{interposition}: ©#include_next© and ©-I filename©:
    470470
     
    473473// include file uses the CFA keyword "otype".
    474474#if ! defined( otype )                  §\C{// nesting ?}§
    475 #define otype `otype`
     475#define otype ®`®otype®`®               §\C{// make keyword an identifier}§
    476476#define __CFA_BFD_H__
    477477#endif // ! otype
     
    497497\begin{tabular}{@{}ll@{}}
    498498\begin{cfa}
    499 int *x[5]
     499int * x[5]
    500500\end{cfa}
    501501&
     
    508508For example, a routine returning a \Index{pointer} to an array of integers is defined and used in the following way:
    509509\begin{cfa}
    510 int (*f())[5] {...};                    §\C{
    511 ... (*f())[3] += 1;
      510int ®(*®f®())[®5®]® {...};                              §\C{definition}§
     511 ... ®(*®f®())[®3®]® += 1;                              §\C{usage}§
    512512\end{cfa}
    513513Essentially, the return type is wrapped around the routine name in successive layers (like an \Index{onion}).
     
    516516\CFA provides its own type, variable and routine declarations, using a different syntax.
    517517The new declarations place qualifiers to the left of the base type, while C declarations place qualifiers to the right of the base type.
    518 In the following example, \R{red} is for the base type and \B{blue} is for the qualifiers.
      518In the following example, \R{red} is the base type and \B{blue} the qualifiers.
    519519The \CFA declarations move the qualifiers to the left of the base type, \ie move the blue to the left of the red, while the qualifiers have the same meaning but are ordered left to right to specify a variable's type.
    520520\begin{quote2}
     
    534534\end{tabular}
    535535\end{quote2}
    536 The only exception is bit field specification, which always appear to the right of the base type.
     536The only exception is \Index{bit field} specification, which always appear to the right of the base type.
    537537% Specifically, the character ©*© is used to indicate a pointer, square brackets ©[©\,©]© are used to represent an array or function return value, and parentheses ©()© are used to indicate a routine parameter.
    538538However, unlike C, \CFA type declaration tokens are distributed across all variables in the declaration list.
     
    583583\begin{cfa}
    584584int z[ 5 ];
    585 char *w[ 5 ];
    586 double (*v)[ 5 ];
     585char * w[ 5 ];
     586double (* v)[ 5 ];
    587587struct s {
    588588        int f0:3;
    589         int *f1;
    590         int *f2[ 5 ]
     589        int * f1;
     590        int * f2[ 5 ]
    591591};
    592592\end{cfa}
     
    637637\begin{cfa}
    638638int extern x[ 5 ];
    639 const int static *y;
     639const int static * y;
    640640\end{cfa}
    641641&
     
    658658\begin{cfa}
    659659y = (®int *®)x;
    660 i = sizeof(®int *[ 5 ]®);
     660i = sizeof(®int * [ 5 ]®);
    661661\end{cfa}
    662662\end{tabular}
     
    672672C provides a \newterm{pointer type};
    673673\CFA adds a \newterm{reference type}.
    674 These types may be derived from a object or routine type, called the \newterm{referenced type}.
     674These types may be derived from an object or routine type, called the \newterm{referenced type}.
    675675Objects of these types contain an \newterm{address}, which is normally a location in memory, but may also address memory-mapped registers in hardware devices.
    676676An integer constant expression with the value 0, or such an expression cast to type ©void *©, is called a \newterm{null-pointer constant}.\footnote{
     
    729729
    730730A \Index{pointer}/\Index{reference} object is a generalization of an object variable-name, \ie a mutable address that can point to more than one memory location during its lifetime.
    731 (Similarly, an integer variable can contain multiple integer literals during its lifetime versus an integer constant representing a single literal during its lifetime, and like a variable name, may not occupy storage as the literal is embedded directly into instructions.)
     731(Similarly, an integer variable can contain multiple integer literals during its lifetime versus an integer constant representing a single literal during its lifetime, and like a variable name, may not occupy storage if the literal is embedded directly into instructions.)
    732732Hence, a pointer occupies memory to store its current address, and the pointer's value is loaded by dereferencing, \eg:
    733733\begin{quote2}
     
    758758\begin{cfa}
    759759p1 = p2;                                                §\C{// p1 = p2\ \ rather than\ \ *p1 = *p2}§
    760 p2 = p1 + x;                                    §\C{// p2 = p1 + x\ \ rather than\ \ *p1 = *p1 + x}§
     760p2 = p1 + x;                                    §\C{// p2 = p1 + x\ \ rather than\ \ *p2 = *p1 + x}§
    761761\end{cfa}
    762762even though the assignment to ©p2© is likely incorrect, and the programmer probably meant:
     
    765765®*®p2 = ®*®p1 + x;                              §\C{// pointed-to value assignment / operation}§
    766766\end{cfa}
    767 The C semantics works well for situations where manipulation of addresses is the primary meaning and data is rarely accessed, such as storage management (©malloc©/©free©).
     767The C semantics work well for situations where manipulation of addresses is the primary meaning and data is rarely accessed, such as storage management (©malloc©/©free©).
    768768
    769769However, in most other situations, the pointed-to value is requested more often than the pointer address.
     
    799799For a \CFA reference type, the cancellation on the left-hand side of assignment leaves the reference as an address (\Index{lvalue}):
    800800\begin{cfa}
    801 (&®*®)r1 = &x;                                  §\C{// (\&*) cancel giving address of r1 not variable pointed-to by r1}§
     801(&®*®)r1 = &x;                                  §\C{// (\&*) cancel giving address in r1 not variable pointed-to by r1}§
    802802\end{cfa}
    803803Similarly, the address of a reference can be obtained for assignment or computation (\Index{rvalue}):
    804804\begin{cfa}
    805 (&(&®*®)®*®)r3 = &(&®*®)r2;             §\C{// (\&*) cancel giving address of r2, (\&(\&*)*) cancel giving address of r3}§
     805(&(&®*®)®*®)r3 = &(&®*®)r2;             §\C{// (\&*) cancel giving address in r2, (\&(\&*)*) cancel giving address in r3}§
    806806\end{cfa}
    807807Cancellation\index{cancellation!pointer/reference}\index{pointer!cancellation} works to arbitrary depth.
     
    824824As for a pointer type, a reference type may have qualifiers:
    825825\begin{cfa}
    826 const int cx = 5;                               §\C{// cannot change cx;}§
    827 const int & cr = cx;                    §\C{// cannot change what cr points to}§
    828 ®&®cr = &cx;                                    §\C{// can change cr}§
    829 cr = 7;                                                 §\C{// error, cannot change cx}§
    830 int & const rc = x;                             §\C{// must be initialized}§
    831 ®&®rc = &x;                                             §\C{// error, cannot change rc}§
    832 const int & const crc = cx;             §\C{// must be initialized}§
    833 crc = 7;                                                §\C{// error, cannot change cx}§
    834 ®&®crc = &cx;                                   §\C{// error, cannot change crc}§
    835 \end{cfa}
    836 Hence, for type ©& const©, there is no pointer assignment, so ©&rc = &x© is disallowed, and \emph{the address value cannot be the null pointer unless an arbitrary pointer is coerced into the reference}:
    837 \begin{cfa}
    838 int & const cr = *0;                    §\C{// where 0 is the int * zero}§
    839 \end{cfa}
    840 Note, constant reference-types do not prevent addressing errors because of explicit storage-management:
     826const int cx = 5;                                       §\C{// cannot change cx;}§
     827const int & cr = cx;                            §\C{// cannot change what cr points to}§
     828®&®cr = &cx;                                            §\C{// can change cr}§
     829cr = 7;                                                         §\C{// error, cannot change cx}§
     830int & const rc = x;                                     §\C{// must be initialized}§
     831®&®rc = &x;                                                     §\C{// error, cannot change rc}§
     832const int & const crc = cx;                     §\C{// must be initialized}§
     833crc = 7;                                                        §\C{// error, cannot change cx}§
     834®&®crc = &cx;                                           §\C{// error, cannot change crc}§
     835\end{cfa}
     836Hence, for type ©& const©, there is no pointer assignment, so ©&rc = &x© is disallowed, and \emph{the address value cannot be the null pointer unless an arbitrary pointer is coerced\index{coercion} into the reference}:
     837\begin{cfa}
     838int & const cr = *0;                            §\C{// where 0 is the int * zero}§
     839\end{cfa}
     840Note, constant reference-types do not prevent \Index{addressing errors} because of explicit storage-management:
    841841\begin{cfa}
    842842int & const cr = *malloc();
    843843cr = 5;
    844 delete &cr;
    845 cr = 7;                                                 §\C{// unsound pointer dereference}§
    846 \end{cfa}
    847 
    848 Finally, the position of the ©const© qualifier \emph{after} the pointer/reference qualifier causes confuse for C programmers.
     844free( &cr );
     845cr = 7;                                                         §\C{// unsound pointer dereference}§
     846\end{cfa}
     847
      848The position of the ©const© qualifier \emph{after} the pointer/reference qualifier causes confusion for C programmers.
     849849The ©const© qualifier cannot be moved before the pointer/reference qualifier for C-style declarations;
    850 \CFA-style declarations attempt to address this issue:
     850\CFA-style declarations (see \VRef{s:Declarations}) attempt to address this issue:
    851851\begin{quote2}
    852852\begin{tabular}{@{}l@{\hspace{3em}}l@{}}
     
    863863\end{tabular}
    864864\end{quote2}
    865 where the \CFA declaration is read left-to-right (see \VRef{s:Declarations}).
     865where the \CFA declaration is read left-to-right.
     866
     867Finally, like pointers, references are usable and composable with other type operators and generators.
     868\begin{cfa}
     869int w, x, y, z, & ar[3] = { x, y, z }; §\C{// initialize array of references}§
     870&ar[1] = &w;                                            §\C{// change reference array element}§
     871typeof( ar[1] ) p;                                      §\C{// (gcc) is int, i.e., the type of referenced object}§
     872typeof( &ar[1] ) q;                                     §\C{// (gcc) is int \&, i.e., the type of reference}§
     873sizeof( ar[1] ) == sizeof( int );       §\C{// is true, i.e., the size of referenced object}§
      874sizeof( &ar[1] ) == sizeof( int * );   §\C{// is true, i.e., the size of a reference}§
     875\end{cfa}
    866876
    867877In contrast to \CFA reference types, \Index*[C++]{\CC{}}'s reference types are all ©const© references, preventing changes to the reference address, so only value assignment is possible, which eliminates half of the \Index{address duality}.
      878Also, \CC does not allow \Index{array}s\index{array!reference} of references.\footnote{
      879The reason for disallowing arrays of references is unknown, but possibly comes from references being ethereal (like a textual macro), and hence, replaceable by the referent object.}
    868880\Index*{Java}'s reference types to objects (all Java objects are on the heap) are like C pointers, which always manipulate the address, and there is no (bit-wise) object assignment, so objects are explicitly cloned by shallow or deep copying, which eliminates half of the address duality.
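For example, a minimal sketch (using only the reference operations shown above) of the \CFA duality that \CC's ©const©-only references preclude:
\begin{cfa}
int x = 1, y = 2;
int & r = x;
r = y;                                          §\C{// value assignment: x becomes 2 (the only option in \CC)}§
®&®r = &y;                                      §\C{// address assignment: rebind r to y (no \CC equivalent)}§
\end{cfa}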
     881
     882
     883\subsection{Initialization}
    869884
    870885\Index{Initialization} is different than \Index{assignment} because initialization occurs on the empty (uninitialized) storage on an object, while assignment occurs on possibly initialized storage of an object.
     
    872887Because the object being initialized has no value, there is only one meaningful semantics with respect to address duality: it must mean address as there is no pointed-to value.
    873888In contrast, the left-hand side of assignment has an address that has a duality.
    874 Therefore, for pointer/reference initialization, the initializing value must be an address (\Index{lvalue}) not a value (\Index{rvalue}).
    875 \begin{cfa}
    876 int * p = &x;                           §\C{// must have address of x}§
    877 int & r = x;                            §\C{// must have address of x}§
    878 \end{cfa}
    879 Therefore, it is superfluous to require explicitly taking the address of the initialization object, even though the type is incorrect.
    880 Hence, \CFA allows ©r© to be assigned ©x© because it infers a reference for ©x©, by implicitly inserting a address-of operator, ©&©, and it is an error to put an ©&© because the types no longer match.
    881 Unfortunately, C allows ©p© to be assigned with ©&x© or ©x©, by value, but most compilers warn about the latter assignment as being potentially incorrect.
    882 (\CFA extends pointer initialization so a variable name is automatically referenced, eliminating the unsafe assignment.)
      889Therefore, for pointer/reference initialization, the initializing value must be an address, not a value.
     890\begin{cfa}
     891int * p = &x;                                           §\C{// assign address of x}§
     892®int * p = x;®                                          §\C{// assign value of x}§
     893int & r = x;                                            §\C{// must have address of x}§
     894\end{cfa}
      895Like the previous example with C pointer-arithmetic, it is unlikely that assigning the value of ©x© to a pointer is meaningful (again, a warning is usually given).
     896Therefore, for safety, this context requires an address, so it is superfluous to require explicitly taking the address of the initialization object, even though the type is incorrect.
     897Note, this is strictly a convenience and safety feature for a programmer.
      898Hence, \CFA allows ©r© to be assigned ©x© because it infers a reference for ©x©, by implicitly inserting an address-of operator, ©&©, and it is an error to put an explicit ©&© because the types no longer match due to the implicit dereference.
     899Unfortunately, C allows ©p© to be assigned with ©&x© (address) or ©x© (value), but most compilers warn about the latter assignment as being potentially incorrect.
    883900Similarly, when a reference type is used for a parameter/return type, the call-site argument does not require a reference operator for the same reason.
    884901\begin{cfa}
    885 int & f( int & r );                             §\C{// reference parameter and return}§
    886 z = f( x ) + f( y );                    §\C{// reference operator added, temporaries needed for call results}§
     902int & f( int & r );                                     §\C{// reference parameter and return}§
     903z = f( x ) + f( y );                            §\C{// reference operator added, temporaries needed for call results}§
    887904\end{cfa}
    888905Within routine ©f©, it is possible to change the argument by changing the corresponding parameter, and parameter ©r© can be locally reassigned within ©f©.
     
    892909z = temp1 + temp2;
    893910\end{cfa}
    894 This implicit referencing is crucial for reducing the syntactic burden for programmers when using references;
     911This \Index{implicit referencing} is crucial for reducing the syntactic burden for programmers when using references;
    895912otherwise references have the same syntactic  burden as pointers in these contexts.
    896913
     
    899916void f( ®const® int & cr );
    900917void g( ®const® int * cp );
    901 f( 3 );                   g( &3 );
    902 f( x + y );             g( &(x + y) );
     918f( 3 );                   g( ®&®3 );
     919f( x + y );             g( ®&®(x + y) );
    903920\end{cfa}
    904921Here, the compiler passes the address to the literal 3 or the temporary for the expression ©x + y©, knowing the argument cannot be changed through the parameter.
    905 (The ©&© is necessary for the pointer-type parameter to make the types match, and is a common requirement for a C programmer.)
      922The ©&© before the constant/expression for the pointer-type parameter (©g©) is a \CFA extension necessary to make the types match, and is a common requirement before a variable in C (\eg ©scanf©).
      923Importantly, ©&3© may not be equal to ©&3© when the references occur across calls, because the temporaries may be different on each call.
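For instance, a small sketch reusing ©f© above, where the temporary for the literal may differ between calls:
\begin{cfa}
f( 3 );  f( 3 );                                §\C{// \&3 in the first call may differ from \&3 in the second}§
\end{cfa}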
     924
    906925\CFA \emph{extends} this semantics to a mutable pointer/reference parameter, and the compiler implicitly creates the necessary temporary (copying the argument), which is subsequently pointed-to by the reference parameter and can be changed.\footnote{
    907926If whole program analysis is possible, and shows the parameter is not assigned, \ie it is ©const©, the temporary is unnecessary.}
     
    909928void f( int & r );
    910929void g( int * p );
    911 f( 3 );                   g( &3 );              §\C{// compiler implicit generates temporaries}§
    912 f( x + y );             g( &(x + y) );  §\C{// compiler implicit generates temporaries}§
      930f( 3 );                   g( ®&®3 );            §\C{// compiler implicitly generates temporaries}§
      931f( x + y );             g( ®&®(x + y) );        §\C{// compiler implicitly generates temporaries}§
    913932\end{cfa}
    914933Essentially, there is an implicit \Index{rvalue} to \Index{lvalue} conversion in this case.\footnote{
     
    917936
    918937%\CFA attempts to handle pointers and references in a uniform, symmetric manner.
    919 However, C handles routine objects in an inconsistent way.
    920 A routine object is both a pointer and a reference (particle and wave).
     938Finally, C handles \Index{routine object}s in an inconsistent way.
     939A routine object is both a pointer and a reference (\Index{particle and wave}).
    921940\begin{cfa}
    922941void f( int i );
    923 void (*fp)( int );
    924 fp = f;                                                 §\C{// reference initialization}§
    925 fp = &f;                                                §\C{// pointer initialization}§
    926 fp = *f;                                                §\C{// reference initialization}§
    927 fp(3);                                                  §\C{// reference invocation}§
    928 (*fp)(3);                                               §\C{// pointer invocation}§
    929 \end{cfa}
    930 A routine object is best described by a ©const© reference:
    931 \begin{cfa}
    932 const void (&fr)( int ) = f;
    933 fr = ...                                                §\C{// error, cannot change code}§
    934 &fr = ...;                                              §\C{// changing routine reference}§
    935 fr( 3 );                                                §\C{// reference call to f}§
    936 (*fr)(3);                                               §\C{// error, incorrect type}§
     942void (*fp)( int );                                      §\C{// routine pointer}§
     943fp = f;                                                         §\C{// reference initialization}§
     944fp = &f;                                                        §\C{// pointer initialization}§
     945fp = *f;                                                        §\C{// reference initialization}§
     946fp(3);                                                          §\C{// reference invocation}§
     947(*fp)(3);                                                       §\C{// pointer invocation}§
     948\end{cfa}
      949While C's treatment of routine objects is similar to inferring a reference type in initialization contexts, the examples are assignments not initializations, and all forms of assignment are allowed (©f©, ©&f©, ©*f©) without regard for type.
     950Instead, a routine object should be referenced by a ©const© reference:
     951\begin{cfa}
     952®const® void (®&® fr)( int ) = f;       §\C{// routine reference}§
     953fr = ...                                                        §\C{// error, cannot change code}§
     954&fr = ...;                                                      §\C{// changing routine reference}§
     955fr( 3 );                                                        §\C{// reference call to f}§
     956(*fr)(3);                                                       §\C{// error, incorrect type}§
    937957\end{cfa}
    938958because the value of the routine object is a routine literal, \ie the routine code is normally immutable during execution.\footnote{
     
    940960\CFA allows this additional use of references for routine objects in an attempt to give a more consistent meaning for them.
    941961
    942 This situation is different from inferring with reference type being used ...
    943 
     962
     963\subsection{Address-of Semantics}
     964
     965In C, ©&E© is an rvalue for any expression ©E©.
     966\CFA extends the ©&© (address-of) operator as follows:
     967\begin{itemize}
     968\item
      969if ©R© is an \Index{rvalue} of type ©T &$_1$...&$_r$© where $r \ge 1$ references (©&© symbols) then ©&R© has type ©T ®*®&$_{\color{red}2}$...&$_{\color{red}r}$©, \ie ©T© pointer with $r-1$ references (©&© symbols).
     970
     971\item
     972if ©L© is an \Index{lvalue} of type ©T &$_1$...&$_l$© where $l \ge 0$ references (©&© symbols) then ©&L© has type ©T ®*®&$_{\color{red}1}$...&$_{\color{red}l}$©, \ie ©T© pointer with $l$ references (©&© symbols).
     973\end{itemize}
     974The following example shows the first rule applied to different \Index{rvalue} contexts:
     975\begin{cfa}
     976int x, * px, ** ppx, *** pppx, **** ppppx;
     977int & rx = x, && rrx = rx, &&& rrrx = rrx ;
     978x = rrrx;               // rrrx is an lvalue with type int &&& (equivalent to x)
     979px = &rrrx;             // starting from rrrx, &rrrx is an rvalue with type int *&&& (&x)
     980ppx = &&rrrx;   // starting from &rrrx, &&rrrx is an rvalue with type int **&& (&rx)
     981pppx = &&&rrrx; // starting from &&rrrx, &&&rrrx is an rvalue with type int ***& (&rrx)
     982ppppx = &&&&rrrx; // starting from &&&rrrx, &&&&rrrx is an rvalue with type int **** (&rrrx)
     983\end{cfa}
     984The following example shows the second rule applied to different \Index{lvalue} contexts:
     985\begin{cfa}
     986int x, * px, ** ppx, *** pppx;
     987int & rx = x, && rrx = rx, &&& rrrx = rrx ;
     988rrrx = 2;               // rrrx is an lvalue with type int &&& (equivalent to x)
     989&rrrx = px;             // starting from rrrx, &rrrx is an rvalue with type int *&&& (rx)
     990&&rrrx = ppx;   // starting from &rrrx, &&rrrx is an rvalue with type int **&& (rrx)
     991&&&rrrx = pppx; // starting from &&rrrx, &&&rrrx is an rvalue with type int ***& (rrrx)
     992\end{cfa}
     993
     994
     995\subsection{Conversions}
     996
     997C provides a basic implicit conversion to simplify variable usage:
     998\begin{enumerate}
     999\setcounter{enumi}{-1}
     1000\item
     1001lvalue to rvalue conversion: ©cv T© converts to ©T©, which allows implicit variable dereferencing.
     1002\begin{cfa}
     1003int x;
     1004x + 1;                  // lvalue variable (int) converts to rvalue for expression
     1005\end{cfa}
     1006An rvalue has no type qualifiers (©cv©), so the lvalue qualifiers are dropped.
     1007\end{enumerate}
      1008\CFA provides three new implicit conversions for reference types to simplify reference usage.
     1009\begin{enumerate}
     1010\item
     1011reference to rvalue conversion: ©cv T &© converts to ©T©, which allows implicit reference dereferencing.
     1012\begin{cfa}
     1013int x, &r = x, f( int p );
     1014x = ®r® + f( ®r® );  // lvalue reference converts to rvalue
     1015\end{cfa}
     1016An rvalue has no type qualifiers (©cv©), so the reference qualifiers are dropped.
     1017
     1018\item
     1019lvalue to reference conversion: \lstinline[deletekeywords={lvalue}]@lvalue-type cv1 T@ converts to ©cv2 T &©, which allows implicitly converting variables to references.
     1020\begin{cfa}
      1021int x, &r = ®x®, f( int & p ); // lvalue variable (int) converts to reference (int &)
      1022f( ®x® );               // lvalue variable (int) converts to reference (int &)
     1023\end{cfa}
     1024Conversion can restrict a type, where ©cv1© $\le$ ©cv2©, \eg passing an ©int© to a ©const volatile int &©, which has low cost.
     1025Conversion can expand a type, where ©cv1© $>$ ©cv2©, \eg passing a ©const volatile int© to an ©int &©, which has high cost (\Index{warning});
     1026furthermore, if ©cv1© has ©const© but not ©cv2©, a temporary variable is created to preserve the immutable lvalue.
     1027
     1028\item
     1029rvalue to reference conversion: ©T© converts to ©cv T &©, which allows binding references to temporaries.
     1030\begin{cfa}
     1031int x, & f( int & p );
     1032f( ®x + 3® );   // rvalue parameter (int) implicitly converts to lvalue temporary reference (int &)
     1033®&f®(...) = &x; // rvalue result (int &) implicitly converts to lvalue temporary reference (int &)
     1034\end{cfa}
      1035In both cases, modifications to the temporary are inaccessible (\Index{warning}).
     1036Conversion expands the temporary-type with ©cv©, which is low cost since the temporary is inaccessible.
     1037\end{enumerate}
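As a combined sketch (routine ©g© is illustrative only), the three conversions can appear together:
\begin{cfa}
int g( int & p );
int x = 1, & r = x;
x = ®r® + 1;                                    §\C{// 1: reference converts to rvalue}§
g( ®x® );                                       §\C{// 2: lvalue variable converts to reference}§
g( ®x + 3® );                                   §\C{// 3: rvalue converts to reference via a temporary (warning)}§
\end{cfa}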
    9441038
    9451039
    9461040\begin{comment}
    947 \section{References}
    948 
    949 By introducing references in parameter types, users are given an easy way to pass a value by reference, without the need for NULL pointer checks.
    950 In structures, a reference can replace a pointer to an object that should always have a valid value.
    951 When a structure contains a reference, all of its constructors must initialize the reference and all instances of this structure must initialize it upon definition.
    952 
    953 The syntax for using references in \CFA is the same as \CC with the exception of reference initialization.
    954 Use ©&© to specify a reference, and access references just like regular objects, not like pointers (use dot notation to access fields).
    955 When initializing a reference, \CFA uses a different syntax which differentiates reference initialization from assignment to a reference.
    956 The ©&© is used on both sides of the expression to clarify that the address of the reference is being set to the address of the variable to which it refers.
    957 
    958 
    9591041From: Richard Bilson <rcbilson@gmail.com>
    9601042Date: Wed, 13 Jul 2016 01:58:58 +0000
     
    11181200\section{Routine Definition}
    11191201
    1120 \CFA also supports a new syntax for routine definition, as well as ISO C and K\&R routine syntax.
     1202\CFA also supports a new syntax for routine definition, as well as \Celeven and K\&R routine syntax.
    11211203The point of the new syntax is to allow returning multiple values from a routine~\cite{Galletly96,CLU}, \eg:
    11221204\begin{cfa}
     
    11381220in both cases the type is assumed to be void as opposed to old style C defaults of int return type and unknown parameter types, respectively, as in:
    11391221\begin{cfa}
    1140 [§\,§] g();                                             §\C{// no input or output parameters}§
    1141 [ void ] g( void );                             §\C{// no input or output parameters}§
     1222[§\,§] g();                                                     §\C{// no input or output parameters}§
     1223[ void ] g( void );                                     §\C{// no input or output parameters}§
    11421224\end{cfa}
    11431225
     
    11571239\begin{cfa}
    11581240typedef int foo;
    1159 int f( int (* foo) );                   §\C{// foo is redefined as a parameter name}§
     1241int f( int (* foo) );                           §\C{// foo is redefined as a parameter name}§
    11601242\end{cfa}
     11611243The string ``©int (* foo)©'' declares a C-style named-parameter of type pointer to an integer (the parentheses are superfluous), while the same string declares a \CFA-style unnamed parameter of type routine returning integer with an unnamed parameter of type pointer to foo.
     
    11651247C-style declarations can be used to declare parameters for \CFA style routine definitions, \eg:
    11661248\begin{cfa}
    1167 [ int ] f( * int, int * );              §\C{// returns an integer, accepts 2 pointers to integers}§
    1168 [ * int, int * ] f( int );              §\C{// returns 2 pointers to integers, accepts an integer}§
     1249[ int ] f( * int, int * );                      §\C{// returns an integer, accepts 2 pointers to integers}§
     1250[ * int, int * ] f( int );                      §\C{// returns 2 pointers to integers, accepts an integer}§
    11691251\end{cfa}
    11701252The reason for allowing both declaration styles in the new context is for backwards compatibility with existing preprocessor macros that generate C-style declaration-syntax, as in:
    11711253\begin{cfa}
    11721254#define ptoa( n, d ) int (*n)[ d ]
    1173 int f( ptoa( p, 5 ) ) ...               §\C{// expands to int f( int (*p)[ 5 ] )}§
    1174 [ int ] f( ptoa( p, 5 ) ) ...   §\C{// expands to [ int ] f( int (*p)[ 5 ] )}§
     1255int f( ptoa( p, 5 ) ) ...                       §\C{// expands to int f( int (*p)[ 5 ] )}§
     1256[ int ] f( ptoa( p, 5 ) ) ...           §\C{// expands to [ int ] f( int (*p)[ 5 ] )}§
    11751257\end{cfa}
    11761258Again, programmers are highly encouraged to use one declaration form or the other, rather than mixing the forms.
     
    11941276        int z;
    11951277        ... x = 0; ... y = z; ...
    1196         ®return;® §\C{// implicitly return x, y}§
     1278        ®return;®                                                       §\C{// implicitly return x, y}§
    11971279}
    11981280\end{cfa}
     
    12041286[ int x, int y ] f() {
    12051287        ...
    1206 } §\C{// implicitly return x, y}§
     1288}                                                                               §\C{// implicitly return x, y}§
    12071289\end{cfa}
    12081290In this case, the current values of ©x© and ©y© are returned to the calling routine just as if a ©return© had been encountered.
     1291
     1292Named return values may be used in conjunction with named parameter values;
     1293specifically, a return and parameter can have the same name.
     1294\begin{cfa}
      1295[ int x, int y ] f( int x, int y ) {
     1296        ...
     1297}                                                                               §\C{// implicitly return x, y}§
     1298\end{cfa}
     1299This notation allows the compiler to eliminate temporary variables in nested routine calls.
     1300\begin{cfa}
      1301[ int x, int y ] f( int x, int y );    §\C{// prototype declaration}§
     1302int a, b;
     1303[a, b] = f( f( f( a, b ) ) );
     1304\end{cfa}
      1305While the compiler normally ignores parameter names in prototype declarations, here they are used to eliminate temporary return-values by inferring that the results of each call are the inputs of the next call, and ultimately, the left-hand side of the assignment.
     1306Hence, even without the body of routine ©f© (separate compilation), it is possible to perform a global optimization across routine calls.
     1307The compiler warns about naming inconsistencies between routine prototype and definition in this case, and behaviour is \Index{undefined} if the programmer is inconsistent.
    12091308
    12101309
     
    12141313as well, parameter names are optional, \eg:
    12151314\begin{cfa}
    1216 [ int x ] f ();                                 §\C{// returning int with no parameters}§
    1217 [ * int ] g (int y);                    §\C{// returning pointer to int with int parameter}§
    1218 [ ] h (int,char);                               §\C{// returning no result with int and char parameters}§
    1219 [ * int,int ] j (int);                  §\C{// returning pointer to int and int, with int parameter}§
     1315[ int x ] f ();                                                 §\C{// returning int with no parameters}§
     1316[ * int ] g (int y);                                    §\C{// returning pointer to int with int parameter}§
     1317[ ] h ( int, char );                                    §\C{// returning no result with int and char parameters}§
     1318[ * int, int ] j ( int );                               §\C{// returning pointer to int and int, with int parameter}§
    12201319\end{cfa}
    12211320This syntax allows a prototype declaration to be created by cutting and pasting source text from the routine definition header (or vice versa).
     
    12251324\multicolumn{1}{c@{\hspace{3em}}}{\textbf{\CFA}}        & \multicolumn{1}{c}{\textbf{C}}        \\
    12261325\begin{cfa}
    1227 [ int ] f(int), g;
     1326[ int ] f( int ), g;
    12281327\end{cfa}
    12291328&
    12301329\begin{cfa}
    1231 int f(int), g(int);
     1330int f( int ), g( int );
    12321331\end{cfa}
    12331332\end{tabular}
     
    12351334Declaration qualifiers can only appear at the start of a \CFA routine declaration,\footref{StorageClassSpecifier} \eg:
    12361335\begin{cfa}
    1237 extern [ int ] f (int);
    1238 static [ int ] g (int);
     1336extern [ int ] f ( int );
     1337static [ int ] g ( int );
    12391338\end{cfa}
    12401339
     
    12441343The syntax for pointers to \CFA routines specifies the pointer name on the right, \eg:
    12451344\begin{cfa}
    1246 * [ int x ] () fp;                      §\C{// pointer to routine returning int with no parameters}§
    1247 * [ * int ] (int y) gp;         §\C{// pointer to routine returning pointer to int with int parameter}§
    1248 * [ ] (int,char) hp;            §\C{// pointer to routine returning no result with int and char parameters}§
    1249 * [ * int,int ] (int) jp;       §\C{// pointer to routine returning pointer to int and int, with int parameter}§
     1345* [ int x ] () fp;                                              §\C{// pointer to routine returning int with no parameters}§
     1346* [ * int ] (int y) gp;                                 §\C{// pointer to routine returning pointer to int with int parameter}§
     1347* [ ] (int,char) hp;                                    §\C{// pointer to routine returning no result with int and char parameters}§
     1348* [ * int,int ] ( int ) jp;                             §\C{// pointer to routine returning pointer to int and int, with int parameter}§
    12501349\end{cfa}
    12511350While parameter names are optional, \emph{a routine name cannot be specified};
    12521351for example, the following is incorrect:
    12531352\begin{cfa}
    1254 * [ int x ] f () fp;            §\C{// routine name "f" is not allowed}§
     1353* [ int x ] f () fp;                                    §\C{// routine name "f" is not allowed}§
    12551354\end{cfa}
    12561355
     
    12581357\section{Named and Default Arguments}
    12591358
    1260 Named and default arguments~\cite{Hardgrave76}\footnote{
     1359Named\index{named arguments}\index{arguments!named} and default\index{default arguments}\index{arguments!default} arguments~\cite{Hardgrave76}\footnote{
    12611360Francez~\cite{Francez77} proposed a further extension to the named-parameter passing style, which specifies what type of communication (by value, by reference, by name) the argument is passed to the routine.}
    12621361are two mechanisms to simplify routine call.
     
    14391538        int ;                                   §\C{// disallowed, unnamed field}§
    14401539        int *;                                  §\C{// disallowed, unnamed field}§
    1441         int (*)(int);                   §\C{// disallowed, unnamed field}§
     1540        int (*)( int );                 §\C{// disallowed, unnamed field}§
    14421541};
    14431542\end{cfa}
     
    15621661}
    15631662int main() {
    1564         * [int](int) fp = foo();        §\C{// int (*fp)(int)}§
     1663        * [int]( int ) fp = foo();      §\C{// int (*fp)( int )}§
    15651664        sout | fp( 3 ) | endl;
    15661665}
     
    26832782
    26842783
    2685 \subsection{Constructors and Destructors}
     2784\section{Constructors and Destructors}
    26862785
    26872786\CFA supports C initialization of structures, but it also adds constructors for more advanced initialization.
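For example, a brief sketch (the type ©Pair© and its constructor are illustrative only) of a \CFA constructor alongside its implicit call:
\begin{cfa}
struct Pair { int x, y; };
void ?{}( Pair * this, int x, int y ) {         §\C{// constructor}§
	this->x = x;  this->y = y;
}
Pair p = { 3, 4 };                              §\C{// implicitly calls the constructor}§
\end{cfa}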
     
    30143113
    30153114
     3115\begin{comment}
    30163116\section{Generics}
    30173117
     
    32203320        }
    32213321\end{cfa}
     3322\end{comment}
    32223323
    32233324
     
    32793380        Complex *p3 = new(0.5, 1.0); // allocate + 2 param constructor
    32803381}
    3281 
    32823382\end{cfa}
    32833383
     
    32913391
    32923392
     3393\begin{comment}
    32933394\subsection{Unsafe C Constructs}
    32943395
     
    33013402The exact set of unsafe C constructs that will be disallowed in \CFA has not yet been decided, but is sure to include pointer arithmetic, pointer casting, etc.
    33023403Once the full set is decided, the rules will be listed here.
     3404\end{comment}
    33033405
    33043406
    33053407\section{Concurrency}
    3306 
    3307 Today's processors for nearly all use cases, ranging from embedded systems to large cloud computing servers, are composed of multiple cores, often heterogeneous.
    3308 As machines grow in complexity, it becomes more difficult for a program to make the most use of the hardware available.
    3309 \CFA includes built-in concurrency features to enable high performance and improve programmer productivity on these multi-/many-core machines.
    33103408
    33113409Concurrency support in \CFA is implemented on top of a highly efficient runtime system of light-weight, M:N, user level threads.
     
    33143412This enables a very familiar interface to all programmers, even those with no parallel programming experience.
    33153413It also allows the compiler to do static type checking of all communication, a very important safety feature.
    3316 This controlled communication with type safety has some similarities with channels in \Index*{Go}, and can actually implement
    3317 channels exactly, as well as create additional communication patterns that channels cannot.
     3414This controlled communication with type safety has some similarities with channels in \Index*{Go}, and can actually implement channels exactly, as well as create additional communication patterns that channels cannot.
     33193416Mutex objects, called monitors, are used to provide mutual exclusion within an object and synchronization across concurrent threads.
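For example, a minimal sketch (the monitor ©Counter© and routine ©inc© are illustrative only) of a monitor with a ©mutex© routine:
\begin{cfa}
monitor Counter { int cnt; };
void inc( Counter * mutex this ) {              §\C{// mutex parameter acquires/releases the monitor's implicit lock}§
	this->cnt += 1;
}
\end{cfa}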
    33193416
    3320 Three new keywords are added to support these features:
    3321 
    3322 monitor creates a structure with implicit locking when accessing fields
    3323 
    3324 mutex implies use of a monitor requiring the implicit locking
    3325 
    3326 task creates a type with implicit locking, separate stack, and a thread
     3417\begin{figure}
     3418\begin{cfa}
     3419#include <fstream>
     3420#include <coroutine>
     3421
     3422coroutine Fibonacci {
     3423        int fn;                                                         §\C{// used for communication}§
     3424};
     3425void ?{}( Fibonacci * this ) {
     3426        this->fn = 0;
     3427}
     3428void main( Fibonacci * this ) {
     3429        int fn1, fn2;                                           §\C{// retained between resumes}§
     3430        this->fn = 0;                                           §\C{// case 0}§
     3431        fn1 = this->fn;
     3432        suspend();                                                      §\C{// return to last resume}§
     3433
     3434        this->fn = 1;                                           §\C{// case 1}§
     3435        fn2 = fn1;
     3436        fn1 = this->fn;
     3437        suspend();                                                      §\C{// return to last resume}§
     3438
     3439        for ( ;; ) {                                            §\C{// general case}§
     3440                this->fn = fn1 + fn2;
     3441                fn2 = fn1;
     3442                fn1 = this->fn;
     3443                suspend();                                              §\C{// return to last resume}§
     3444        } // for
     3445}
     3446int next( Fibonacci * this ) {
     3447        resume( this );                                         §\C{// transfer to last suspend}§
     3448        return this->fn;
     3449}
     3450int main() {
     3451        Fibonacci f1, f2;
     3452        for ( int i = 1; i <= 10; i += 1 ) {
     3453                sout | next( &f1 ) | ' ' | next( &f2 ) | endl;
     3454        } // for
     3455}
     3456\end{cfa}
     3457\caption{Fibonacci Coroutine}
     3458\label{f:FibonacciCoroutine}
     3459\end{figure}
     3460
     3461
     3462\subsection{Coroutine}
     3463
     3464\Index{Coroutines} are the precursor to tasks.
     3465\VRef[Figure]{f:FibonacciCoroutine} shows a coroutine that computes the \Index*{Fibonacci} numbers.
    33273466
    33283467
     
    33393478\end{cfa}
    33403479
     3480\begin{figure}
     3481\begin{cfa}
     3482#include <fstream>
     3483#include <kernel>
     3484#include <monitor>
     3485#include <thread>
     3486
     3487monitor global_t {
     3488        int value;
     3489};
     3490
     3491void ?{}(global_t * this) {
     3492        this->value = 0;
     3493}
     3494
     3495static global_t global;
     3496
     3497void increment3( global_t * mutex this ) {
     3498        this->value += 1;
     3499}
     3500void increment2( global_t * mutex this ) {
     3501        increment3( this );
     3502}
     3503void increment( global_t * mutex this ) {
     3504        increment2( this );
     3505}
     3506
     3507thread MyThread {};
     3508
     3509void main( MyThread* this ) {
     3510        for(int i = 0; i < 1_000_000; i++) {
     3511                increment( &global );
     3512        }
     3513}
     3514int main(int argc, char* argv[]) {
     3515        processor p;
     3516        {
     3517                MyThread f[4];
     3518        }
     3519        sout | global.value | endl;
     3520}
     3521\end{cfa}
     3522\caption{Atomic-Counter Monitor}
      3523\label{f:AtomicCounterMonitor}
     3524\end{figure}
     3525
     3526\begin{comment}
    33413527Since a monitor structure includes an implicit locking mechanism, it does not make sense to copy a monitor;
    33423528it is always passed by reference.
     
    33853571}
    33863572\end{cfa}
     3573\end{comment}
    33873574
    33883575
     
    33923579A task provides mutual exclusion like a monitor, and also has its own execution state and a thread of control.
    33933580Similar to a monitor, a task is defined like a structure:
     3581
     3582\begin{figure}
     3583\begin{cfa}
     3584#include <fstream>
     3585#include <kernel>
     3586#include <stdlib>
     3587#include <thread>
     3588
     3589thread First  { signal_once * lock; };
     3590thread Second { signal_once * lock; };
     3591
     3592void ?{}( First * this, signal_once* lock ) { this->lock = lock; }
     3593void ?{}( Second * this, signal_once* lock ) { this->lock = lock; }
     3594
     3595void main( First * this ) {
     3596        for ( int i = 0; i < 10; i += 1 ) {
     3597                sout | "First : Suspend No." | i + 1 | endl;
     3598                yield();
     3599        }
     3600        signal( this->lock );
     3601}
     3602
     3603void main( Second * this ) {
     3604        wait( this->lock );
     3605        for ( int i = 0; i < 10; i += 1 ) {
     3606                sout | "Second : Suspend No." | i + 1 | endl;
     3607                yield();
     3608        }
     3609}
     3610
     3611int main( void ) {
     3612        signal_once lock;
     3613        sout | "User main begin" | endl;
     3614        {
     3615                processor p;
     3616                {
     3617                        First  f = { &lock };
     3618                        Second s = { &lock };
     3619                }
     3620        }
     3621        sout | "User main end" | endl;
     3622}
     3623\end{cfa}
     3624\caption{Simple Tasks}
     3625\label{f:SimpleTasks}
     3626\end{figure}
     3627
     3628
     3629\begin{comment}
    33943630\begin{cfa}
    33953631type Adder = task {
     
    34453681\end{cfa}
    34463682
    3447 
    34483683\subsection{Cooperative Scheduling}
    34493684
     
    35583793}
    35593794\end{cfa}
    3560 
    3561 
     3795\end{comment}
     3796
     3797
     3798\begin{comment}
    35623799\section{Modules and Packages }
    35633800
    3564 \begin{comment}
    35653801High-level encapsulation is useful for organizing code into reusable units, and accelerating compilation speed.
    35663802\CFA provides a convenient mechanism for creating, building and sharing groups of functionality that enhances productivity and improves compile time.
     
    42264462
    42274463
     4464\begin{comment}
    42284465\subsection[Comparing Key Features of CFA]{Comparing Key Features of \CFA}
    42294466
     
    46034840
    46044841
    4605 \begin{comment}
    46064842\subsubsection{Modules / Packages}
    46074843
     
    46834919}
    46844920\end{cfa}
    4685 \end{comment}
    46864921
    46874922
     
    48445079
    48455080\subsection{Summary of Language Comparison}
    4846 
    4847 
    4848 \subsubsection[C++]{\CC}
     5081\end{comment}
     5082
     5083
     5084\subsection[C++]{\CC}
    48495085
    48505086\Index*[C++]{\CC{}} is a general-purpose programming language.
     
    48675103
    48685104
    4869 \subsubsection{Go}
     5105\subsection{Go}
    48705106
    48715107\Index*{Go}, also commonly referred to as golang, is a programming language developed at Google in 2007 [.].
     
    48835119
    48845120
    4885 \subsubsection{Rust}
     5121\subsection{Rust}
    48865122
    48875123\Index*{Rust} is a general-purpose, multi-paradigm, compiled programming language developed by Mozilla Research.
     
    48975133
    48985134
    4899 \subsubsection{D}
     5135\subsection{D}
    49005136
    49015137The \Index*{D} programming language is an object-oriented, imperative, multi-paradigm system programming
     
    50095245\item[Rationale:] keywords added to implement new semantics of \CFA.
    50105246\item[Effect on original feature:] change to semantics of well-defined feature. \\
    5011 Any ISO C programs using these keywords as identifiers are invalid \CFA programs.
     5247Any \Celeven programs using these keywords as identifiers are invalid \CFA programs.
    50125248\item[Difficulty of converting:] keyword clashes are accommodated by syntactic transformations using the \CFA backquote escape-mechanism (see~\VRef{s:BackquoteIdentifiers}).
    50135249\item[How widely used:] clashes among new \CFA keywords and existing identifiers are rare.
     
    52295465hence, names in these include files are not mangled\index{mangling!name} (see~\VRef{s:Interoperability}).
    52305466All other C header files must be explicitly wrapped in ©extern "C"© to prevent name mangling.
     5467For \Index*[C++]{\CC{}}, the name-mangling issue is handled implicitly because most C header-files are augmented with checks for preprocessor variable ©__cplusplus©, which adds appropriate ©extern "C"© qualifiers.
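For example, a sketch (the header name is illustrative only) of explicitly wrapping a C header:
\begin{cfa}
extern "C" {
#include <sys/socket.h>                         §\C{// C declarations, names not mangled}§
}
\end{cfa}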
    52315468
    52325469
     
    53115548}
    53125549
    5313 // §\CFA§ safe initialization/copy
     5550// §\CFA§ safe initialization/copy, i.e., implicit size specification
    53145551forall( dtype T | sized(T) ) T * memset( T * dest, char c );§\indexc{memset}§
    53155552forall( dtype T | sized(T) ) T * memcpy( T * dest, const T * src );§\indexc{memcpy}§
     
    54215658\leavevmode
    54225659\begin{cfa}[aboveskip=0pt,belowskip=0pt]
    5423 forall( otype T | { int ?<?( T, T ); } )
    5424 T min( T t1, T t2 );§\indexc{min}§
    5425 
    5426 forall( otype T | { int ?>?( T, T ); } )
    5427 T max( T t1, T t2 );§\indexc{max}§
    5428 
    5429 forall( otype T | { T min( T, T ); T max( T, T ); } )
    5430 T clamp( T value, T min_val, T max_val );§\indexc{clamp}§
    5431 
    5432 forall( otype T )
    5433 void swap( T * t1, T * t2 );§\indexc{swap}§
     5660forall( otype T | { int ?<?( T, T ); } ) T min( T t1, T t2 );§\indexc{min}§
     5661forall( otype T | { int ?>?( T, T ); } ) T max( T t1, T t2 );§\indexc{max}§
     5662forall( otype T | { T min( T, T ); T max( T, T ); } ) T clamp( T value, T min_val, T max_val );§\indexc{clamp}§
     5663forall( otype T ) void swap( T * t1, T * t2 );§\indexc{swap}§
    54345664\end{cfa}
    54355665
  • doc/working/exception/translate.c

    ra724ac1 r1bc9dcb  
    22 *
    33 * Note that these are not final. Names, syntax and the exact translation
    4  * will be updated. The first section is the shared definitions we will have
    5  * to have access to where the translations are preformed.
     4 * will be updated. The first section is the shared definitions, not generated
     5 * by the local translations but used by the translated code.
     6 *
     7 * Most of these exist only after translation (in C code). The first (the
     8 * exception type) has to exist in Cforall code so that it can be used
     9 * directly in Cforall. The two __throw_* functions might have wrappers in
     10 * Cforall, but the underlying functions should probably be C. struct
     11 * stack_exception_data has to exist inside of the coroutine data structures
     12 * and so should be compiled as they are.
    613 */
    714
    8 // Currently it is a typedef for int, but later it will be the root of the
    9 // hierarchy and so have to be public.
     15// Currently it is a typedef for int, but later it will be a new type.
    1016typedef int exception;
    1117
    12 // These might be given simpler names and made public.
    1318void __throw_terminate(exception except) __attribute__((noreturn));
    1419void __throw_resume(exception except);
     
    4954
    5055__throw_resume(exception_instance);
     56
     57
     58
     59// Rethrows (inside matching handlers):
     60"Cforall"
     61
     62throw;
     63
     64resume;
     65
     66"C"
     67
     68__rethrow_terminate();
     69
     70return false;
    5171
    5272
     
    232252                }
    233253                void finally1() {
    234                         // (Finally, because of timing, also work for resume.)
     254                        // Finally, because of timing, also works for resume.
     255                        // However this might not actually be better in any way.
    235256                        __try_resume_cleanup();
    236257
  • src/Common/PassVisitor.h

    ra724ac1 r1bc9dcb  
    5454        virtual void visit( BranchStmt *branchStmt ) override final;
    5555        virtual void visit( ReturnStmt *returnStmt ) override final;
     56        virtual void visit( ThrowStmt *throwStmt ) override final;
    5657        virtual void visit( TryStmt *tryStmt ) override final;
    5758        virtual void visit( CatchStmt *catchStmt ) override final;
     
    139140        virtual Statement* mutate( BranchStmt *branchStmt ) override final;
    140141        virtual Statement* mutate( ReturnStmt *returnStmt ) override final;
     142        virtual Statement* mutate( ThrowStmt *throwStmt ) override final;
    141143        virtual Statement* mutate( TryStmt *returnStmt ) override final;
    142144        virtual Statement* mutate( CatchStmt *catchStmt ) override final;
     
    230232        std::list< Statement* > *       get_afterStmts () { return stmtsToAddAfter_impl ( pass, 0); }
    231233        bool visit_children() { bool* skip = skip_children_impl(pass, 0); return ! (skip && *skip); }
    232 };
     234        void reset_visit() { bool* skip = skip_children_impl(pass, 0); if(skip) *skip = false; }
     235
     236        guard_value_impl init_guard() {
     237                guard_value_impl guard;
     238                auto at_cleanup = at_cleanup_impl(pass, 0);
     239                if( at_cleanup ) {
     240                        *at_cleanup = [&guard]( cleanup_func_t && func, void* val ) {
     241                                guard.push( std::move( func ), val );
     242                        };
     243                }
     244                return guard;
     245        }
     246};
     247
     248template<typename pass_type, typename T>
     249void GuardValue( pass_type * pass, T& val ) {
     250        pass->at_cleanup( [ val ]( void * newVal ) {
     251                * static_cast< T * >( newVal ) = val;
     252        }, static_cast< void * >( & val ) );
     253}
     254
     255class WithTypeSubstitution {
     256protected:
     257        WithTypeSubstitution() = default;
     258        ~WithTypeSubstitution() = default;
     259
     260public:
     261        TypeSubstitution * env;
     262};
     263
     264class WithStmtsToAdd {
     265protected:
     266        WithStmtsToAdd() = default;
     267        ~WithStmtsToAdd() = default;
     268
     269public:
     270        std::list< Statement* > stmtsToAddBefore;
     271        std::list< Statement* > stmtsToAddAfter;
     272};
     273
     274class WithShortCircuiting {
     275protected:
     276        WithShortCircuiting() = default;
     277        ~WithShortCircuiting() = default;
     278
     279public:
     280        bool skip_children;
     281};
     282
     283class WithScopes {
     284protected:
     285        WithScopes() = default;
     286        ~WithScopes() = default;
     287
     288public:
     289        at_cleanup_t at_cleanup;
     290
     291        template< typename T >
     292        void GuardValue( T& val ) {
     293                at_cleanup( [ val ]( void * newVal ) {
     294                        * static_cast< T * >( newVal ) = val;
     295                }, static_cast< void * >( & val ) );
     296        }
     297};
     298
    233299
    234300#include "PassVisitor.impl.h"
  • src/Common/PassVisitor.impl.h

    ra724ac1 r1bc9dcb  
    11#pragma once
    22
    3 #define VISIT_START( node )  \
    4         call_previsit( node ); \
    5         if( visit_children() ) { \
    6 
    7 #define VISIT_END( node )            \
    8         }                              \
    9         return call_postvisit( node ); \
    10 
    11 #define MUTATE_START( node )  \
    12         call_premutate( node ); \
    13         if( visit_children() ) { \
     3#define VISIT_START( node )                     \
     4        __attribute__((unused))                   \
     5        const auto & guard = init_guard();        \
     6        call_previsit( node );                    \
     7        if( visit_children() ) {                  \
     8                reset_visit();                      \
     9
     10#define VISIT_END( node )                       \
     11        }                                         \
     12        call_postvisit( node );                   \
     13
     14#define MUTATE_START( node )                    \
     15        __attribute__((unused))                   \
     16        const auto & guard = init_guard();        \
     17        call_premutate( node );                   \
     18        if( visit_children() ) {                  \
     19                reset_visit();                      \
    1420
    1521#define MUTATE_END( type, node )                \
     
    1824
    1925
    20 #define VISIT_BODY( node )    \
    21         VISIT_START( node );  \
    22         Visitor::visit( node ); \
    23         VISIT_END( node ); \
     26#define VISIT_BODY( node )        \
     27        VISIT_START( node );        \
     28        Visitor::visit( node );     \
     29        VISIT_END( node );          \
    2430
    2531
     
    389395
    390396//--------------------------------------------------------------------------
     397// ThrowStmt
     398
     399template< typename pass_type >
     400void PassVisitor< pass_type >::visit( ThrowStmt * node ) {
     401        VISIT_BODY( node );
     402}
     403
     404template< typename pass_type >
     405Statement * PassVisitor< pass_type >::mutate( ThrowStmt * node ) {
     406        MUTATE_BODY( Statement, node );
     407}
     408
     409//--------------------------------------------------------------------------
    391410// TryStmt
    392411template< typename pass_type >
  • src/Common/PassVisitor.proto.h

    ra724ac1 r1bc9dcb  
    11#pragma once
     2
     3typedef std::function<void( void * )> cleanup_func_t;
     4
     5class guard_value_impl {
     6public:
     7        guard_value_impl() = default;
     8
     9        ~guard_value_impl() {
     10                while( !cleanups.empty() ) {
     11                        auto& cleanup = cleanups.top();
     12                        cleanup.func( cleanup.val );
     13                        cleanups.pop();
     14                }
     15        }
     16
     17        void push( cleanup_func_t && func, void* val ) {
     18                cleanups.emplace( std::move(func), val );
     19        }
     20
     21private:
     22        struct cleanup_t {
     23                cleanup_func_t func;
     24                void * val;
     25
     26                cleanup_t( cleanup_func_t&& func, void * val ) : func(func), val(val) {}
     27        };
     28
     29        std::stack< cleanup_t > cleanups;
     30};
     31
     32typedef std::function< void( cleanup_func_t, void * ) > at_cleanup_t;
    233
    334//-------------------------------------------------------------------------------------------------------------------------------------------------------------------------
     
    73104#define FIELD_PTR( type, name )                                                                                                        \
    74105template<typename pass_type>                                                                                                           \
    75 static inline auto name##_impl( pass_type& pass, __attribute__((unused)) int unused ) -> decltype( &pass.name ) { return &pass.name; }  \
     106static inline auto name##_impl( pass_type& pass, __attribute__((unused)) int unused ) -> decltype( &pass.name ) { return &pass.name; } \
    76107                                                                                                                                       \
    77108template<typename pass_type>                                                                                                           \
     
    82113FIELD_PTR( std::list< Statement* >, stmtsToAddAfter  )
    83114FIELD_PTR( bool, skip_children )
     115FIELD_PTR( at_cleanup_t, at_cleanup )
  • src/InitTweak/GenInit.cc

    ra724ac1 r1bc9dcb  
    1616#include <stack>
    1717#include <list>
     18
     19#include "InitTweak.h"
    1820#include "GenInit.h"
    19 #include "InitTweak.h"
     21
     22#include "Common/PassVisitor.h"
     23
    2024#include "SynTree/Declaration.h"
    21 #include "SynTree/Type.h"
    2225#include "SynTree/Expression.h"
    23 #include "SynTree/Statement.h"
    2426#include "SynTree/Initializer.h"
    2527#include "SynTree/Mutator.h"
     28#include "SynTree/Statement.h"
     29#include "SynTree/Type.h"
     30
    2631#include "SymTab/Autogen.h"
    2732#include "SymTab/Mangler.h"
     33
     34#include "GenPoly/DeclMutator.h"
    2835#include "GenPoly/PolyMutator.h"
    29 #include "GenPoly/DeclMutator.h"
    3036#include "GenPoly/ScopedSet.h"
     37
    3138#include "ResolvExpr/typeops.h"
    3239
     
    3744        }
    3845
    39         class ReturnFixer final : public GenPoly::PolyMutator {
     46        class ReturnFixer : public WithStmtsToAdd, public WithScopes {
    4047          public:
    4148                /// consistently allocates a temporary variable for the return value
     
    4451                static void makeReturnTemp( std::list< Declaration * > &translationUnit );
    4552
    46                 typedef GenPoly::PolyMutator Parent;
    47                 using Parent::mutate;
    48                 virtual DeclarationWithType * mutate( FunctionDecl *functionDecl ) override;
    49                 virtual Statement * mutate( ReturnStmt * returnStmt ) override;
     53                void premutate( FunctionDecl *functionDecl );
     54                void premutate( ReturnStmt * returnStmt );
    5055
    5156          protected:
     
    129134
    130135        void ReturnFixer::makeReturnTemp( std::list< Declaration * > & translationUnit ) {
    131                 ReturnFixer fixer;
     136                PassVisitor<ReturnFixer> fixer;
    132137                mutateAll( translationUnit, fixer );
    133138        }
    134139
    135         Statement *ReturnFixer::mutate( ReturnStmt *returnStmt ) {
     140        void ReturnFixer::premutate( ReturnStmt *returnStmt ) {
    136141                std::list< DeclarationWithType * > & returnVals = ftype->get_returnVals();
    137142                assert( returnVals.size() == 0 || returnVals.size() == 1 );
     
    144149                        construct->get_args().push_back( new AddressExpr( new VariableExpr( returnVals.front() ) ) );
    145150                        construct->get_args().push_back( returnStmt->get_expr() );
    146                         stmtsToAdd.push_back(new ExprStmt(noLabels, construct));
     151                        stmtsToAddBefore.push_back(new ExprStmt(noLabels, construct));
    147152
    148153                        // return the retVal object
    149154                        returnStmt->set_expr( new VariableExpr( returnVals.front() ) );
    150155                } // if
    151                 return returnStmt;
    152         }
    153 
    154         DeclarationWithType* ReturnFixer::mutate( FunctionDecl *functionDecl ) {
    155                 ValueGuard< FunctionType * > oldFtype( ftype );
    156                 ValueGuard< std::string > oldFuncName( funcName );
     156        }
     157
     158        void ReturnFixer::premutate( FunctionDecl *functionDecl ) {
     159                GuardValue( ftype );
     160                GuardValue( funcName );
    157161
    158162                ftype = functionDecl->get_functionType();
    159163                funcName = functionDecl->get_name();
    160                 return Parent::mutate( functionDecl );
    161164        }
    162165