Timestamp:
Apr 17, 2018, 12:01:09 PM (7 years ago)
Author:
Thierry Delisle <tdelisle@…>
Branches:
ADT, aaron-thesis, arm-eh, ast-experimental, cleanup-dtors, deferred_resn, demangler, enum, forall-pointer-decay, jacob/cs343-translation, jenkins-sandbox, master, new-ast, new-ast-unique-expr, new-env, no_list, persistent-indexer, pthread-emulation, qualifiedEnum, with_gc
Children:
3265399
Parents:
b2fe1c9 (diff), 81bb114 (diff)
Note: this is a merge changeset, the changes displayed below correspond to the merge itself.
Use the (diff) links above to see all the changes relative to each parent.
Message:

Merge branch 'master' of plg.uwaterloo.ca:software/cfa/cfa-cc

File:
1 edited

  • doc/papers/concurrency/Paper.tex

    rb2fe1c9 r32cab5b  
    1 % inline code ©...© (copyright symbol) emacs: C-q M-)
    2 % red highlighting ®...® (registered trademark symbol) emacs: C-q M-.
    3 % blue highlighting ß...ß (sharp s symbol) emacs: C-q M-_
    4 % green highlighting ¢...¢ (cent symbol) emacs: C-q M-"
    5 % LaTex escape §...§ (section symbol) emacs: C-q M-'
    6 % keyword escape ¶...¶ (pilcrow symbol) emacs: C-q M-^
    7 % math escape $...$ (dollar symbol)
    8 
    9 \documentclass[10pt]{article}
     1\documentclass[AMA,STIX1COL]{WileyNJD-v2}
     2
     3\articletype{RESEARCH ARTICLE}%
     4
     5\received{26 April 2016}
     6\revised{6 June 2016}
     7\accepted{6 June 2016}
     8
     9\raggedbottom
    1010
    1111%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
    1212
    1313% Latex packages used in the document.
    14 \usepackage[T1]{fontenc}                                        % allow Latin1 (extended ASCII) characters
    15 \usepackage{textcomp}
    16 \usepackage[latin1]{inputenc}
    17 \usepackage{fullpage,times,comment}
    1814\usepackage{epic,eepic}
     15\usepackage{xspace}
     16\usepackage{comment}
    1917\usepackage{upquote}                                            % switch curled `'" to straight
    20 \usepackage{calc}
    21 \usepackage{xspace}
    22 \usepackage[labelformat=simple]{subfig}
     18\usepackage{listings}                                           % format program code
     19\usepackage[labelformat=simple,aboveskip=0pt,farskip=0pt]{subfig}
    2320\renewcommand{\thesubfigure}{(\alph{subfigure})}
    24 \usepackage{graphicx}
    25 \usepackage{tabularx}
    26 \usepackage{multicol}
    27 \usepackage{varioref}
    28 \usepackage{listings}                                           % format program code
    29 \usepackage[flushmargin]{footmisc}                              % support label/reference in footnote
    30 \usepackage{latexsym}                                           % \Box glyph
    31 \usepackage{mathptmx}                                           % better math font with "times"
    32 \usepackage[usenames]{color}
     21\usepackage{siunitx}
     22\sisetup{ binary-units=true }
     23%\input{style}                                                          % bespoke macros used in the document
     24
     25\hypersetup{breaklinks=true}
     26\definecolor{OliveGreen}{cmyk}{0.64 0 0.95 0.40}
     27\definecolor{Mahogany}{cmyk}{0 0.85 0.87 0.35}
     28\definecolor{Plum}{cmyk}{0.50 1 0 0}
     29
    3330\usepackage[pagewise]{lineno}
    3431\renewcommand{\linenumberfont}{\scriptsize\sffamily}
    35 \usepackage{fancyhdr}
    36 \usepackage{float}
    37 \usepackage{siunitx}
    38 \sisetup{ binary-units=true }
    39 \input{style}                                                   % bespoke macros used in the document
    40 \usepackage{url}
    41 \usepackage[dvips,plainpages=false,pdfpagelabels,pdfpagemode=UseNone,colorlinks=true,pagebackref=true,linkcolor=blue,citecolor=blue,urlcolor=blue,pagebackref=true,breaklinks=true]{hyperref}
    42 \usepackage{breakurl}
    43 \urlstyle{rm}
    44 
    45 \setlength{\topmargin}{-0.45in}                         % move running title into header
    46 \setlength{\headsep}{0.25in}
     32
     33\lefthyphenmin=4                                                        % hyphen only after 4 characters
     34\righthyphenmin=4
    4735
    4836%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
     
    5038% Names used in the document.
    5139
    52 \newcommand{\Version}{1.0.0}
    53 \newcommand{\CS}{C\raisebox{-0.9ex}{\large$^\sharp$}\xspace}
     40\newcommand{\CFAIcon}{\textsf{C}\raisebox{\depth}{\rotatebox{180}{\textsf{A}}}\xspace} % Cforall symbolic name
     41\newcommand{\CFA}{\protect\CFAIcon}             % safe for section/caption
     42\newcommand{\CFL}{\textrm{Cforall}\xspace}      % Cforall symbolic name
     43\newcommand{\Celeven}{\textrm{C11}\xspace}      % C11 symbolic name
     44\newcommand{\CC}{\textrm{C}\kern-.1em\hbox{+\kern-.25em+}\xspace} % C++ symbolic name
     45\newcommand{\CCeleven}{\textrm{C}\kern-.1em\hbox{+\kern-.25em+}11\xspace} % C++11 symbolic name
     46\newcommand{\CCfourteen}{\textrm{C}\kern-.1em\hbox{+\kern-.25em+}14\xspace} % C++14 symbolic name
     47\newcommand{\CCseventeen}{\textrm{C}\kern-.1em\hbox{+\kern-.25em+}17\xspace} % C++17 symbolic name
     48\newcommand{\CCtwenty}{\textrm{C}\kern-.1em\hbox{+\kern-.25em+}20\xspace} % C++20 symbolic name
     49\newcommand{\Csharp}{C\raisebox{-0.7ex}{\Large$^\sharp$}\xspace} % C# symbolic name
     50
     51%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
    5452
    5553\newcommand{\Textbf}[2][red]{{\color{#1}{\textbf{#2}}}}
     
    6260\newcommand{\TODO}{{\Textbf{TODO}}}
    6361
    64 
    65 \newsavebox{\LstBox}
    66 
    6762%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
    6863
    69 \setcounter{secnumdepth}{2}                           % number subsubsections
    70 \setcounter{tocdepth}{2}                              % subsubsections in table of contents
    71 % \linenumbers                                          % comment out to turn off line numbering
    72 
    73 \title{Concurrency in \CFA}
    74 \author{Thierry Delisle and Peter A. Buhr, Waterloo, Ontario, Canada}
     64% Default underscore is too low and wide. Cannot use lstlisting "literate" as replacing underscore
     65% removes it as a variable-name character so keywords in variables are highlighted. MUST APPEAR
     66% AFTER HYPERREF.
     67%\DeclareTextCommandDefault{\textunderscore}{\leavevmode\makebox[1.2ex][c]{\rule{1ex}{0.1ex}}}
     68\renewcommand{\textunderscore}{\leavevmode\makebox[1.2ex][c]{\rule{1ex}{0.075ex}}}
     69
     70\makeatletter
     71% parindent is relative, i.e., toggled on/off in environments like itemize, so store the value for
     72% use rather than use \parident directly.
     73\newlength{\parindentlnth}
     74\setlength{\parindentlnth}{\parindent}
     75
     76\newcommand{\LstBasicStyle}[1]{{\lst@basicstyle{\lst@basicstyle{#1}}}}
     77\newcommand{\LstKeywordStyle}[1]{{\lst@basicstyle{\lst@keywordstyle{#1}}}}
     78\newcommand{\LstCommentStyle}[1]{{\lst@basicstyle{\lst@commentstyle{#1}}}}
     79
     80\newlength{\gcolumnposn}                                        % temporary hack because lstlisting does not handle tabs correctly
     81\newlength{\columnposn}
     82\setlength{\gcolumnposn}{3.5in}
     83\setlength{\columnposn}{\gcolumnposn}
     84\newcommand{\C}[2][\@empty]{\ifx#1\@empty\else\global\setlength{\columnposn}{#1}\global\columnposn=\columnposn\fi\hfill\makebox[\textwidth-\columnposn][l]{\lst@basicstyle{\LstCommentStyle{#2}}}}
     85\newcommand{\CRT}{\global\columnposn=\gcolumnposn}
     86
     87% Denote newterms in particular font and index them without particular font and in lowercase, e.g., \newterm{abc}.
     88% The option parameter provides an index term different from the new term, e.g., \newterm[\texttt{abc}]{abc}
     89% The star version does not lowercase the index information, e.g., \newterm*{IBM}.
     90\newcommand{\newtermFontInline}{\emph}
     91\newcommand{\newterm}{\@ifstar\@snewterm\@newterm}
     92\newcommand{\@newterm}[2][\@empty]{\lowercase{\def\temp{#2}}{\newtermFontInline{#2}}\ifx#1\@empty\index{\temp}\else\index{#1@{\protect#2}}\fi}
     93\newcommand{\@snewterm}[2][\@empty]{{\newtermFontInline{#2}}\ifx#1\@empty\index{#2}\else\index{#1@{\protect#2}}\fi}
     94
     95% Latin abbreviation
     96\newcommand{\abbrevFont}{\textit}                       % set empty for no italics
     97\@ifundefined{eg}{
     98\newcommand{\EG}{\abbrevFont{e}.\abbrevFont{g}.}
     99\newcommand*{\eg}{%
     100        \@ifnextchar{,}{\EG}%
     101                {\@ifnextchar{:}{\EG}%
     102                        {\EG,\xspace}}%
     103}}{}%
     104\@ifundefined{ie}{
     105\newcommand{\IE}{\abbrevFont{i}.\abbrevFont{e}.}
     106\newcommand*{\ie}{%
     107        \@ifnextchar{,}{\IE}%
     108                {\@ifnextchar{:}{\IE}%
     109                        {\IE,\xspace}}%
     110}}{}%
     111\@ifundefined{etc}{
     112\newcommand{\ETC}{\abbrevFont{etc}}
     113\newcommand*{\etc}{%
     114        \@ifnextchar{.}{\ETC}%
     115        {\ETC.\xspace}%
     116}}{}%
     117\@ifundefined{etal}{
     118\newcommand{\ETAL}{\abbrevFont{et}~\abbrevFont{al}}
     119\newcommand*{\etal}{%
     120        \@ifnextchar{.}{\protect\ETAL}%
     121                {\protect\ETAL.\xspace}%
     122}}{}%
     123\@ifundefined{viz}{
     124\newcommand{\VIZ}{\abbrevFont{viz}}
     125\newcommand*{\viz}{%
     126        \@ifnextchar{.}{\VIZ}%
     127                {\VIZ.\xspace}%
     128}}{}%
     129\makeatother
     130
     131\newenvironment{cquote}{%
     132        \list{}{\lstset{resetmargins=true,aboveskip=0pt,belowskip=0pt}\topsep=3pt\parsep=0pt\leftmargin=\parindentlnth\rightmargin\leftmargin}%
     133        \item\relax
     134}{%
     135        \endlist
     136}% cquote
     137
     138% CFA programming language, based on ANSI C (with some gcc additions)
     139\lstdefinelanguage{CFA}[ANSI]{C}{
     140        morekeywords={
     141                _Alignas, _Alignof, __alignof, __alignof__, asm, __asm, __asm__, __attribute, __attribute__,
     142                auto, _Bool, catch, catchResume, choose, _Complex, __complex, __complex__, __const, __const__,
     143                coroutine, disable, dtype, enable, __extension__, exception, fallthrough, fallthru, finally,
     144                __float80, float80, __float128, float128, forall, ftype, _Generic, _Imaginary, __imag, __imag__,
     145                inline, __inline, __inline__, __int128, int128, __label__, monitor, mutex, _Noreturn, one_t, or,
     146                otype, restrict, __restrict, __restrict__, __signed, __signed__, _Static_assert, thread,
     147                _Thread_local, throw, throwResume, timeout, trait, try, ttype, typeof, __typeof, __typeof__,
     148                virtual, __volatile, __volatile__, waitfor, when, with, zero_t},
     149        moredirectives={defined,include_next}%
     150}
     151
     152\lstset{
     153language=CFA,
     154columns=fullflexible,
     155basicstyle=\linespread{0.9}\sf,                                                 % reduce line spacing and use sanserif font
     156stringstyle=\tt,                                                                                % use typewriter font
     157tabsize=5,                                                                                              % N space tabbing
     158xleftmargin=\parindentlnth,                                                             % indent code to paragraph indentation
     159%mathescape=true,                                                                               % LaTeX math escape in CFA code $...$
     160escapechar=\$,                                                                                  % LaTeX escape in CFA code
     161keepspaces=true,                                                                                %
     162showstringspaces=false,                                                                 % do not show spaces with cup
     163showlines=true,                                                                                 % show blank lines at end of code
     164aboveskip=4pt,                                                                                  % spacing above/below code block
     165belowskip=3pt,
     166% replace/adjust listing characters that look bad in sanserif
     167literate={-}{\makebox[1ex][c]{\raisebox{0.4ex}{\rule{0.8ex}{0.1ex}}}}1 {^}{\raisebox{0.6ex}{$\scriptstyle\land\,$}}1
     168        {~}{\raisebox{0.3ex}{$\scriptstyle\sim\,$}}1 % {`}{\ttfamily\upshape\hspace*{-0.1ex}`}1
     169        {<-}{$\leftarrow$}2 {=>}{$\Rightarrow$}2 {->}{\makebox[1ex][c]{\raisebox{0.5ex}{\rule{0.8ex}{0.075ex}}}\kern-0.2ex{\textgreater}}2,
     170moredelim=**[is][\color{red}]{`}{`},
     171}% lstset
     172
     173% uC++ programming language, based on ANSI C++
     174\lstdefinelanguage{uC++}[ANSI]{C++}{
     175        morekeywords={
     176                _Accept, _AcceptReturn, _AcceptWait, _Actor, _At, _CatchResume, _Cormonitor, _Coroutine, _Disable,
     177                _Else, _Enable, _Event, _Finally, _Monitor, _Mutex, _Nomutex, _PeriodicTask, _RealTimeTask,
     178                _Resume, _Select, _SporadicTask, _Task, _Timeout, _When, _With, _Throw},
     179}
     180\lstdefinelanguage{Golang}{
     181        morekeywords=[1]{package,import,func,type,struct,return,defer,panic,recover,select,var,const,iota,},
     182        morekeywords=[2]{string,uint,uint8,uint16,uint32,uint64,int,int8,int16,int32,int64,
     183                bool,float32,float64,complex64,complex128,byte,rune,uintptr, error,interface},
     184        morekeywords=[3]{map,slice,make,new,nil,len,cap,copy,close,true,false,delete,append,real,imag,complex,chan,},
     185        morekeywords=[4]{for,break,continue,range,goto,switch,case,fallthrough,if,else,default,},
     186        morekeywords=[5]{Println,Printf,Error,},
     187        sensitive=true,
     188        morecomment=[l]{//},
     189        morecomment=[s]{/*}{*/},
     190        morestring=[b]',
     191        morestring=[b]",
     192        morestring=[s]{`}{`},
     193}
     194
     195\lstnewenvironment{cfa}[1][]
     196{\lstset{#1}}
     197{}
     198\lstnewenvironment{C++}[1][]                            % use C++ style
     199{\lstset{language=C++,moredelim=**[is][\protect\color{red}]{`}{`},#1}\lstset{#1}}
     200{}
     201\lstnewenvironment{uC++}[1][]
     202{\lstset{#1}}
     203{}
     204\lstnewenvironment{Go}[1][]
     205{\lstset{#1}}
     206{}
     207
     208% inline code @...@
     209\lstMakeShortInline@%
     210
     211
     212\title{\texorpdfstring{Concurrency in \protect\CFA}{Concurrency in Cforall}}
     213
     214\author[1]{Thierry Delisle}
     215\author[1]{Peter A. Buhr*}
     216\authormark{Thierry Delisle \textsc{et al}}
     217
     218\address[1]{\orgdiv{Cheriton School of Computer Science}, \orgname{University of Waterloo}, \orgaddress{\state{Ontario}, \country{Canada}}}
     219
     220\corres{*Peter A. Buhr, \email{pabuhr{\char`\@}uwaterloo.ca}}
     221\presentaddress{Cheriton School of Computer Science, University of Waterloo, Waterloo, ON, N2L 3G1, Canada}
     222
     223
     224\abstract[Summary]{
     225\CFA is a modern, polymorphic, \emph{non-object-oriented} extension of the C programming language.
     226This paper discusses the design of the concurrency and parallelism features in \CFA, and the concurrent runtime-system.
      227These features are created from scratch as ISO C lacks concurrency, which it leaves largely to the pthreads library.
     228Coroutines and lightweight (user) threads are introduced into the language.
     229In addition, monitors are added as a high-level mechanism for mutual exclusion and synchronization.
     230A unique contribution is allowing multiple monitors to be safely acquired simultaneously.
      231All features respect the expectations of C programmers, while being fully integrated with the \CFA polymorphic type-system and other language features.
     232Finally, experimental results are presented to compare the performance of the new features with similar mechanisms in other concurrent programming-languages.
     233}%
     234
     235\keywords{concurrency, parallelism, coroutines, threads, monitors, runtime, C, Cforall}
    75236
    76237
    77238\begin{document}
     239\linenumbers                                            % comment out to turn off line numbering
     240
    78241\maketitle
    79242
    80 \begin{abstract}
    81 \CFA is a modern, \emph{non-object-oriented} extension of the C programming language.
    82 This paper serves as a definition and an implementation for the concurrency and parallelism \CFA offers. These features are created from scratch due to the lack of concurrency in ISO C. Lightweight threads are introduced into the language. In addition, monitors are introduced as a high-level tool for control-flow based synchronization and mutual-exclusion. The main contributions of this paper are two-fold: it extends the existing semantics of monitors introduce by~\cite{Hoare74} to handle monitors in groups and also details the engineering effort needed to introduce these features as core language features. Indeed, these features are added with respect to expectations of C programmers, and integrate with the \CFA type-system and other language features.
    83 \end{abstract}
    84 
    85 %----------------------------------------------------------------------
    86 % MAIN BODY
    87 %----------------------------------------------------------------------
    88 
     243% ======================================================================
    89244% ======================================================================
    90245\section{Introduction}
    91246% ======================================================================
    92 
    93 This paper provides a minimal concurrency \textbf{api} that is simple, efficient and can be reused to build higher-level features. The simplest possible concurrency system is a thread and a lock but this low-level approach is hard to master. An easier approach for users is to support higher-level constructs as the basis of concurrency. Indeed, for highly productive concurrent programming, high-level approaches are much more popular~\cite{HPP:Study}. Examples are task based, message passing and implicit threading. The high-level approach and its minimal \textbf{api} are tested in a dialect of C, called \CFA. Furthermore, the proposed \textbf{api} doubles as an early definition of the \CFA language and library. This paper also provides an implementation of the concurrency library for \CFA as well as all the required language features added to the source-to-source translator.
    94 
    95 There are actually two problems that need to be solved in the design of concurrency for a programming language: which concurrency and which parallelism tools are available to the programmer. While these two concepts are often combined, they are in fact distinct, requiring different tools~\cite{Buhr05a}. Concurrency tools need to handle mutual exclusion and synchronization, while parallelism tools are about performance, cost and resource utilization.
    96 
    97 In the context of this paper, a \textbf{thread} is a fundamental unit of execution that runs a sequence of code, generally on a program stack. Having multiple simultaneous threads gives rise to concurrency and generally requires some kind of locking mechanism to ensure proper execution. Correspondingly, \textbf{concurrency} is defined as the concepts and challenges that occur when multiple independent (sharing memory, timing dependencies, etc.) concurrent threads are introduced. Accordingly, \textbf{locking} (and by extension locks) are defined as a mechanism that prevents the progress of certain threads in order to avoid problems due to concurrency. Finally, in this paper \textbf{parallelism} is distinct from concurrency and is defined as running multiple threads simultaneously. More precisely, parallelism implies \emph{actual} simultaneous execution as opposed to concurrency which only requires \emph{apparent} simultaneous execution. As such, parallelism is only observable in the differences in performance or, more generally, differences in timing.
     247% ======================================================================
     248
      249This paper provides a minimal concurrency \newterm{Application Program Interface} (API) that is simple, efficient and can be used to build other concurrency features.
     250While the simplest concurrency system is a thread and a lock, this low-level approach is hard to master.
     251An easier approach for programmers is to support higher-level constructs as the basis of concurrency.
     252Indeed, for highly productive concurrent programming, high-level approaches are much more popular~\cite{Hochstein05}.
     253Examples of high-level approaches are task based~\cite{TBB}, message passing~\cite{Erlang,MPI}, and implicit threading~\cite{OpenMP}.
     254
      255This paper uses the following terminology.
     256A \newterm{thread} is a fundamental unit of execution that runs a sequence of code and requires a stack to maintain state.
      257Multiple simultaneous threads give rise to \newterm{concurrency}, which requires locking to ensure safe communication and access to shared data.
     258% Correspondingly, concurrency is defined as the concepts and challenges that occur when multiple independent (sharing memory, timing dependencies, \etc) concurrent threads are introduced.
      259\newterm{Locking}, and by extension locks, is a mechanism that prevents the progress of threads in order to provide safety.
     260\newterm{Parallelism} is running multiple threads simultaneously.
      261Parallelism implies \emph{actual} simultaneous execution, whereas concurrency only requires \emph{apparent} simultaneous execution.
      262As such, parallelism is only observable through differences in performance, which ultimately appear as differences in timing.
     263
     264Hence, there are two problems to be solved in the design of concurrency for a programming language: concurrency and parallelism.
     265While these two concepts are often combined, they are in fact distinct, requiring different tools~\cite[\S~2]{Buhr05a}.
     266Concurrency tools handle synchronization and mutual exclusion, while parallelism tools handle performance, cost and resource utilization.
     267
     268The proposed concurrency API is implemented in a dialect of C, called \CFA.
      269The paper discusses how the language features are added to the \CFA translator with respect to parsing, semantic analysis, and type checking, and the corresponding high-performance runtime-library that implements the concurrency features.
    98270
    99271% ======================================================================
     
    105277The following is a quick introduction to the \CFA language, specifically tailored to the features needed to support concurrency.
    106278
    107 \CFA is an extension of ISO-C and therefore supports all of the same paradigms as C. It is a non-object-oriented system-language, meaning most of the major abstractions have either no runtime overhead or can be opted out easily. Like C, the basics of \CFA revolve around structures and routines, which are thin abstractions over machine code. The vast majority of the code produced by the \CFA translator respects memory layouts and calling conventions laid out by C. Interestingly, while \CFA is not an object-oriented language, lacking the concept of a receiver (e.g., {\tt this}), it does have some notion of objects\footnote{C defines the term objects as : ``region of data storage in the execution environment, the contents of which can represent
    108 values''~\cite[3.15]{C11}}, most importantly construction and destruction of objects. Most of the following code examples can be found on the \CFA website~\cite{www-cfa}.
    109 
    110 % ======================================================================
     279\CFA is an extension of ISO-C and therefore supports all of the same paradigms as C.
      280It is a non-object-oriented system-language, meaning most of the major abstractions have either no runtime overhead or can easily be opted out of.
     281Like C, the basics of \CFA revolve around structures and routines, which are thin abstractions over machine code.
     282The vast majority of the code produced by the \CFA translator respects memory layouts and calling conventions laid out by C.
      283Interestingly, while \CFA is not an object-oriented language, lacking the concept of a receiver (\eg {\tt this}), it does have some notion of objects\footnote{C defines the term object as ``region of data storage in the execution environment, the contents of which can represent
     284values''~\cite[3.15]{C11}}, most importantly construction and destruction of objects.
     285Most of the following code examples can be found on the \CFA website~\cite{Cforall}.
     286
     287
    111288\subsection{References}
    112289
    113 Like \CC, \CFA introduces rebind-able references providing multiple dereferencing as an alternative to pointers. In regards to concurrency, the semantic difference between pointers and references are not particularly relevant, but since this document uses mostly references, here is a quick overview of the semantics:
    114 \begin{cfacode}
     290Like \CC, \CFA introduces rebind-able references providing multiple dereferencing as an alternative to pointers.
      291With regard to concurrency, the semantic difference between pointers and references is not particularly relevant, but since this document mostly uses references, here is a quick overview of the semantics:
     292\begin{cfa}
    115293int x, *p1 = &x, **p2 = &p1, ***p3 = &p2,
    116294        &r1 = x,    &&r2 = r1,   &&&r3 = r2;
    117 ***p3 = 3;                                                      //change x
    118 r3    = 3;                                                      //change x, ***r3
    119 **p3  = ...;                                            //change p1
    120 *p3   = ...;                                            //change p2
    121 int y, z, & ar[3] = {x, y, z};          //initialize array of references
    122 typeof( ar[1]) p;                                       //is int, referenced object type
    123 typeof(&ar[1]) q;                                       //is int &, reference type
    124 sizeof( ar[1]) == sizeof(int);          //is true, referenced object size
    125 sizeof(&ar[1]) == sizeof(int *);        //is true, reference size
    126 \end{cfacode}
     295***p3 = 3;                                                      $\C{// change x}$
     296r3    = 3;                                                      $\C{// change x, ***r3}$
     297**p3  = ...;                                            $\C{// change p1}$
     298*p3   = ...;                                            $\C{// change p2}$
     299int y, z, & ar[3] = {x, y, z};          $\C{// initialize array of references}$
     300typeof( ar[1]) p;                                       $\C{// is int, referenced object type}$
     301typeof(&ar[1]) q;                                       $\C{// is int \&, reference type}$
     302sizeof( ar[1]) == sizeof(int);          $\C{// is true, referenced object size}$
     303sizeof(&ar[1]) == sizeof(int *);        $\C{// is true, reference size}$
     304\end{cfa}
    127305The important takeaway from this code example is that a reference offers a handle to an object, much like a pointer, but one that is automatically dereferenced for convenience.
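For instance, a short sketch (assuming the address-of form for rebinding a reference) contrasts assigning through a reference with rebinding it:
\begin{cfa}
int x = 1, y = 2;
int & r = x;                                    $\C{// r refers to x}$
r = 3;                                          $\C{// assign through r: x == 3}$
&r = &y;                                        $\C{// rebind r to refer to y}$
r = 4;                                          $\C{// y == 4, x unchanged}$
\end{cfa}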
    128306
     
    130308\subsection{Overloading}
    131309
    132 Another important feature of \CFA is function overloading as in Java and \CC, where routines with the same name are selected based on the number and type of the arguments. As well, \CFA uses the return type as part of the selection criteria, as in Ada~\cite{Ada}. For routines with multiple parameters and returns, the selection is complex.
    133 \begin{cfacode}
    134 //selection based on type and number of parameters
    135 void f(void);                   //(1)
    136 void f(char);                   //(2)
    137 void f(int, double);    //(3)
    138 f();                                    //select (1)
    139 f('a');                                 //select (2)
    140 f(3, 5.2);                              //select (3)
    141 
    142 //selection based on  type and number of returns
    143 char   f(int);                  //(1)
    144 double f(int);                  //(2)
    145 char   c = f(3);                //select (1)
    146 double d = f(4);                //select (2)
    147 \end{cfacode}
    148 This feature is particularly important for concurrency since the runtime system relies on creating different types to represent concurrency objects. Therefore, overloading is necessary to prevent the need for long prefixes and other naming conventions that prevent name clashes. As seen in section \ref{basics}, routine \code{main} is an example that benefits from overloading.
     310Another important feature of \CFA is function overloading as in Java and \CC, where routines with the same name are selected based on the number and type of the arguments.
     311As well, \CFA uses the return type as part of the selection criteria, as in Ada~\cite{Ada}.
     312For routines with multiple parameters and returns, the selection is complex.
     313\begin{cfa}
     314// selection based on type and number of parameters
     315void f(void);                   $\C{// (1)}$
     316void f(char);                   $\C{// (2)}$
     317void f(int, double);    $\C{// (3)}$
     318f();                                    $\C{// select (1)}$
     319f('a');                                 $\C{// select (2)}$
     320f(3, 5.2);                              $\C{// select (3)}$
     321
     322// selection based on  type and number of returns
     323char   f(int);                  $\C{// (1)}$
     324double f(int);                  $\C{// (2)}$
     325char   c = f(3);                $\C{// select (1)}$
     326double d = f(4);                $\C{// select (2)}$
     327\end{cfa}
     328This feature is particularly important for concurrency since the runtime system relies on creating different types to represent concurrency objects.
      329Therefore, overloading eliminates the need for long prefixes and other naming conventions used to avoid name clashes.
      330As seen in Section \ref{basics}, routine @main@ is an example that benefits from overloading.
    149331
    150332% ======================================================================
    151333\subsection{Operators}
    152 Overloading also extends to operators. The syntax for denoting operator-overloading is to name a routine with the symbol of the operator and question marks where the arguments of the operation appear, e.g.:
    153 \begin{cfacode}
    154 int ++? (int op);                       //unary prefix increment
    155 int ?++ (int op);                       //unary postfix increment
    156 int ?+? (int op1, int op2);             //binary plus
    157 int ?<=?(int op1, int op2);             //binary less than
    158 int ?=? (int & op1, int op2);           //binary assignment
    159 int ?+=?(int & op1, int op2);           //binary plus-assignment
     334Overloading also extends to operators.
     335The syntax for denoting operator-overloading is to name a routine with the symbol of the operator and question marks where the arguments of the operation appear, \eg:
     336\begin{cfa}
     337int ++? (int op);                       $\C{// unary prefix increment}$
     338int ?++ (int op);                       $\C{// unary postfix increment}$
     339int ?+? (int op1, int op2);             $\C{// binary plus}$
     340int ?<=?(int op1, int op2);             $\C{// binary less than}$
     341int ?=? (int & op1, int op2);           $\C{// binary assignment}$
     342int ?+=?(int & op1, int op2);           $\C{// binary plus-assignment}$
    160343
    161344struct S {int i, j;};
    162 S ?+?(S op1, S op2) {                           //add two structures
     345S ?+?(S op1, S op2) {                           $\C{// add two structures}$
    163346        return (S){op1.i + op2.i, op1.j + op2.j};
    164347}
    165348S s1 = {1, 2}, s2 = {2, 3}, s3;
    166 s3 = s1 + s2;                                           //compute sum: s3 == {2, 5}
    167 \end{cfacode}
     349s3 = s1 + s2;                                           $\C{// compute sum: s3 == {2, 5}}$
     350\end{cfa}
    168351While concurrency does not use operator overloading directly, this feature is important here as an introduction to the syntax of constructors.
    169352
    170353% ======================================================================
    171354\subsection{Constructors/Destructors}
    172 Object lifetime is often a challenge in concurrency. \CFA uses the approach of giving concurrent meaning to object lifetime as a means of synchronization and/or mutual exclusion. Since \CFA relies heavily on the lifetime of objects, constructors and destructors is a core feature required for concurrency and parallelism. \CFA uses the following syntax for constructors and destructors:
    173 \begin{cfacode}
     355Object lifetime is often a challenge in concurrency. \CFA uses the approach of giving concurrent meaning to object lifetime as a means of synchronization and/or mutual exclusion.
      356Since \CFA relies heavily on the lifetime of objects, constructors and destructors are a core feature required for concurrency and parallelism. \CFA uses the following syntax for constructors and destructors:
     357\begin{cfa}
    174358struct S {
    175359        size_t size;
    176360        int * ia;
    177361};
    178 void ?{}(S & s, int asize) {    //constructor operator
    179         s.size = asize;                         //initialize fields
     362void ?{}(S & s, int asize) {    $\C{// constructor operator}$
     363        s.size = asize;                         $\C{// initialize fields}$
     180364        s.ia = calloc(asize, sizeof(int));
    181365}
    182 void ^?{}(S & s) {                              //destructor operator
    183         free(ia);                                       //de-initialization fields
     366void ^?{}(S & s) {                              $\C{// destructor operator}$
      367        free(s.ia);                                     $\C{// de-initialize fields}$
    184368}
    185369int main() {
    186         S x = {10}, y = {100};          //implicit calls: ?{}(x, 10), ?{}(y, 100)
    187         ...                                                     //use x and y
    188         ^x{};  ^y{};                            //explicit calls to de-initialize
    189         x{20};  y{200};                         //explicit calls to reinitialize
    190         ...                                                     //reuse x and y
    191 }                                                               //implicit calls: ^?{}(y), ^?{}(x)
    192 \end{cfacode}
    193 The language guarantees that every object and all their fields are constructed. Like \CC, construction of an object is automatically done on allocation and destruction of the object is done on deallocation. Allocation and deallocation can occur on the stack or on the heap.
    194 \begin{cfacode}
     370        S x = {10}, y = {100};          $\C{// implicit calls: ?\{\}(x, 10), ?\{\}(y, 100)}$
     371        ...                                                     $\C{// use x and y}$
     372        ^x{};  ^y{};                            $\C{// explicit calls to de-initialize}$
     373        x{20};  y{200};                         $\C{// explicit calls to reinitialize}$
     374        ...                                                     $\C{// reuse x and y}$
     375}                                                               $\C{// implicit calls: \^?\{\}(y), \^?\{\}(x)}$
     376\end{cfa}
      377The language guarantees that every object and all its fields are constructed.
      378Like \CC, construction of an object is done automatically on allocation, and destruction is done automatically on deallocation.
     379Allocation and deallocation can occur on the stack or on the heap.
     380\begin{cfa}
    195381{
    196         struct S s = {10};      //allocation, call constructor
     382        struct S s = {10};      $\C{// allocation, call constructor}$
    197383        ...
    198 }                                               //deallocation, call destructor
    199 struct S * s = new();   //allocation, call constructor
     384}                                               $\C{// deallocation, call destructor}$
     385struct S * s = new();   $\C{// allocation, call constructor}$
    200386...
    201 delete(s);                              //deallocation, call destructor
    202 \end{cfacode}
    203 Note that like \CC, \CFA introduces \code{new} and \code{delete}, which behave like \code{malloc} and \code{free} in addition to constructing and destructing objects, after calling \code{malloc} and before calling \code{free}, respectively.
     387delete(s);                              $\C{// deallocation, call destructor}$
     388\end{cfa}
      389Note that, like \CC, \CFA introduces @new@ and @delete@, which behave like @malloc@ and @free@ but additionally construct the object after allocation and destruct it before deallocation.
    204390
    205391% ======================================================================
    206392\subsection{Parametric Polymorphism}
    207393\label{s:ParametricPolymorphism}
    208 Routines in \CFA can also be reused for multiple types. This capability is done using the \code{forall} clauses, which allow separately compiled routines to support generic usage over multiple types. For example, the following sum function works for any type that supports construction from 0 and addition:
    209 \begin{cfacode}
    210 //constraint type, 0 and +
     394Routines in \CFA can also be reused for multiple types.
      395This capability is provided by the @forall@ clause, which allows separately compiled routines to support generic usage over multiple types.
     396For example, the following sum function works for any type that supports construction from 0 and addition:
     397\begin{cfa}
     398// constraint type, 0 and +
    211399forall(otype T | { void ?{}(T *, zero_t); T ?+?(T, T); })
    212400T sum(T a[ ], size_t size) {
    213         T total = 0;                            //construct T from 0
     401        T total = 0;                            $\C{// construct T from 0}$
    214402        for(size_t i = 0; i < size; i++)
    215                 total = total + a[i];   //select appropriate +
     403                total = total + a[i];   $\C{// select appropriate +}$
    216404        return total;
    217405}
    218406
    219407S sa[5];
    220 int i = sum(sa, 5);                             //use S's 0 construction and +
    221 \end{cfacode}
    222 
    223 Since writing constraints on types can become cumbersome for more constrained functions, \CFA also has the concept of traits. Traits are named collection of constraints that can be used both instead and in addition to regular constraints:
    224 \begin{cfacode}
     408int i = sum(sa, 5);                             $\C{// use S's 0 construction and +}$
     409\end{cfa}
     410
     411Since writing constraints on types can become cumbersome for more constrained functions, \CFA also has the concept of traits.
      412Traits are named collections of constraints that can be used both instead of and in addition to regular constraints:
     413\begin{cfa}
    225414trait summable( otype T ) {
    226         void ?{}(T *, zero_t);          //constructor from 0 literal
    227         T ?+?(T, T);                            //assortment of additions
     415        void ?{}(T *, zero_t);          $\C{// constructor from 0 literal}$
     416        T ?+?(T, T);                            $\C{// assortment of additions}$
    228417        T ?+=?(T *, T);
    229418        T ++?(T *);
    230419        T ?++(T *);
    231420};
    232 forall( otype T | summable(T) ) //use trait
     421forall( otype T | summable(T) ) $\C{// use trait}$
    233422T sum(T a[], size_t size);
    234 \end{cfacode}
    235 
    236 Note that the type use for assertions can be either an \code{otype} or a \code{dtype}. Types declared as \code{otype} refer to ``complete'' objects, i.e., objects with a size, a default constructor, a copy constructor, a destructor and an assignment operator. Using \code{dtype,} on the other hand, has none of these assumptions but is extremely restrictive, it only guarantees the object is addressable.
     423\end{cfa}
     424
      425Note that the type used for assertions can be either an @otype@ or a @dtype@.
     426Types declared as @otype@ refer to ``complete'' objects, \ie objects with a size, a default constructor, a copy constructor, a destructor and an assignment operator.
      427Using @dtype@, on the other hand, has none of these assumptions but is extremely restrictive: it only guarantees the object is addressable.
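For example, a hypothetical @swap@ routine relies on the size, copy construction, and assignment implied by @otype@, whereas a hypothetical @pick@ routine that only manipulates pointers can use the weaker @dtype@:
\begin{cfa}
forall( otype T )                               $\C{// T needs size, copy and assignment}$
void swap( T & a, T & b ) {
        T tmp = a;                              $\C{// copy construction}$
        a = b;  b = tmp;                        $\C{// assignment}$
}
forall( dtype T )                               $\C{// T need only be addressable}$
T * pick( T * a, T * b, int first ) {
        return first ? a : b;                   $\C{// only pointers are manipulated}$
}
\end{cfa}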
    237428
    238429% ======================================================================
    239430\subsection{with Clause/Statement}
    240 Since \CFA lacks the concept of a receiver, certain functions end up needing to repeat variable names often. To remove this inconvenience, \CFA provides the \code{with} statement, which opens an aggregate scope making its fields directly accessible (like Pascal).
    241 \begin{cfacode}
     431Since \CFA lacks the concept of a receiver, certain functions end up needing to repeat variable names often.
     432To remove this inconvenience, \CFA provides the @with@ statement, which opens an aggregate scope making its fields directly accessible (like Pascal).
     433\begin{cfa}
    242434struct S { int i, j; };
    243 int mem(S & this) with (this)           //with clause
    244         i = 1;                                                  //this->i
    245         j = 2;                                                  //this->j
      435int mem(S & this) with (this) {         $\C{// with clause}$
     436        i = 1;                                                  $\C{// this->i}$
     437        j = 2;                                                  $\C{// this->j}$
    246438}
    247439int foo() {
    248440        struct S1 { ... } s1;
    249441        struct S2 { ... } s2;
    250         with (s1)                                               //with statement
     442        with (s1)                                               $\C{// with statement}$
    251443        {
    252                 //access fields of s1 without qualification
    253                 with (s2)                                       //nesting
     444                // access fields of s1 without qualification
     445                with (s2)                                       $\C{// nesting}$
    254446                {
    255                         //access fields of s1 and s2 without qualification
     447                        // access fields of s1 and s2 without qualification
    256448                }
    257449        }
    258         with (s1, s2)                                   //scopes open in parallel
     450        with (s1, s2)                                   $\C{// scopes open in parallel}$
    259451        {
    260                 //access fields of s1 and s2 without qualification
     452                // access fields of s1 and s2 without qualification
    261453        }
    262454}
    263 \end{cfacode}
    264 
    265 For more information on \CFA see \cite{cforall-ug,rob-thesis,www-cfa}.
     455\end{cfa}
     456
     457For more information on \CFA see \cite{cforall-ug,Schluntz17,www-cfa}.
    266458
    267459% ======================================================================
     
    270462% ======================================================================
    271463% ======================================================================
    272 Before any detailed discussion of the concurrency and parallelism in \CFA, it is important to describe the basics of concurrency and how they are expressed in \CFA user code.
    273 
    274 \section{Basics of concurrency}
    275 At its core, concurrency is based on having multiple call-stacks and scheduling among threads of execution executing on these stacks. Concurrency without parallelism only requires having multiple call stacks (or contexts) for a single thread of execution.
    276 
    277 Execution with a single thread and multiple stacks where the thread is self-scheduling deterministically across the stacks is called coroutining. Execution with a single and multiple stacks but where the thread is scheduled by an oracle (non-deterministic from the thread's perspective) across the stacks is called concurrency.
    278 
    279 Therefore, a minimal concurrency system can be achieved by creating coroutines (see Section \ref{coroutine}), which instead of context-switching among each other, always ask an oracle where to context-switch next. While coroutines can execute on the caller's stack-frame, stack-full coroutines allow full generality and are sufficient as the basis for concurrency. The aforementioned oracle is a scheduler and the whole system now follows a cooperative threading-model (a.k.a., non-preemptive scheduling). The oracle/scheduler can either be a stack-less or stack-full entity and correspondingly require one or two context-switches to run a different coroutine. In any case, a subset of concurrency related challenges start to appear. For the complete set of concurrency challenges to occur, the only feature missing is preemption.
    280 
    281 A scheduler introduces order of execution uncertainty, while preemption introduces uncertainty about where context switches occur. Mutual exclusion and synchronization are ways of limiting non-determinism in a concurrent system. Now it is important to understand that uncertainty is desirable; uncertainty can be used by runtime systems to significantly increase performance and is often the basis of giving a user the illusion that tasks are running in parallel. Optimal performance in concurrent applications is often obtained by having as much non-determinism as correctness allows.
    282 
    283 \section{\protect\CFA's Thread Building Blocks}
    284 One of the important features that are missing in C is threading\footnote{While the C11 standard defines a ``threads.h'' header, it is minimal and defined as optional. As such, library support for threading is far from widespread. At the time of writing the paper, neither \texttt{gcc} nor \texttt{clang} support ``threads.h'' in their respective standard libraries.}. On modern architectures, a lack of threading is unacceptable~\cite{Sutter05, Sutter05b}, and therefore modern programming languages must have the proper tools to allow users to write efficient concurrent programs to take advantage of parallelism. As an extension of C, \CFA needs to express these concepts in a way that is as natural as possible to programmers familiar with imperative languages. And being a system-level language means programmers expect to choose precisely which features they need and which cost they are willing to pay.
    285 
    286 \section{Coroutines: A Stepping Stone}\label{coroutine}
    287 While the main focus of this proposal is concurrency and parallelism, it is important to address coroutines, which are actually a significant building block of a concurrency system. \textbf{Coroutine}s are generalized routines which have predefined points where execution is suspended and can be resumed at a later time. Therefore, they need to deal with context switches and other context-management operations. This proposal includes coroutines both as an intermediate step for the implementation of threads, and a first-class feature of \CFA. Furthermore, many design challenges of threads are at least partially present in designing coroutines, which makes the design effort that much more relevant. The core \textbf{api} of coroutines revolves around two features: independent call-stacks and \code{suspend}/\code{resume}.
    288 
    289 \begin{table}
    290 \begin{center}
    291 \begin{tabular}{c @{\hskip 0.025in}|@{\hskip 0.025in} c @{\hskip 0.025in}|@{\hskip 0.025in} c}
    292 \begin{ccode}[tabsize=2]
    293 //Using callbacks
    294 void fibonacci_func(
    295         int n,
    296         void (*callback)(int)
    297 ) {
    298         int first = 0;
    299         int second = 1;
    300         int next, i;
    301         for(i = 0; i < n; i++)
    302         {
    303                 if(i <= 1)
    304                         next = i;
    305                 else {
    306                         next = f1 + f2;
    307                         f1 = f2;
    308                         f2 = next;
    309                 }
    310                 callback(next);
     464
      465At its core, concurrency is based on having multiple call-stacks and scheduling among threads of execution running on these stacks.
      466Multiple call stacks (or contexts) and a single thread of execution do \emph{not} imply concurrency.
     467Execution with a single thread and multiple stacks where the thread is deterministically self-scheduling across the stacks is called \newterm{coroutining};
     468execution with a single thread and multiple stacks but where the thread is scheduled by an oracle (non-deterministic from the thread's perspective) across the stacks is called concurrency~\cite[\S~3]{Buhr05a}.
      469Therefore, a minimal concurrency system can be achieved using coroutines (see Section \ref{coroutine}), which, instead of context-switching among each other, always defer to an oracle for where to context-switch next.
     470
     471While coroutines can execute on the caller's stack-frame, stack-full coroutines allow full generality and are sufficient as the basis for concurrency.
     472The aforementioned oracle is a scheduler and the whole system now follows a cooperative threading-model (a.k.a., non-preemptive scheduling).
     473The oracle/scheduler can either be a stack-less or stack-full entity and correspondingly require one or two context-switches to run a different coroutine.
     474In any case, a subset of concurrency related challenges start to appear.
     475For the complete set of concurrency challenges to occur, the only feature missing is preemption.
     476
     477A scheduler introduces order of execution uncertainty, while preemption introduces uncertainty about where context switches occur.
     478Mutual exclusion and synchronization are ways of limiting non-determinism in a concurrent system.
     479Now it is important to understand that uncertainty is desirable; uncertainty can be used by runtime systems to significantly increase performance and is often the basis of giving a user the illusion that tasks are running in parallel.
     480Optimal performance in concurrent applications is often obtained by having as much non-determinism as correctness allows.
     481
     482
     483\subsection{\protect\CFA's Thread Building Blocks}
     484
     485One of the important features that are missing in C is threading\footnote{While the C11 standard defines a ``threads.h'' header, it is minimal and defined as optional.
     486As such, library support for threading is far from widespread.
     487At the time of writing the paper, neither \protect\lstinline|gcc| nor \protect\lstinline|clang| support ``threads.h'' in their standard libraries.}.
     488On modern architectures, a lack of threading is unacceptable~\cite{Sutter05, Sutter05b}, and therefore modern programming languages must have the proper tools to allow users to write efficient concurrent programs to take advantage of parallelism.
     489As an extension of C, \CFA needs to express these concepts in a way that is as natural as possible to programmers familiar with imperative languages.
      490Being a system-level language also means programmers expect to choose precisely which features they need and what cost they are willing to pay.
     491
     492
     493\subsection{Coroutines: A Stepping Stone}\label{coroutine}
     494
     495While the focus of this proposal is concurrency and parallelism, it is important to address coroutines, which are a significant building block of a concurrency system.
     496\newterm{Coroutine}s are generalized routines with points where execution is suspended and resumed at a later time.
      497A suspend/resume is a context switch, and coroutines have other context-management operations.
     498Many design challenges of threads are partially present in designing coroutines, which makes the design effort relevant.
      499The core API of coroutines has two features: independent call-stacks and @suspend@/@resume@.
     500
     501A coroutine handles the class of problems that need to retain state between calls (\eg plugin, device driver, finite-state machine).
      502For example, a problem made easier with coroutines is an unbounded generator, \eg generating an infinite sequence of Fibonacci numbers:
     503\begin{displaymath}
     504f(n) = \left \{
     505\begin{array}{ll}
     5060                               & n = 0         \\
     5071                               & n = 1         \\
     508f(n-1) + f(n-2) & n \ge 2       \\
     509\end{array}
     510\right.
     511\end{displaymath}
     512Figure~\ref{f:C-fibonacci} shows conventional approaches for writing a Fibonacci generator in C.
     513
     514Figure~\ref{f:GlobalVariables} illustrates the following problems:
     515unencapsulated global variables necessary to retain state between calls;
      516only one Fibonacci generator can run at a time;
     517execution state must be explicitly retained.
     518Figure~\ref{f:ExternalState} addresses these issues:
     519unencapsulated program global variables become encapsulated structure variables;
      520multiple Fibonacci generators can run at a time by declaring multiple Fibonacci objects;
     521explicit execution state is removed by precomputing the first two Fibonacci numbers and returning $f(n-2)$.
     522
     523\begin{figure}
     524\centering
     525\newbox\myboxA
     526\begin{lrbox}{\myboxA}
     527\begin{lstlisting}[aboveskip=0pt,belowskip=0pt]
     528`int f1, f2, state = 1;`   // single global variables
     529int fib() {
     530        int fn;
     531        `switch ( state )` {  // explicit execution state
     532          case 1: fn = 0;  f1 = fn;  state = 2;  break;
     533          case 2: fn = 1;  f2 = f1;  f1 = fn;  state = 3;  break;
     534          case 3: fn = f1 + f2;  f2 = f1;  f1 = fn;  break;
    311535        }
    312 }
    313 
     536        return fn;
     537}
    314538int main() {
    315         void print_fib(int n) {
    316                 printf("%d\n", n);
     539
     540        for ( int i = 0; i < 10; i += 1 ) {
     541                printf( "%d\n", fib() );
    317542        }
    318 
    319         fibonacci_func(
    320                 10, print_fib
    321         );
    322 
    323 
    324 
    325 }
    326 \end{ccode}&\begin{ccode}[tabsize=2]
    327 //Using output array
    328 void fibonacci_array(
    329         int n,
    330         int* array
    331 ) {
    332         int f1 = 0; int f2 = 1;
    333         int next, i;
    334         for(i = 0; i < n; i++)
    335         {
    336                 if(i <= 1)
    337                         next = i;
    338                 else {
    339                         next = f1 + f2;
    340                         f1 = f2;
    341                         f2 = next;
    342                 }
    343                 array[i] = next;
     543}
     544\end{lstlisting}
     545\end{lrbox}
     546
     547\newbox\myboxB
     548\begin{lrbox}{\myboxB}
     549\begin{lstlisting}[aboveskip=0pt,belowskip=0pt]
     550#define FIB_INIT `{ 0, 1 }`
     551typedef struct { int f2, f1; } Fib;
     552int fib( Fib * f ) {
     553
     554        int ret = f->f2;
     555        int fn = f->f1 + f->f2;
     556        f->f2 = f->f1; f->f1 = fn;
     557
     558        return ret;
     559}
     560int main() {
     561        Fib f1 = FIB_INIT, f2 = FIB_INIT;
     562        for ( int i = 0; i < 10; i += 1 ) {
     563                printf( "%d %d\n", fib( &f1 ), fib( &f2 ) );
    344564        }
    345565}
    346 
    347 
     566\end{lstlisting}
     567\end{lrbox}
     568
     569\subfloat[3 States: global variables]{\label{f:GlobalVariables}\usebox\myboxA}
     570\qquad
     571\subfloat[1 State: external variables]{\label{f:ExternalState}\usebox\myboxB}
     572\caption{C Fibonacci Implementations}
     573\label{f:C-fibonacci}
     574
     575\bigskip
     576
     577\newbox\myboxA
     578\begin{lrbox}{\myboxA}
     579\begin{lstlisting}[aboveskip=0pt,belowskip=0pt]
     580`coroutine` Fib { int fn; };
     581void main( Fib & f ) with( f ) {
     582        int f1, f2;
     583        fn = 0;  f1 = fn;  `suspend()`;
     584        fn = 1;  f2 = f1;  f1 = fn;  `suspend()`;
     585        for ( ;; ) {
     586                fn = f1 + f2;  f2 = f1;  f1 = fn;  `suspend()`;
     587        }
     588}
     589int next( Fib & fib ) with( fib ) {
     590        `resume( fib );`
     591        return fn;
     592}
    348593int main() {
    349         int a[10];
    350 
    351         fibonacci_func(
    352                 10, a
    353         );
    354 
    355         for(int i=0;i<10;i++){
    356                 printf("%d\n", a[i]);
    357         }
    358 
    359 }
    360 \end{ccode}&\begin{ccode}[tabsize=2]
    361 //Using external state
    362 typedef struct {
    363         int f1, f2;
    364 } Iterator_t;
    365 
    366 int fibonacci_state(
    367         Iterator_t* it
    368 ) {
    369         int f;
    370         f = it->f1 + it->f2;
    371         it->f2 = it->f1;
    372         it->f1 = max(f,1);
    373         return f;
    374 }
    375 
    376 
    377 
    378 
    379 
    380 
    381 
    382 int main() {
    383         Iterator_t it={0,0};
    384 
    385         for(int i=0;i<10;i++){
    386                 printf("%d\n",
    387                         fibonacci_state(
    388                                 &it
    389                         );
    390                 );
    391         }
    392 
    393 }
    394 \end{ccode}
    395 \end{tabular}
    396 \end{center}
    397 \caption{Different implementations of a Fibonacci sequence generator in C.}
    398 \label{lst:fibonacci-c}
    399 \end{table}
    400 
    401 A good example of a problem made easier with coroutines is generators, e.g., generating the Fibonacci sequence. This problem comes with the challenge of decoupling how a sequence is generated and how it is used. Listing \ref{lst:fibonacci-c} shows conventional approaches to writing generators in C. All three of these approach suffer from strong coupling. The left and centre approaches require that the generator have knowledge of how the sequence is used, while the rightmost approach requires holding internal state between calls on behalf of the generator and makes it much harder to handle corner cases like the Fibonacci seed.
    402 
    403 Listing \ref{lst:fibonacci-cfa} is an example of a solution to the Fibonacci problem using \CFA coroutines, where the coroutine stack holds sufficient state for the next generation. This solution has the advantage of having very strong decoupling between how the sequence is generated and how it is used. Indeed, this version is as easy to use as the \code{fibonacci_state} solution, while the implementation is very similar to the \code{fibonacci_func} example.
    404 
    405 \begin{figure}
    406 \begin{cfacode}[caption={Implementation of Fibonacci using coroutines},label={lst:fibonacci-cfa}]
    407 coroutine Fibonacci {
    408         int fn; //used for communication
    409 };
    410 
    411 void ?{}(Fibonacci& this) { //constructor
    412         this.fn = 0;
    413 }
    414 
    415 //main automatically called on first resume
    416 void main(Fibonacci& this) with (this) {
    417         int fn1, fn2;           //retained between resumes
    418         fn  = 0;
    419         fn1 = fn;
    420         suspend(this);          //return to last resume
    421 
    422         fn  = 1;
    423         fn2 = fn1;
    424         fn1 = fn;
    425         suspend(this);          //return to last resume
    426 
    427         for ( ;; ) {
    428                 fn  = fn1 + fn2;
    429                 fn2 = fn1;
    430                 fn1 = fn;
    431                 suspend(this);  //return to last resume
    432         }
    433 }
    434 
    435 int next(Fibonacci& this) {
    436         resume(this); //transfer to last suspend
    437         return this.fn;
    438 }
    439 
    440 void main() { //regular program main
    441         Fibonacci f1, f2;
     594        Fib f1, f2;
    442595        for ( int i = 1; i <= 10; i += 1 ) {
    443596                sout | next( f1 ) | next( f2 ) | endl;
    444597        }
    445598}
    446 \end{cfacode}
     599\end{lstlisting}
     600\end{lrbox}
     601\newbox\myboxB
     602\begin{lrbox}{\myboxB}
     603\begin{lstlisting}[aboveskip=0pt,belowskip=0pt]
     604`coroutine` Fib { int ret; };
     605void main( Fib & f ) with( f ) {
     606        int fn, f1 = 1, f2 = 0;
     607        for ( ;; ) {
     608                ret = f2;
     609
     610                fn = f1 + f2;  f2 = f1;  f1 = fn; `suspend();`
     611        }
     612}
     613int next( Fib & fib ) with( fib ) {
     614        `resume( fib );`
     615        return ret;
     616}
     617
     618
     619
     620
     621
     622
     623\end{lstlisting}
     624\end{lrbox}
     625\subfloat[3 States, internal variables]{\label{f:Coroutine3States}\usebox\myboxA}
     626\qquad
     627\subfloat[1 State, internal variables]{\label{f:Coroutine1State}\usebox\myboxB}
     628\caption{\CFA Coroutine Fibonacci Implementations}
     629\label{f:fibonacci-cfa}
    447630\end{figure}
    448631
    449 Listing \ref{lst:fmt-line} shows the \code{Format} coroutine for restructuring text into groups of character blocks of fixed size. The example takes advantage of resuming coroutines in the constructor to simplify the code and highlights the idea that interesting control flow can occur in the constructor.
     632Figure~\ref{f:Coroutine3States} creates a @coroutine@ type, which provides communication for multiple interface functions, and the \newterm{coroutine main}, which runs on the coroutine stack.
     633\begin{cfa}
     634`coroutine C { char c; int i; _Bool s; };`      $\C{// used for communication}$
      635void ?{}( C & c ) { c.s = false; }                      $\C{// constructor}$
      636void main( C & cor ) with( cor ) {                      $\C{// actual coroutine}$
      637        while ( ! s ) { /* process c */ `suspend();` }  $\C{// handle requests until stopped}$
      638        if ( i == ... ) s = false;                      $\C{// possibly reject the stop request}$
      639}
      640// interface functions
      641char cont( C & cor, char ch ) with( cor ) { c = ch; resume( cor ); return c; }
      642_Bool stop( C & cor, int v ) with( cor ) { s = true; i = v; resume( cor ); return s; }
     643\end{cfa}
     644
      645Figure~\ref{f:fibonacci-cfa} encapsulates the Fibonacci state in the coroutine type and is an example of a solution to the Fibonacci problem using \CFA coroutines, where the coroutine stack holds sufficient state for the next generation.
     646This solution has the advantage of having very strong decoupling between how the sequence is generated and how it is used.
      647Indeed, this version is as easy to use as the external-state solution in Figure~\ref{f:ExternalState}, while the implementation is very similar to the global-variable example in Figure~\ref{f:GlobalVariables}, with the explicit execution state replaced by the coroutine's suspend points.
     648
     649Figure~\ref{f:fmt-line} shows the @Format@ coroutine for restructuring text into groups of character blocks of fixed size.
     650The example takes advantage of resuming coroutines in the constructor to simplify the code and highlights the idea that interesting control flow can occur in the constructor.
    450651
    451652\begin{figure}
    452 \begin{cfacode}[tabsize=3,caption={Formatting text into lines of 5 blocks of 4 characters.},label={lst:fmt-line}]
    453 //format characters into blocks of 4 and groups of 5 blocks per line
    454 coroutine Format {
    455         char ch;                                                                        //used for communication
    456         int g, b;                                                               //global because used in destructor
     653\centering
     654\begin{cfa}
     655`coroutine` Format {
     656        char ch;                                                                $\C{// used for communication}$
     657        int g, b;                                                               $\C{// global because used in destructor}$
    457658};
    458 
    459 void  ?{}(Format& fmt) {
    460         resume( fmt );                                                  //prime (start) coroutine
    461 }
    462 
    463 void ^?{}(Format& fmt) with fmt {
    464         if ( fmt.g != 0 || fmt.b != 0 )
    465         sout | endl;
    466 }
    467 
    468 void main(Format& fmt) with fmt {
    469         for ( ;; ) {                                                    //for as many characters
    470                 for(g = 0; g < 5; g++) {                //groups of 5 blocks
    471                         for(b = 0; b < 4; fb++) {       //blocks of 4 characters
    472                                 suspend();
    473                                 sout | ch;                                      //print character
     659void ?{}( Format & fmt ) { `resume( fmt );` } $\C{// prime (start) coroutine}$
     660void ^?{}( Format & fmt ) with( fmt ) { if ( g != 0 || b != 0 ) sout | endl; }
     661void main( Format & fmt ) with( fmt ) {
     662        for ( ;; ) {                                                    $\C{// for as many characters}$
     663                for ( g = 0; g < 5; g += 1 ) {          $\C{// groups of 5 blocks}$
     664                        for ( b = 0; b < 4; b += 1 ) {  $\C{// blocks of 4 characters}$
     665                                `suspend();`
     666                                sout | ch;                                      $\C{// print character}$
    474667                        }
    475                         sout | "  ";                                    //print block separator
     668                        sout | "  ";                                    $\C{// print block separator}$
    476669                }
    477                 sout | endl;                                            //print group separator
     670                sout | endl;                                            $\C{// print group separator}$
    478671        }
    479672}
    480 
    481 void prt(Format & fmt, char ch) {
     673void prt( Format & fmt, char ch ) {
    482674        fmt.ch = ch;
    483         resume(fmt);
    484 }
    485 
     675        `resume( fmt );`
     676}
    486677int main() {
    487678        Format fmt;
    488679        char ch;
    489         Eof: for ( ;; ) {                                               //read until end of file
    490                 sin | ch;                                                       //read one character
    491                 if(eof(sin)) break Eof;                 //eof ?
    492                 prt(fmt, ch);                                           //push character for formatting
     680        for ( ;; ) {                                                    $\C{// read until end of file}$
     681                sin | ch;                                                       $\C{// read one character}$
     682          if ( eof( sin ) ) break;                              $\C{// eof ?}$
     683                prt( fmt, ch );                                         $\C{// push character for formatting}$
    493684        }
    494685}
    495 \end{cfacode}
     686\end{cfa}
     687\caption{Formatting text into lines of 5 blocks of 4 characters.}
     688\label{f:fmt-line}
    496689\end{figure}
    497690
    498 \subsection{Construction}
    499 One important design challenge for implementing coroutines and threads (shown in section \ref{threads}) is that the runtime system needs to run code after the user-constructor runs to connect the fully constructed object into the system. In the case of coroutines, this challenge is simpler since there is no non-determinism from preemption or scheduling. However, the underlying challenge remains the same for coroutines and threads.
    500 
    501 The runtime system needs to create the coroutine's stack and, more importantly, prepare it for the first resumption. The timing of the creation is non-trivial since users expect both to have fully constructed objects once execution enters the coroutine main and to be able to resume the coroutine from the constructor. There are several solutions to this problem but the chosen option effectively forces the design of the coroutine.
    502 
    503 Furthermore, \CFA faces an extra challenge as polymorphic routines create invisible thunks when cast to non-polymorphic routines and these thunks have function scope. For example, the following code, while looking benign, can run into undefined behaviour because of thunks:
    504 
    505 \begin{cfacode}
    506 //async: Runs function asynchronously on another thread
     691\begin{figure}
     692\centering
     693\lstset{language=CFA,escapechar={},moredelim=**[is][\protect\color{red}]{`}{`}}
     694\begin{tabular}{@{}l@{\hspace{2\parindentlnth}}l@{}}
     695\begin{cfa}
     696`coroutine` Prod {
     697        Cons & c;
     698        int N, money, receipt;
     699};
     700void main( Prod & prod ) with( prod ) {
     701        // 1st resume starts here
     702        for ( int i = 0; i < N; i += 1 ) {
     703                int p1 = random( 100 ), p2 = random( 100 );
     704                sout | p1 | " " | p2 | endl;
     705                int status = delivery( c, p1, p2 );
     706                sout | " $" | money | endl | status | endl;
     707                receipt += 1;
     708        }
     709        stop( c );
     710        sout | "prod stops" | endl;
     711}
     712int payment( Prod & prod, int money ) {
     713        prod.money = money;
     714        `resume( prod );`
     715        return prod.receipt;
     716}
     717void start( Prod & prod, int N, Cons &c ) {
     718        &prod.c = &c;
     719        prod.[N, receipt] = [N, 0];
     720        `resume( prod );`
     721}
     722int main() {
     723        Prod prod;
     724        Cons cons = { prod };
     725        srandom( getpid() );
     726        start( prod, 5, cons );
     727}
     728\end{cfa}
     729&
     730\begin{cfa}
     731`coroutine` Cons {
     732        Prod & p;
     733        int p1, p2, status;
     734        _Bool done;
     735};
     736void ?{}( Cons & cons, Prod & p ) {
     737        &cons.p = &p;
      738        cons.[status, done] = [0, false];
     739}
     740void ^?{}( Cons & cons ) {}
     741void main( Cons & cons ) with( cons ) {
     742        // 1st resume starts here
     743        int money = 1, receipt;
     744        for ( ; ! done; ) {
     745                sout | p1 | " " | p2 | endl | " $" | money | endl;
     746                status += 1;
     747                receipt = payment( p, money );
     748                sout | " #" | receipt | endl;
     749                money += 1;
     750        }
     751        sout | "cons stops" | endl;
     752}
     753int delivery( Cons & cons, int p1, int p2 ) {
     754        cons.[p1, p2] = [p1, p2];
     755        `resume( cons );`
     756        return cons.status;
     757}
     758void stop( Cons & cons ) {
     759        cons.done = true;
     760        `resume( cons );`
     761}
     762
     763\end{cfa}
     764\end{tabular}
     765\caption{Producer / consumer: resume-resume cycle, bi-directional communication}
     766\label{f:ProdCons}
     767\end{figure}
     768
     769
     770\subsubsection{Construction}
     771
     772One important design challenge for implementing coroutines and threads (shown in section \ref{threads}) is that the runtime system needs to run code after the user-constructor runs to connect the fully constructed object into the system.
     773In the case of coroutines, this challenge is simpler since there is no non-determinism from preemption or scheduling.
     774However, the underlying challenge remains the same for coroutines and threads.
     775
     776The runtime system needs to create the coroutine's stack and, more importantly, prepare it for the first resumption.
     777The timing of the creation is non-trivial since users expect both to have fully constructed objects once execution enters the coroutine main and to be able to resume the coroutine from the constructor.
     778There are several solutions to this problem but the chosen option effectively forces the design of the coroutine.
     779
     780Furthermore, \CFA faces an extra challenge as polymorphic routines create invisible thunks when cast to non-polymorphic routines and these thunks have function scope.
     781For example, the following code, while looking benign, can run into undefined behaviour because of thunks:
     782
     783\begin{cfa}
     784// async: Runs function asynchronously on another thread
    507785forall(otype T)
    508786extern void async(void (*func)(T*), T* obj);
     
    513791void bar() {
    514792        int a;
    515         async(noop, &a); //start thread running noop with argument a
    516 }
    517 \end{cfacode}
     793        async(noop, &a); // start thread running noop with argument a
     794}
     795\end{cfa}
    518796
    519797The generated C code\footnote{Code trimmed down for brevity} creates a local thunk to hold type information:
    520798
    521 \begin{ccode}
     799\begin{cfa}
    522800extern void async(/* omitted */, void (*func)(void*), void* obj);
    523801
     
    533811        async(/* omitted */, ((void (*)(void*))(&_thunk0)), (&a));
    534812}
    535 \end{ccode}
    536 The problem in this example is a storage management issue, the function pointer \code{_thunk0} is only valid until the end of the block, which limits the viable solutions because storing the function pointer for too long causes undefined behaviour; i.e., the stack-based thunk being destroyed before it can be used. This challenge is an extension of challenges that come with second-class routines. Indeed, GCC nested routines also have the limitation that nested routine cannot be passed outside of the declaration scope. The case of coroutines and threads is simply an extension of this problem to multiple call stacks.
    537 
    538 \subsection{Alternative: Composition}
     813\end{cfa}
      814The problem in this example is a storage management issue: the function pointer @_thunk0@ is only valid until the end of the block, which limits the viable solutions because storing the function pointer for too long causes undefined behaviour; \ie the stack-based thunk is destroyed before it can be used.
      815This challenge is an extension of challenges that come with second-class routines.
      816Indeed, GCC nested routines also have the limitation that a nested routine cannot be passed outside of its declaration scope.
     817The case of coroutines and threads is simply an extension of this problem to multiple call stacks.
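As an illustration of that limitation, the following sketch (plain C with the GCC nested-routine extension; the names are illustrative) lets a nested routine escape its declaration scope, producing the same class of undefined behaviour as the escaping thunk:
\begin{cfa}
typedef int (*cntfn)( void );
cntfn make_counter( void ) {
        int cnt = 0;
        int next( void ) { return cnt += 1; }   $\C{// GCC nested routine, allocated in this frame}$
        return next;                            $\C{// pointer escapes the declaration scope}$
}
int main() {
        cntfn f = make_counter();
        return f();                             $\C{// undefined: frame holding cnt/next is gone}$
}
\end{cfa}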
     818
     819
     820\subsubsection{Alternative: Composition}
     821
    539822One solution to this challenge is to use composition/containment, where coroutine fields are added to manage the coroutine.
    540823
    541 \begin{cfacode}
     824\begin{cfa}
    542825struct Fibonacci {
    543         int fn; //used for communication
    544         coroutine c; //composition
     826        int fn; // used for communication
     827        coroutine c; // composition
    545828};
    546829
     
    551834void ?{}(Fibonacci& this) {
    552835        this.fn = 0;
    553         //Call constructor to initialize coroutine
     836        // Call constructor to initialize coroutine
    554837        (this.c){myMain};
    555838}
    556 \end{cfacode}
    557 The downside of this approach is that users need to correctly construct the coroutine handle before using it. Like any other objects, the user must carefully choose construction order to prevent usage of objects not yet constructed. However, in the case of coroutines, users must also pass to the coroutine information about the coroutine main, like in the previous example. This opens the door for user errors and requires extra runtime storage to pass at runtime information that can be known statically.
    558 
    559 \subsection{Alternative: Reserved keyword}
     839\end{cfa}
     840The downside of this approach is that users need to correctly construct the coroutine handle before using it.
      841As with any other object, the user must carefully choose construction order to prevent usage of objects not yet constructed.
      842However, in the case of coroutines, users must also pass to the coroutine information about the coroutine main, as in the previous example.
      843This opens the door for user errors and requires extra runtime storage to pass, at runtime, information that is statically known.
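A minimal sketch of that cost, using hypothetical names and layout, is a handle that must carry the coroutine main at run time even though the main is statically known for each coroutine type:
\begin{cfa}
struct coroutine_handle {                               $\C{// hypothetical composition handle}$
        void (*main)( void * );                         $\C{// coroutine main stored at run time}$
        void * stack;                                   $\C{// coroutine call stack}$
};
void ?{}( coroutine_handle & this, void (*main)( void * ) ) {
        this.main = main;                               $\C{// run-time storage for static information}$
        this.stack = 0;                                 $\C{// assume the stack is created lazily on first resume}$
}
\end{cfa}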
     844
     845
     846\subsubsection{Alternative: Reserved keyword}
     847
    560848The next alternative is to use language support to annotate coroutines as follows:
    561 
    562 \begin{cfacode}
     849\begin{cfa}
    563850coroutine Fibonacci {
    564         int fn; //used for communication
     851        int fn; // used for communication
    565852};
    566 \end{cfacode}
    567 The \code{coroutine} keyword means the compiler can find and inject code where needed. The downside of this approach is that it makes coroutine a special case in the language. Users wanting to extend coroutines or build their own for various reasons can only do so in ways offered by the language. Furthermore, implementing coroutines without language supports also displays the power of the programming language used. While this is ultimately the option used for idiomatic \CFA code, coroutines and threads can still be constructed by users without using the language support. The reserved keywords are only present to improve ease of use for the common cases.
    568 
    569 \subsection{Alternative: Lambda Objects}
    570 
    571 For coroutines as for threads, many implementations are based on routine pointers or function objects~\cite{Butenhof97, C++14, MS:VisualC++, BoostCoroutines15}. For example, Boost implements coroutines in terms of four functor object types:
    572 \begin{cfacode}
     853\end{cfa}
     854The @coroutine@ keyword means the compiler can find and inject code where needed.
      855The downside of this approach is that it makes coroutines a special case in the language.
      856Users wanting to extend coroutines or build their own for various reasons can only do so in ways offered by the language.
      857Furthermore, implementing coroutines without language support also demonstrates the expressiveness of the underlying programming language.
     858While this is ultimately the option used for idiomatic \CFA code, coroutines and threads can still be constructed by users without using the language support.
     859The reserved keywords are only present to improve ease of use for the common cases.
     860
     861
     862\subsubsection{Alternative: Lambda Objects}
     863
     864For coroutines as for threads, many implementations are based on routine pointers or function objects~\cite{Butenhof97, C++14, MS:VisualC++, BoostCoroutines15}.
     865For example, Boost implements coroutines in terms of four functor object types:
     866\begin{cfa}
    573867asymmetric_coroutine<>::pull_type
    574868asymmetric_coroutine<>::push_type
    575869symmetric_coroutine<>::call_type
    576870symmetric_coroutine<>::yield_type
    577 \end{cfacode}
    578 Often, the canonical threading paradigm in languages is based on function pointers, \texttt{pthread} being one of the most well-known examples. The main problem of this approach is that the thread usage is limited to a generic handle that must otherwise be wrapped in a custom type. Since the custom type is simple to write in \CFA and solves several issues, added support for routine/lambda based coroutines adds very little.
    579 
    580 A variation of this would be to use a simple function pointer in the same way \texttt{pthread} does for threads:
    581 \begin{cfacode}
     871\end{cfa}
     872Often, the canonical threading paradigm in languages is based on function pointers, @pthread@ being one of the most well-known examples.
      873The main problem with this approach is that thread usage is limited to a generic handle that must otherwise be wrapped in a custom type.
     874Since the custom type is simple to write in \CFA and solves several issues, added support for routine/lambda based coroutines adds very little.
     875
     876A variation of this would be to use a simple function pointer in the same way @pthread@ does for threads:
     877\begin{cfa}
    582878void foo( coroutine_t cid, void* arg ) {
    583879        int* value = (int*)arg;
    584         //Coroutine body
     880        // Coroutine body
    585881}
    586882
     
    590886        coroutine_resume( &cid );
    591887}
    592 \end{cfacode}
    593 This semantics is more common for thread interfaces but coroutines work equally well. As discussed in section \ref{threads}, this approach is superseded by static approaches in terms of expressivity.
    594 
    595 \subsection{Alternative: Trait-Based Coroutines}
    596 
    597 Finally, the underlying approach, which is the one closest to \CFA idioms, is to use trait-based lazy coroutines. This approach defines a coroutine as anything that satisfies the trait \code{is_coroutine} (as defined below) and is used as a coroutine.
    598 
    599 \begin{cfacode}
     888\end{cfa}
     889This semantics is more common for thread interfaces but coroutines work equally well.
     890As discussed in section \ref{threads}, this approach is superseded by static approaches in terms of expressivity.
     891
     892
     893\subsubsection{Alternative: Trait-Based Coroutines}
     894
     895Finally, the underlying approach, which is the one closest to \CFA idioms, is to use trait-based lazy coroutines.
     896This approach defines a coroutine as anything that satisfies the trait @is_coroutine@ (as defined below) and is used as a coroutine.
     897
     898\begin{cfa}
    600899trait is_coroutine(dtype T) {
    601900      void main(T& this);
     
    605904forall( dtype T | is_coroutine(T) ) void suspend(T&);
    606905forall( dtype T | is_coroutine(T) ) void resume (T&);
    607 \end{cfacode}
    608 This ensures that an object is not a coroutine until \code{resume} is called on the object. Correspondingly, any object that is passed to \code{resume} is a coroutine since it must satisfy the \code{is_coroutine} trait to compile. The advantage of this approach is that users can easily create different types of coroutines, for example, changing the memory layout of a coroutine is trivial when implementing the \code{get_coroutine} routine. The \CFA keyword \code{coroutine} simply has the effect of implementing the getter and forward declarations required for users to implement the main routine.
     906\end{cfa}
     907This ensures that an object is not a coroutine until @resume@ is called on the object.
     908Correspondingly, any object that is passed to @resume@ is a coroutine since it must satisfy the @is_coroutine@ trait to compile.
      909The advantage of this approach is that users can easily create different types of coroutines; for example, changing the memory layout of a coroutine is trivial when implementing the @get_coroutine@ routine.
     910The \CFA keyword @coroutine@ simply has the effect of implementing the getter and forward declarations required for users to implement the main routine.
    609911
    610912\begin{center}
    611913\begin{tabular}{c c c}
    612 \begin{cfacode}[tabsize=3]
     914\begin{cfa}[tabsize=3]
    613915coroutine MyCoroutine {
    614916        int someValue;
    615917};
    616 \end{cfacode} & == & \begin{cfacode}[tabsize=3]
     918\end{cfa} & == & \begin{cfa}[tabsize=3]
    617919struct MyCoroutine {
    618920        int someValue;
     
    628930
    629931void main(struct MyCoroutine* this);
    630 \end{cfacode}
     932\end{cfa}
    631933\end{tabular}
    632934\end{center}
     
    634936The combination of these two approaches allows users new to coroutining and concurrency to have an easy and concise specification, while more advanced users have tighter control on memory layout and initialization.
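For instance, the following sketch satisfies the trait by hand to control layout; it assumes the runtime descriptor type is @coroutine_desc@ and that the trait requires a @get_coroutine@ routine, mirroring the @thread_desc@/@get_thread@ pair shown for threads below:
\begin{cfa}
struct MyCustomCoroutine {
        int results[64];                                $\C{// user data laid out first for locality}$
        coroutine_desc resources;                       $\C{// assumed runtime descriptor, placed last}$
};
coroutine_desc * get_coroutine( MyCustomCoroutine & this ) {
        return &this.resources;                         $\C{// satisfies the trait's getter}$
}
void main( MyCustomCoroutine & this );                  $\C{// coroutine main supplied by the user}$
\end{cfa}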
    635937
    636 \section{Thread Interface}\label{threads}
    637 The basic building blocks of multithreading in \CFA are \textbf{cfathread}. Both user and kernel threads are supported, where user threads are the concurrency mechanism and kernel threads are the parallel mechanism. User threads offer a flexible and lightweight interface. A thread can be declared using a struct declaration \code{thread} as follows:
    638 
    639 \begin{cfacode}
     938\subsection{Thread Interface}\label{threads}
      939The basic building block of multithreading in \CFA is the \textbf{cfathread}.
      940Both user and kernel threads are supported, where user threads are the concurrency mechanism and kernel threads are the parallel mechanism.
      941User threads offer a flexible and lightweight interface.
      942A thread can be declared with a structure-like declaration using the keyword @thread@ as follows:
     943
     944\begin{cfa}
    640945thread foo {};
    641 \end{cfacode}
     946\end{cfa}
    642947
    643948As for coroutines, the keyword is a thin wrapper around a \CFA trait:
    644949
    645 \begin{cfacode}
     950\begin{cfa}
    646951trait is_thread(dtype T) {
    647952      void ^?{}(T & mutex this);
     
    649954      thread_desc* get_thread(T & this);
    650955};
    651 \end{cfacode}
    652 
    653 Obviously, for this thread implementation to be useful it must run some user code. Several other threading interfaces use a function-pointer representation as the interface of threads (for example \Csharp~\cite{Csharp} and Scala~\cite{Scala}). However, this proposal considers that statically tying a \code{main} routine to a thread supersedes this approach. Since the \code{main} routine is already a special routine in \CFA (where the program begins), it is a natural extension of the semantics to use overloading to declare mains for different threads (the normal main being the main of the initial thread). As such the \code{main} routine of a thread can be defined as
    654 \begin{cfacode}
     956\end{cfa}
     957
     958Obviously, for this thread implementation to be useful it must run some user code.
     959Several other threading interfaces use a function-pointer representation as the interface of threads (for example \Csharp~\cite{Csharp} and Scala~\cite{Scala}).
     960However, this proposal considers that statically tying a @main@ routine to a thread supersedes this approach.
     961Since the @main@ routine is already a special routine in \CFA (where the program begins), it is a natural extension of the semantics to use overloading to declare mains for different threads (the normal main being the main of the initial thread).
     962As such the @main@ routine of a thread can be defined as
     963\begin{cfa}
    655964thread foo {};
    656965
     
    658967        sout | "Hello World!" | endl;
    659968}
    660 \end{cfacode}
    661 
    662 In this example, threads of type \code{foo} start execution in the \code{void main(foo &)} routine, which prints \code{"Hello World!".} While this paper encourages this approach to enforce strongly typed programming, users may prefer to use the routine-based thread semantics for the sake of simplicity. With the static semantics it is trivial to write a thread type that takes a function pointer as a parameter and executes it on its stack asynchronously.
    663 \begin{cfacode}
     969\end{cfa}
     970
      971In this example, threads of type @foo@ start execution in the @void main(foo &)@ routine, which prints @"Hello World!"@. While this paper encourages this approach to enforce strongly typed programming, users may prefer to use the routine-based thread semantics for the sake of simplicity.
     972With the static semantics it is trivial to write a thread type that takes a function pointer as a parameter and executes it on its stack asynchronously.
     973\begin{cfa}
    664974typedef void (*voidFunc)(int);
    665975
     
    675985
    676986void main(FuncRunner & this) {
    677         //thread starts here and runs the function
     987        // thread starts here and runs the function
    678988        this.func( this.arg );
    679989}
     
     687997        return 0;
    688998}
    689 \end{cfacode}
     999\end{cfa}
    6901000
     6911001A consequence of the strongly typed approach to main is that the memory layout of parameters and return values to/from a thread is now explicitly specified in the \textbf{api}.
    6921002
    693 Of course, for threads to be useful, it must be possible to start and stop threads and wait for them to complete execution. While using an \textbf{api} such as \code{fork} and \code{join} is relatively common in the literature, such an interface is unnecessary. Indeed, the simplest approach is to use \textbf{raii} principles and have threads \code{fork} after the constructor has completed and \code{join} before the destructor runs.
    694 \begin{cfacode}
     1003Of course, for threads to be useful, it must be possible to start and stop threads and wait for them to complete execution.
     1004While using an \textbf{api} such as @fork@ and @join@ is relatively common in the literature, such an interface is unnecessary.
     1005Indeed, the simplest approach is to use \textbf{raii} principles and have threads @fork@ after the constructor has completed and @join@ before the destructor runs.
     1006\begin{cfa}
    6951007thread World;
    6961008
     
    7011013void main() {
    7021014        World w;
    703         //Thread forks here
    704 
    705         //Printing "Hello " and "World!" are run concurrently
     1015        // Thread forks here
     1016
     1017        // Printing "Hello " and "World!" are run concurrently
    7061018        sout | "Hello " | endl;
    7071019
    708         //Implicit join at end of scope
    709 }
    710 \end{cfacode}
     1020        // Implicit join at end of scope
     1021}
     1022\end{cfa}
    7111023
     7121024This semantics has several advantages over explicit semantics: a thread is always started and stopped exactly once, users cannot forget to start or join a thread, and it naturally scales to multiple threads, meaning basic synchronization is very simple.
    7131025
    714 \begin{cfacode}
     1026\begin{cfa}
    7151027thread MyThread {
    7161028        //...
    7171029};
    7181030
    719 //main
     1031// main
    7201032void main(MyThread& this) {
    7211033        //...
     
    7241036void foo() {
    7251037        MyThread thrds[10];
    726         //Start 10 threads at the beginning of the scope
     1038        // Start 10 threads at the beginning of the scope
    7271039
    7281040        DoStuff();
    7291041
    730         //Wait for the 10 threads to finish
    731 }
    732 \end{cfacode}
    733 
    734 However, one of the drawbacks of this approach is that threads always form a tree where nodes must always outlive their children, i.e., they are always destroyed in the opposite order of construction because of C scoping rules. This restriction is relaxed by using dynamic allocation, so threads can outlive the scope in which they are created, much like dynamically allocating memory lets objects outlive the scope in which they are created.
    735 
    736 \begin{cfacode}
     1042        // Wait for the 10 threads to finish
     1043}
     1044\end{cfa}
     1045
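For example, the following sketch (the @Adder@ type and its fields are illustrative) shows how inputs are passed through the thread's constructor and a result is returned through a pointer held by the thread, relying on the implicit fork at construction and join at end of scope:
\begin{cfa}
thread Adder {
        int * row;  int cols;                           $\C{// input: slice to sum}$
        int * subtotal;                                 $\C{// output: location for the result}$
};
void ?{}( Adder & adder, int row[], int cols, int * subtotal ) {
        adder.row = row;  adder.cols = cols;  adder.subtotal = subtotal;
}
void main( Adder & adder ) with( adder ) {              $\C{// thread body}$
        *subtotal = 0;
        for ( int c = 0; c < cols; c += 1 ) *subtotal += row[c];
}
int sum( int row[], int cols ) {
        int subtotal;
        {
                Adder adder = { row, cols, &subtotal }; $\C{// thread forks here}$
                // ... other work overlaps with the addition ...
        }                                               $\C{// implicit join at end of scope}$
        return subtotal;                                $\C{// result is valid after the join}$
}
\end{cfa}
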
      1046However, one drawback of this approach is that threads always form a tree where nodes must outlive their children, \ie they are destroyed in the opposite order of construction because of C scoping rules.
     1047This restriction is relaxed by using dynamic allocation, so threads can outlive the scope in which they are created, much like dynamically allocating memory lets objects outlive the scope in which they are created.
     1048
     1049\begin{cfa}
    7371050thread MyThread {
    7381051        //...
     
    7461059        MyThread* long_lived;
    7471060        {
    748                 //Start a thread at the beginning of the scope
     1061                // Start a thread at the beginning of the scope
    7491062                MyThread short_lived;
    7501063
    751                 //create another thread that will outlive the thread in this scope
     1064                // create another thread that will outlive the thread in this scope
    7521065                long_lived = new MyThread;
    7531066
    7541067                DoStuff();
    7551068
    756                 //Wait for the thread short_lived to finish
     1069                // Wait for the thread short_lived to finish
    7571070        }
    7581071        DoMoreStuff();
    7591072
    760         //Now wait for the long_lived to finish
     1073        // Now wait for the long_lived to finish
    7611074        delete long_lived;
    7621075}
    763 \end{cfacode}
     1076\end{cfa}
    7641077
    7651078
     
    7691082% ======================================================================
    7701083% ======================================================================
    771 Several tools can be used to solve concurrency challenges. Since many of these challenges appear with the use of mutable shared state, some languages and libraries simply disallow mutable shared state (Erlang~\cite{Erlang}, Haskell~\cite{Haskell}, Akka (Scala)~\cite{Akka}). In these paradigms, interaction among concurrent objects relies on message passing~\cite{Thoth,Harmony,V-Kernel} or other paradigms closely relate to networking concepts (channels~\cite{CSP,Go} for example). However, in languages that use routine calls as their core abstraction mechanism, these approaches force a clear distinction between concurrent and non-concurrent paradigms (i.e., message passing versus routine calls). This distinction in turn means that, in order to be effective, programmers need to learn two sets of design patterns. While this distinction can be hidden away in library code, effective use of the library still has to take both paradigms into account.
    772 
    773 Approaches based on shared memory are more closely related to non-concurrent paradigms since they often rely on basic constructs like routine calls and shared objects. At the lowest level, concurrent paradigms are implemented as atomic operations and locks. Many such mechanisms have been proposed, including semaphores~\cite{Dijkstra68b} and path expressions~\cite{Campbell74}. However, for productivity reasons it is desirable to have a higher-level construct be the core concurrency paradigm~\cite{HPP:Study}.
    774 
    775 An approach that is worth mentioning because it is gaining in popularity is transactional memory~\cite{Herlihy93}. While this approach is even pursued by system languages like \CC~\cite{Cpp-Transactions}, the performance and feature set is currently too restrictive to be the main concurrency paradigm for system languages, which is why it was rejected as the core paradigm for concurrency in \CFA.
    776 
    777 One of the most natural, elegant, and efficient mechanisms for synchronization and communication, especially for shared-memory systems, is the \emph{monitor}. Monitors were first proposed by Brinch Hansen~\cite{Hansen73} and later described and extended by C.A.R.~Hoare~\cite{Hoare74}. Many programming languages---e.g., Concurrent Pascal~\cite{ConcurrentPascal}, Mesa~\cite{Mesa}, Modula~\cite{Modula-2}, Turing~\cite{Turing:old}, Modula-3~\cite{Modula-3}, NeWS~\cite{NeWS}, Emerald~\cite{Emerald}, \uC~\cite{Buhr92a} and Java~\cite{Java}---provide monitors as explicit language constructs. In addition, operating-system kernels and device drivers have a monitor-like structure, although they often use lower-level primitives such as semaphores or locks to simulate monitors. For these reasons, this project proposes monitors as the core concurrency construct.
    778 
    779 \section{Basics}
    780 Non-determinism requires concurrent systems to offer support for mutual-exclusion and synchronization. Mutual-exclusion is the concept that only a fixed number of threads can access a critical section at any given time, where a critical section is a group of instructions on an associated portion of data that requires the restricted access. On the other hand, synchronization enforces relative ordering of execution and synchronization tools provide numerous mechanisms to establish timing relationships among threads.
    781 
    782 \subsection{Mutual-Exclusion}
    783 As mentioned above, mutual-exclusion is the guarantee that only a fix number of threads can enter a critical section at once. However, many solutions exist for mutual exclusion, which vary in terms of performance, flexibility and ease of use. Methods range from low-level locks, which are fast and flexible but require significant attention to be correct, to  higher-level concurrency techniques, which sacrifice some performance in order to improve ease of use. Ease of use comes by either guaranteeing some problems cannot occur (e.g., being deadlock free) or by offering a more explicit coupling between data and corresponding critical section. For example, the \CC \code{std::atomic<T>} offers an easy way to express mutual-exclusion on a restricted set of operations (e.g., reading/writing large types atomically). Another challenge with low-level locks is composability. Locks have restricted composability because it takes careful organizing for multiple locks to be used while preventing deadlocks. Easing composability is another feature higher-level mutual-exclusion mechanisms often offer.
    784 
    785 \subsection{Synchronization}
    786 As with mutual-exclusion, low-level synchronization primitives often offer good performance and good flexibility at the cost of ease of use. Again, higher-level mechanisms often simplify usage by adding either better coupling between synchronization and data (e.g., message passing) or offering a simpler solution to otherwise involved challenges. As mentioned above, synchronization can be expressed as guaranteeing that event \textit{X} always happens before \textit{Y}. Most of the time, synchronization happens within a critical section, where threads must acquire mutual-exclusion in a certain order. However, it may also be desirable to guarantee that event \textit{Z} does not occur between \textit{X} and \textit{Y}. Not satisfying this property is called \textbf{barging}. For example, where event \textit{X} tries to effect event \textit{Y} but another thread acquires the critical section and emits \textit{Z} before \textit{Y}. The classic example is the thread that finishes using a resource and unblocks a thread waiting to use the resource, but the unblocked thread must compete to acquire the resource. Preventing or detecting barging is an involved challenge with low-level locks, which can be made much easier by higher-level constructs. This challenge is often split into two different methods, barging avoidance and barging prevention. Algorithms that use flag variables to detect barging threads are said to be using barging avoidance, while algorithms that baton-pass locks~\cite{Andrews89} between threads instead of releasing the locks are said to be using barging prevention.
     1084Several tools can be used to solve concurrency challenges.
     1085Since many of these challenges appear with the use of mutable shared state, some languages and libraries simply disallow mutable shared state (Erlang~\cite{Erlang}, Haskell~\cite{Haskell}, Akka (Scala)~\cite{Akka}).
      1086In these paradigms, interaction among concurrent objects relies on message passing~\cite{Thoth,Harmony,V-Kernel} or other paradigms closely related to networking concepts (channels~\cite{CSP,Go} for example).
     1087However, in languages that use routine calls as their core abstraction mechanism, these approaches force a clear distinction between concurrent and non-concurrent paradigms (\ie message passing versus routine calls).
     1088This distinction in turn means that, in order to be effective, programmers need to learn two sets of design patterns.
     1089While this distinction can be hidden away in library code, effective use of the library still has to take both paradigms into account.
     1090
     1091Approaches based on shared memory are more closely related to non-concurrent paradigms since they often rely on basic constructs like routine calls and shared objects.
     1092At the lowest level, concurrent paradigms are implemented as atomic operations and locks.
     1093Many such mechanisms have been proposed, including semaphores~\cite{Dijkstra68b} and path expressions~\cite{Campbell74}.
     1094However, for productivity reasons it is desirable to have a higher-level construct be the core concurrency paradigm~\cite{Hochstein05}.
     1095
     1096An approach that is worth mentioning because it is gaining in popularity is transactional memory~\cite{Herlihy93}.
     1097While this approach is even pursued by system languages like \CC~\cite{Cpp-Transactions}, the performance and feature set is currently too restrictive to be the main concurrency paradigm for system languages, which is why it was rejected as the core paradigm for concurrency in \CFA.
     1098
     1099One of the most natural, elegant, and efficient mechanisms for synchronization and communication, especially for shared-memory systems, is the \emph{monitor}.
     1100Monitors were first proposed by Brinch Hansen~\cite{Hansen73} and later described and extended by C.A.R.~Hoare~\cite{Hoare74}.
     1101Many programming languages---\eg Concurrent Pascal~\cite{ConcurrentPascal}, Mesa~\cite{Mesa}, Modula~\cite{Modula-2}, Turing~\cite{Turing:old}, Modula-3~\cite{Modula-3}, NeWS~\cite{NeWS}, Emerald~\cite{Emerald}, \uC~\cite{Buhr92a} and Java~\cite{Java}---provide monitors as explicit language constructs.
     1102In addition, operating-system kernels and device drivers have a monitor-like structure, although they often use lower-level primitives such as semaphores or locks to simulate monitors.
     1103For these reasons, this project proposes monitors as the core concurrency construct.
     1104
     1105
     1106\subsection{Basics}
     1107
     1108Non-determinism requires concurrent systems to offer support for mutual-exclusion and synchronization.
     1109Mutual-exclusion is the concept that only a fixed number of threads can access a critical section at any given time, where a critical section is a group of instructions on an associated portion of data that requires the restricted access.
     1110On the other hand, synchronization enforces relative ordering of execution and synchronization tools provide numerous mechanisms to establish timing relationships among threads.
     1111
     1112
     1113\subsubsection{Mutual-Exclusion}
     1114
      1115As mentioned above, mutual-exclusion is the guarantee that only a fixed number of threads can enter a critical section at once.
      1116However, many solutions exist for mutual exclusion, which vary in terms of performance, flexibility and ease of use.
      1117Methods range from low-level locks, which are fast and flexible but require significant attention to be correct, to higher-level concurrency techniques, which sacrifice some performance in order to improve ease of use.
     1118Ease of use comes by either guaranteeing some problems cannot occur (\eg being deadlock free) or by offering a more explicit coupling between data and corresponding critical section.
     1119For example, the \CC @std::atomic<T>@ offers an easy way to express mutual-exclusion on a restricted set of operations (\eg reading/writing large types atomically).
     1120Another challenge with low-level locks is composability.
     1121Locks have restricted composability because it takes careful organizing for multiple locks to be used while preventing deadlocks.
     1122Easing composability is another feature higher-level mutual-exclusion mechanisms often offer.
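The composability problem can be seen in the following sketch using @pthread@ mutexes: two tasks lock the same pair of mutexes in opposite orders, which can deadlock when each task acquires its first lock before the other releases:
\begin{cfa}
#include <pthread.h>
pthread_mutex_t A = PTHREAD_MUTEX_INITIALIZER, B = PTHREAD_MUTEX_INITIALIZER;
void * task1( void * arg ) {
        pthread_mutex_lock( &A );  pthread_mutex_lock( &B );   $\C{// order: A then B}$
        // ... critical section over both resources ...
        pthread_mutex_unlock( &B );  pthread_mutex_unlock( &A );
        return 0;
}
void * task2( void * arg ) {
        pthread_mutex_lock( &B );  pthread_mutex_lock( &A );   $\C{// order: B then A, possible deadlock}$
        // ... critical section over both resources ...
        pthread_mutex_unlock( &A );  pthread_mutex_unlock( &B );
        return 0;
}
\end{cfa}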
     1123
     1124
     1125\subsubsection{Synchronization}
     1126
     1127As with mutual-exclusion, low-level synchronization primitives often offer good performance and good flexibility at the cost of ease of use.
     1128Again, higher-level mechanisms often simplify usage by adding either better coupling between synchronization and data (\eg message passing) or offering a simpler solution to otherwise involved challenges.
     1129As mentioned above, synchronization can be expressed as guaranteeing that event \textit{X} always happens before \textit{Y}.
     1130Most of the time, synchronization happens within a critical section, where threads must acquire mutual-exclusion in a certain order.
     1131However, it may also be desirable to guarantee that event \textit{Z} does not occur between \textit{X} and \textit{Y}.
     1132Not satisfying this property is called \textbf{barging}.
      1133For example, event \textit{X} may try to effect event \textit{Y}, but another thread acquires the critical section and emits \textit{Z} before \textit{Y}.
      1134The classic example is the thread that finishes using a resource and unblocks a thread waiting to use the resource, but the unblocked thread must compete to acquire the resource.
      1135Preventing or detecting barging is an involved challenge with low-level locks, which can be made much easier by higher-level constructs.
      1136This challenge is often addressed by two different methods: barging avoidance and barging prevention.
     1137Algorithms that use flag variables to detect barging threads are said to be using barging avoidance, while algorithms that baton-pass locks~\cite{Andrews89} between threads instead of releasing the locks are said to be using barging prevention.
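The resource example above can be sketched with @pthread@ primitives; because the lock is released between the signal and the waiter resuming, a barging thread can take the resource first, so the waiter must re-check the flag (a form of barging avoidance):
\begin{cfa}
#include <pthread.h>
#include <stdbool.h>
pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t cv = PTHREAD_COND_INITIALIZER;
bool available = true;                                  $\C{// resource flag}$
void release() {                                        $\C{// event X: finish with the resource}$
        pthread_mutex_lock( &m );
        available = true;
        pthread_cond_signal( &cv );                     $\C{// intended to cause event Y}$
        pthread_mutex_unlock( &m );
}
void acquire() {                                        $\C{// waiter, or a barging thread (event Z)}$
        pthread_mutex_lock( &m );
        while ( ! available )                           $\C{// re-check: a barger may have taken the resource}$
                pthread_cond_wait( &cv, &m );
        available = false;
        pthread_mutex_unlock( &m );
}
\end{cfa}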
     1138
    7871139
    7881140% ======================================================================
     
    7911143% ======================================================================
    7921144% ======================================================================
    793 A \textbf{monitor} is a set of routines that ensure mutual-exclusion when accessing shared state. More precisely, a monitor is a programming technique that associates mutual-exclusion to routine scopes, as opposed to mutex locks, where mutual-exclusion is defined by lock/release calls independently of any scoping of the calling routine. This strong association eases readability and maintainability, at the cost of flexibility. Note that both monitors and mutex locks, require an abstract handle to identify them. This concept is generally associated with object-oriented languages like Java~\cite{Java} or \uC~\cite{uC++book} but does not strictly require OO semantics. The only requirement is the ability to declare a handle to a shared object and a set of routines that act on it:
    794 \begin{cfacode}
     1145A \textbf{monitor} is a set of routines that ensure mutual-exclusion when accessing shared state.
     1146More precisely, a monitor is a programming technique that associates mutual-exclusion to routine scopes, as opposed to mutex locks, where mutual-exclusion is defined by lock/release calls independently of any scoping of the calling routine.
     1147This strong association eases readability and maintainability, at the cost of flexibility.
     1148Note that both monitors and mutex locks, require an abstract handle to identify them.
     1149This concept is generally associated with object-oriented languages like Java~\cite{Java} or \uC~\cite{uC++book} but does not strictly require OO semantics.
     1150The only requirement is the ability to declare a handle to a shared object and a set of routines that act on it:
     1151\begin{cfa}
    7951152typedef /*some monitor type*/ monitor;
    7961153int f(monitor & m);
    7971154
    7981155int main() {
    799         monitor m;  //Handle m
    800         f(m);       //Routine using handle
    801 }
    802 \end{cfacode}
     1156        monitor m;  // Handle m
     1157        f(m);       // Routine using handle
     1158}
     1159\end{cfa}
    8031160
    8041161% ======================================================================
     
    8071164% ======================================================================
    8081165% ======================================================================
    809 The above monitor example displays some of the intrinsic characteristics. First, it is necessary to use pass-by-reference over pass-by-value for monitor routines. This semantics is important, because at their core, monitors are implicit mutual-exclusion objects (locks), and these objects cannot be copied. Therefore, monitors are non-copy-able objects (\code{dtype}).
    810 
    811 Another aspect to consider is when a monitor acquires its mutual exclusion. For example, a monitor may need to be passed through multiple helper routines that do not acquire the monitor mutual-exclusion on entry. Passthrough can occur for generic helper routines (\code{swap}, \code{sort}, etc.) or specific helper routines like the following to implement an atomic counter:
    812 
    813 \begin{cfacode}
     1166The above monitor example displays some of the intrinsic characteristics.
     1167First, it is necessary to use pass-by-reference over pass-by-value for monitor routines.
     1168This semantics is important, because at their core, monitors are implicit mutual-exclusion objects (locks), and these objects cannot be copied.
     1169Therefore, monitors are non-copy-able objects (@dtype@).
     1170
     1171Another aspect to consider is when a monitor acquires its mutual exclusion.
     1172For example, a monitor may need to be passed through multiple helper routines that do not acquire the monitor mutual-exclusion on entry.
     1173Passthrough can occur for generic helper routines (@swap@, @sort@, \etc) or specific helper routines like the following to implement an atomic counter:
     1174
     1175\begin{cfa}
    8141176monitor counter_t { /*...see section $\ref{data}$...*/ };
    8151177
    816 void ?{}(counter_t & nomutex this); //constructor
    817 size_t ++?(counter_t & mutex this); //increment
    818 
    819 //need for mutex is platform dependent
    820 void ?{}(size_t * this, counter_t & mutex cnt); //conversion
    821 \end{cfacode}
     1178void ?{}(counter_t & nomutex this); // constructor
     1179size_t ++?(counter_t & mutex this); // increment
     1180
     1181// need for mutex is platform dependent
     1182void ?{}(size_t * this, counter_t & mutex cnt); // conversion
     1183\end{cfa}
    8221184This counter is used as follows:
    8231185\begin{center}
    8241186\begin{tabular}{c @{\hskip 0.35in} c @{\hskip 0.35in} c}
    825 \begin{cfacode}
    826 //shared counter
     1187\begin{cfa}
     1188// shared counter
    8271189counter_t cnt1, cnt2;
    8281190
    829 //multiple threads access counter
     1191// multiple threads access counter
    8301192thread 1 : cnt1++; cnt2++;
    8311193thread 2 : cnt1++; cnt2++;
     
    8331195        ...
    8341196thread N : cnt1++; cnt2++;
    835 \end{cfacode}
     1197\end{cfa}
    8361198\end{tabular}
    8371199\end{center}
    838 Notice how the counter is used without any explicit synchronization and yet supports thread-safe semantics for both reading and writing, which is similar in usage to the \CC template \code{std::atomic}.
    839 
    840 Here, the constructor (\code{?\{\}}) uses the \code{nomutex} keyword to signify that it does not acquire the monitor mutual-exclusion when constructing. This semantics is because an object not yet con\-structed should never be shared and therefore does not require mutual exclusion. Furthermore, it allows the implementation greater freedom when it initializes the monitor locking. The prefix increment operator uses \code{mutex} to protect the incrementing process from race conditions. Finally, there is a conversion operator from \code{counter_t} to \code{size_t}. This conversion may or may not require the \code{mutex} keyword depending on whether or not reading a \code{size_t} is an atomic operation.
    841 
    842 For maximum usability, monitors use \textbf{multi-acq} semantics, which means a single thread can acquire the same monitor multiple times without deadlock. For example, listing \ref{fig:search} uses recursion and \textbf{multi-acq} to print values inside a binary tree.
     1200Notice how the counter is used without any explicit synchronization and yet supports thread-safe semantics for both reading and writing, which is similar in usage to the \CC template @std::atomic@.
     1201
     1202Here, the constructor (@?{}@) uses the @nomutex@ keyword to signify that it does not acquire the monitor mutual-exclusion when constructing.
     1203This semantics is because an object not yet constructed should never be shared and therefore does not require mutual exclusion.
     1204Furthermore, it allows the implementation greater freedom when it initializes the monitor locking.
     1205The prefix increment operator uses @mutex@ to protect the incrementing process from race conditions.
     1206Finally, there is a conversion operator from @counter_t@ to @size_t@.
     1207This conversion may or may not require the @mutex@ keyword depending on whether or not reading a @size_t@ is an atomic operation.
     1208
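For instance, a minimal usage sketch of these operations (the local variable is illustrative, and it is assumed the initialization selects the conversion constructor declared above):
\begin{cfa}
counter_t cnt;
cnt++;                  // mutex routine: thread-safe increment
size_t snapshot = cnt;  // conversion to size_t, mutex only if the read is not atomic
\end{cfa}
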
     1209For maximum usability, monitors use \textbf{multi-acq} semantics, which means a single thread can acquire the same monitor multiple times without deadlock.
     1210For example, listing \ref{fig:search} uses recursion and \textbf{multi-acq} to print values inside a binary tree.
    8431211\begin{figure}
    844 \begin{cfacode}[caption={Recursive printing algorithm using \textbf{multi-acq}.},label={fig:search}]
     1212\begin{cfa}[caption={Recursive printing algorithm using \textbf{multi-acq}.},label={fig:search}]
    8451213monitor printer { ... };
    8461214struct tree {
     
    8551223        print(p, t->right);
    8561224}
    857 \end{cfacode}
     1225\end{cfa}
    8581226\end{figure}
    8591227
    860 Having both \code{mutex} and \code{nomutex} keywords can be redundant, depending on the meaning of a routine having neither of these keywords. For example, it is reasonable that it should default to the safest option (\code{mutex}) when given a routine without qualifiers \code{void foo(counter_t & this)}, whereas assuming \code{nomutex} is unsafe and may cause subtle errors. On the other hand, \code{nomutex} is the ``normal'' parameter behaviour, it effectively states explicitly that ``this routine is not special''. Another alternative is making exactly one of these keywords mandatory, which provides the same semantics but without the ambiguity of supporting routines with neither keyword. Mandatory keywords would also have the added benefit of being self-documented but at the cost of extra typing. While there are several benefits to mandatory keywords, they do bring a few challenges. Mandatory keywords in \CFA would imply that the compiler must know without doubt whether or not a parameter is a monitor or not. Since \CFA relies heavily on traits as an abstraction mechanism, the distinction between a type that is a monitor and a type that looks like a monitor can become blurred. For this reason, \CFA only has the \code{mutex} keyword and uses no keyword to mean \code{nomutex}.
    861 
    862 The next semantic decision is to establish when \code{mutex} may be used as a type qualifier. Consider the following declarations:
    863 \begin{cfacode}
     1228Having both @mutex@ and @nomutex@ keywords can be redundant, depending on the meaning of a routine having neither of these keywords.
     1229For example, it is reasonable that it should default to the safest option (@mutex@) when given a routine without qualifiers @void foo(counter_t & this)@, whereas assuming @nomutex@ is unsafe and may cause subtle errors.
     1230On the other hand, @nomutex@ is the ``normal'' parameter behaviour; it effectively states explicitly that ``this routine is not special''.
     1231Another alternative is making exactly one of these keywords mandatory, which provides the same semantics but without the ambiguity of supporting routines with neither keyword.
     1232Mandatory keywords would also have the added benefit of being self-documenting but at the cost of extra typing.
     1233While there are several benefits to mandatory keywords, they do bring a few challenges.
     1234Mandatory keywords in \CFA would imply that the compiler must know without doubt whether or not a parameter is a monitor.
     1235Since \CFA relies heavily on traits as an abstraction mechanism, the distinction between a type that is a monitor and a type that looks like a monitor can become blurred.
     1236For this reason, \CFA only has the @mutex@ keyword and uses no keyword to mean @nomutex@.
     1237
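To make the convention concrete, here is a small sketch (the type and routine names are illustrative, not from the original):
\begin{cfa}
monitor M { int data; };
void update( M & mutex m ) { m.data += 1; }     // monitor acquired on entry
void helper( M & m ) {                          // no keyword, therefore nomutex: no acquisition
        update( m );                            // mutual exclusion is acquired only inside update
}
\end{cfa}
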
     1238The next semantic decision is to establish when @mutex@ may be used as a type qualifier.
     1239Consider the following declarations:
     1240\begin{cfa}
    8641241int f1(monitor & mutex m);
    8651242int f2(const monitor & mutex m);
     
    8671244int f4(monitor * mutex m []);
    8681245int f5(graph(monitor *) & mutex m);
    869 \end{cfacode}
    870 The problem is to identify which object(s) should be acquired. Furthermore, each object needs to be acquired only once. In the case of simple routines like \code{f1} and \code{f2} it is easy to identify an exhaustive list of objects to acquire on entry. Adding indirections (\code{f3}) still allows the compiler and programmer to identify which object is acquired. However, adding in arrays (\code{f4}) makes it much harder. Array lengths are not necessarily known in C, and even then, making sure objects are only acquired once becomes none-trivial. This problem can be extended to absurd limits like \code{f5}, which uses a graph of monitors. To make the issue tractable, this project imposes the requirement that a routine may only acquire one monitor per parameter and it must be the type of the parameter with at most one level of indirection (ignoring potential qualifiers). Also note that while routine \code{f3} can be supported, meaning that monitor \code{**m} is acquired, passing an array to this routine would be type-safe and yet result in undefined behaviour because only the first element of the array is acquired. However, this ambiguity is part of the C type-system with respects to arrays. For this reason, \code{mutex} is disallowed in the context where arrays may be passed:
    871 \begin{cfacode}
    872 int f1(monitor & mutex m);    //Okay : recommended case
    873 int f2(monitor * mutex m);    //Not Okay : Could be an array
    874 int f3(monitor mutex m []);  //Not Okay : Array of unknown length
    875 int f4(monitor ** mutex m);   //Not Okay : Could be an array
    876 int f5(monitor * mutex m []); //Not Okay : Array of unknown length
    877 \end{cfacode}
    878 Note that not all array functions are actually distinct in the type system. However, even if the code generation could tell the difference, the extra information is still not sufficient to extend meaningfully the monitor call semantic.
    879 
    880 Unlike object-oriented monitors, where calling a mutex member \emph{implicitly} acquires mutual-exclusion of the receiver object, \CFA uses an explicit mechanism to specify the object that acquires mutual-exclusion. A consequence of this approach is that it extends naturally to multi-monitor calls.
    881 \begin{cfacode}
     1246\end{cfa}
     1247The problem is to identify which object(s) should be acquired.
     1248Furthermore, each object needs to be acquired only once.
     1249In the case of simple routines like @f1@ and @f2@ it is easy to identify an exhaustive list of objects to acquire on entry.
     1250Adding indirections (@f3@) still allows the compiler and programmer to identify which object is acquired.
     1251However, adding in arrays (@f4@) makes it much harder.
     1252Array lengths are not necessarily known in C, and even then, making sure objects are only acquired once becomes non-trivial.
     1253This problem can be extended to absurd limits like @f5@, which uses a graph of monitors.
     1254To make the issue tractable, this project imposes the requirement that a routine may only acquire one monitor per parameter and it must be the type of the parameter with at most one level of indirection (ignoring potential qualifiers).
     1255Also note that while routine @f3@ can be supported, meaning that monitor @**m@ is acquired, passing an array to this routine would be type-safe and yet result in undefined behaviour because only the first element of the array is acquired.
     1256However, this ambiguity is part of the C type-system with respect to arrays.
     1257For this reason, @mutex@ is disallowed in the context where arrays may be passed:
     1258\begin{cfa}
     1259int f1(monitor & mutex m);    // Okay : recommended case
     1260int f2(monitor * mutex m);    // Not Okay : Could be an array
     1261int f3(monitor mutex m []);  // Not Okay : Array of unknown length
     1262int f4(monitor ** mutex m);   // Not Okay : Could be an array
     1263int f5(monitor * mutex m []); // Not Okay : Array of unknown length
     1264\end{cfa}
     1265Note that not all array functions are actually distinct in the type system.
     1266However, even if the code generation could tell the difference, the extra information is still not sufficient to meaningfully extend the monitor-call semantics.
     1267
     1268Unlike object-oriented monitors, where calling a mutex member \emph{implicitly} acquires mutual-exclusion of the receiver object, \CFA uses an explicit mechanism to specify the object that acquires mutual-exclusion.
     1269A consequence of this approach is that it extends naturally to multi-monitor calls.
     1270\begin{cfa}
    8821271int f(MonitorA & mutex a, MonitorB & mutex b);
    8831272
     
    8851274MonitorB b;
    8861275f(a,b);
    887 \end{cfacode}
    888 While OO monitors could be extended with a mutex qualifier for multiple-monitor calls, no example of this feature could be found. The capability to acquire multiple locks before entering a critical section is called \emph{\textbf{bulk-acq}}. In practice, writing multi-locking routines that do not lead to deadlocks is tricky. Having language support for such a feature is therefore a significant asset for \CFA. In the case presented above, \CFA guarantees that the order of acquisition is consistent across calls to different routines using the same monitors as arguments. This consistent ordering means acquiring multiple monitors is safe from deadlock when using \textbf{bulk-acq}. However, users can still force the acquiring order. For example, notice which routines use \code{mutex}/\code{nomutex} and how this affects acquiring order:
    889 \begin{cfacode}
    890 void foo(A& mutex a, B& mutex b) { //acquire a & b
     1276\end{cfa}
     1277While OO monitors could be extended with a mutex qualifier for multiple-monitor calls, no example of this feature could be found.
     1278The capability to acquire multiple locks before entering a critical section is called \emph{\textbf{bulk-acq}}.
     1279In practice, writing multi-locking routines that do not lead to deadlocks is tricky.
     1280Having language support for such a feature is therefore a significant asset for \CFA.
     1281In the case presented above, \CFA guarantees that the order of acquisition is consistent across calls to different routines using the same monitors as arguments.
     1282This consistent ordering means acquiring multiple monitors is safe from deadlock when using \textbf{bulk-acq}.
     1283However, users can still force the acquiring order.
     1284For example, notice which routines use @mutex@/@nomutex@ and how this affects acquiring order:
     1285\begin{cfa}
     1286void foo(A& mutex a, B& mutex b) { // acquire a & b
    8911287        ...
    8921288}
    8931289
    894 void bar(A& mutex a, B& /*nomutex*/ b) { //acquire a
    895         ... foo(a, b); ... //acquire b
    896 }
    897 
    898 void baz(A& /*nomutex*/ a, B& mutex b) { //acquire b
    899         ... foo(a, b); ... //acquire a
    900 }
    901 \end{cfacode}
    902 The \textbf{multi-acq} monitor lock allows a monitor lock to be acquired by both \code{bar} or \code{baz} and acquired again in \code{foo}. In the calls to \code{bar} and \code{baz} the monitors are acquired in opposite order.
    903 
    904 However, such use leads to lock acquiring order problems. In the example above, the user uses implicit ordering in the case of function \code{foo} but explicit ordering in the case of \code{bar} and \code{baz}. This subtle difference means that calling these routines concurrently may lead to deadlock and is therefore undefined behaviour. As shown~\cite{Lister77}, solving this problem requires:
     1290void bar(A& mutex a, B& /*nomutex*/ b) { // acquire a
     1291        ... foo(a, b); ... // acquire b
     1292}
     1293
     1294void baz(A& /*nomutex*/ a, B& mutex b) { // acquire b
     1295        ... foo(a, b); ... // acquire a
     1296}
     1297\end{cfa}
     1298The \textbf{multi-acq} monitor lock allows a monitor lock to be acquired by either @bar@ or @baz@ and acquired again in @foo@.
     1299In the calls to @bar@ and @baz@ the monitors are acquired in opposite order.
     1300
     1301However, such use leads to lock acquiring order problems.
     1302In the example above, the user uses implicit ordering in the case of function @foo@ but explicit ordering in the case of @bar@ and @baz@.
     1303This subtle difference means that calling these routines concurrently may lead to deadlock and is therefore undefined behaviour.
     1304As shown~\cite{Lister77}, solving this problem requires:
    9051305\begin{enumerate}
    9061306        \item Dynamically tracking the monitor-call order.
    9071307        \item Implementing rollback semantics.
    9081308\end{enumerate}
    909 While the first requirement is already a significant constraint on the system, implementing a general rollback semantics in a C-like language is still prohibitively complex~\cite{Dice10}. In \CFA, users simply need to be careful when acquiring multiple monitors at the same time or only use \textbf{bulk-acq} of all the monitors. While \CFA provides only a partial solution, most systems provide no solution and the \CFA partial solution handles many useful cases.
     1309While the first requirement is already a significant constraint on the system, implementing a general rollback semantics in a C-like language is still prohibitively complex~\cite{Dice10}.
     1310In \CFA, users simply need to be careful when acquiring multiple monitors at the same time or only use \textbf{bulk-acq} of all the monitors.
     1311While \CFA provides only a partial solution, most systems provide no solution and the \CFA partial solution handles many useful cases.
    9101312
    9111313For example, \textbf{multi-acq} and \textbf{bulk-acq} can be used together in interesting ways:
    912 \begin{cfacode}
     1314\begin{cfa}
    9131315monitor bank { ... };
    9141316
     
    9191321        deposit( yourbank, me2you );
    9201322}
    921 \end{cfacode}
    922 This example shows a trivial solution to the bank-account transfer problem~\cite{BankTransfer}. Without \textbf{multi-acq} and \textbf{bulk-acq}, the solution to this problem is much more involved and requires careful engineering.
    923 
    924 \subsection{\code{mutex} statement} \label{mutex-stmt}
    925 
    926 The call semantics discussed above have one software engineering issue: only a routine can acquire the mutual-exclusion of a set of monitor. \CFA offers the \code{mutex} statement to work around the need for unnecessary names, avoiding a major software engineering problem~\cite{2FTwoHardThings}. Table \ref{lst:mutex-stmt} shows an example of the \code{mutex} statement, which introduces a new scope in which the mutual-exclusion of a set of monitor is acquired. Beyond naming, the \code{mutex} statement has no semantic difference from a routine call with \code{mutex} parameters.
     1323\end{cfa}
     1324This example shows a trivial solution to the bank-account transfer problem~\cite{BankTransfer}.
     1325Without \textbf{multi-acq} and \textbf{bulk-acq}, the solution to this problem is much more involved and requires careful engineering.
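
A minimal self-contained sketch of this transfer (the balance field and the body of @deposit@ are illustrative, since the original listing elides them):
\begin{cfa}
monitor bank { int balance; };
void deposit( bank & mutex b, int amount ) {
        b.balance += amount;
}
void transfer( bank & mutex mybank, bank & mutex yourbank, int me2you ) {
        // bulk-acq acquires both accounts before the body runs, so the two
        // transfer directions cannot deadlock on acquisition order, while
        // multi-acq lets deposit re-acquire monitors this thread already holds
        deposit( mybank, -me2you );
        deposit( yourbank, me2you );
}
\end{cfa}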
     1326
     1327
     1328\subsection{\protect\lstinline|mutex| statement} \label{mutex-stmt}
     1329
     1330The call semantics discussed above have one software engineering issue: only a routine can acquire the mutual-exclusion of a set of monitors. \CFA offers the @mutex@ statement to work around the need for unnecessary names, avoiding a major software engineering problem~\cite{2FTwoHardThings}.
     1331Table \ref{f:mutex-stmt} shows an example of the @mutex@ statement, which introduces a new scope in which the mutual-exclusion of a set of monitors is acquired.
     1332Beyond naming, the @mutex@ statement has no semantic difference from a routine call with @mutex@ parameters.
    9271333
    9281334\begin{table}
    9291335\begin{center}
    9301336\begin{tabular}{|c|c|}
    931 function call & \code{mutex} statement \\
     1337function call & @mutex@ statement \\
    9321338\hline
    933 \begin{cfacode}[tabsize=3]
     1339\begin{cfa}[tabsize=3]
    9341340monitor M {};
    9351341void foo( M & mutex m1, M & mutex m2 ) {
    936         //critical section
     1342        // critical section
    9371343}
    9381344
     
    9401346        foo( m1, m2 );
    9411347}
    942 \end{cfacode}&\begin{cfacode}[tabsize=3]
     1348\end{cfa}&\begin{cfa}[tabsize=3]
    9431349monitor M {};
    9441350void bar( M & m1, M & m2 ) {
    9451351        mutex(m1, m2) {
    946                 //critical section
     1352                // critical section
    9471353        }
    9481354}
    9491355
    9501356
    951 \end{cfacode}
     1357\end{cfa}
    9521358\end{tabular}
    9531359\end{center}
    954 \caption{Regular call semantics vs. \code{mutex} statement}
    955 \label{lst:mutex-stmt}
     1360\caption{Regular call semantics vs. \protect\lstinline|mutex| statement}
     1361\label{f:mutex-stmt}
    9561362\end{table}
    9571363
     
    9611367% ======================================================================
    9621368% ======================================================================
    963 Once the call semantics are established, the next step is to establish data semantics. Indeed, until now a monitor is used simply as a generic handle but in most cases monitors contain shared data. This data should be intrinsic to the monitor declaration to prevent any accidental use of data without its appropriate protection. For example, here is a complete version of the counter shown in section \ref{call}:
    964 \begin{cfacode}
     1369Once the call semantics are established, the next step is to establish data semantics.
     1370Indeed, until now a monitor is used simply as a generic handle but in most cases monitors contain shared data.
     1371This data should be intrinsic to the monitor declaration to prevent any accidental use of data without its appropriate protection.
     1372For example, here is a complete version of the counter shown in section \ref{call}:
     1373\begin{cfa}
    9651374monitor counter_t {
    9661375        int value;
     
    9751384}
    9761385
    977 //need for mutex is platform dependent here
     1386// need for mutex is platform dependent here
    9781387void ?{}(int * this, counter_t & mutex cnt) {
    9791388        *this = (int)cnt;
    9801389}
    981 \end{cfacode}
    982 
    983 Like threads and coroutines, monitors are defined in terms of traits with some additional language support in the form of the \code{monitor} keyword. The monitor trait is:
    984 \begin{cfacode}
     1390\end{cfa}
     1391
     1392Like threads and coroutines, monitors are defined in terms of traits with some additional language support in the form of the @monitor@ keyword.
     1393The monitor trait is:
     1394\begin{cfa}
    9851395trait is_monitor(dtype T) {
    9861396        monitor_desc * get_monitor( T & );
    9871397        void ^?{}( T & mutex );
    9881398};
    989 \end{cfacode}
    990 Note that the destructor of a monitor must be a \code{mutex} routine to prevent deallocation while a thread is accessing the monitor. As with any object, calls to a monitor, using \code{mutex} or otherwise, is undefined behaviour after the destructor has run.
     1399\end{cfa}
     1400Note that the destructor of a monitor must be a @mutex@ routine to prevent deallocation while a thread is accessing the monitor.
     1401As with any object, calls to a monitor, using @mutex@ or otherwise, are undefined behaviour after the destructor has run.
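
As a small sketch of a conforming type (the type and its field are illustrative):
\begin{cfa}
monitor buffer_t { int * elements; };
void  ?{}( buffer_t & this ) { this.elements = 0; }     // construction: no mutual exclusion needed
void ^?{}( buffer_t & mutex this ) {                    // destructor must be a mutex routine
        // release resources; deallocation waits for any thread inside the monitor
}
\end{cfa}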
    9911402
    9921403% ======================================================================
     
    9951406% ======================================================================
    9961407% ======================================================================
    997 In addition to mutual exclusion, the monitors at the core of \CFA's concurrency can also be used to achieve synchronization. With monitors, this capability is generally achieved with internal or external scheduling as in~\cite{Hoare74}. With \textbf{scheduling} loosely defined as deciding which thread acquires the critical section next, \textbf{internal scheduling} means making the decision from inside the critical section (i.e., with access to the shared state), while \textbf{external scheduling} means making the decision when entering the critical section (i.e., without access to the shared state). Since internal scheduling within a single monitor is mostly a solved problem, this paper concentrates on extending internal scheduling to multiple monitors. Indeed, like the \textbf{bulk-acq} semantics, internal scheduling extends to multiple monitors in a way that is natural to the user but requires additional complexity on the implementation side.
     1408In addition to mutual exclusion, the monitors at the core of \CFA's concurrency can also be used to achieve synchronization.
     1409With monitors, this capability is generally achieved with internal or external scheduling as in~\cite{Hoare74}.
     1410With \textbf{scheduling} loosely defined as deciding which thread acquires the critical section next, \textbf{internal scheduling} means making the decision from inside the critical section (\ie with access to the shared state), while \textbf{external scheduling} means making the decision when entering the critical section (\ie without access to the shared state).
     1411Since internal scheduling within a single monitor is mostly a solved problem, this paper concentrates on extending internal scheduling to multiple monitors.
     1412Indeed, like the \textbf{bulk-acq} semantics, internal scheduling extends to multiple monitors in a way that is natural to the user but requires additional complexity on the implementation side.
    9981413
    9991414First, here is a simple example of internal scheduling:
    10001415
    1001 \begin{cfacode}
     1416\begin{cfa}
    10021417monitor A {
    10031418        condition e;
     
    10061421void foo(A& mutex a1, A& mutex a2) {
    10071422        ...
    1008         //Wait for cooperation from bar()
     1423        // Wait for cooperation from bar()
    10091424        wait(a1.e);
    10101425        ...
     
    10121427
    10131428void bar(A& mutex a1, A& mutex a2) {
    1014         //Provide cooperation for foo()
     1429        // Provide cooperation for foo()
    10151430        ...
    1016         //Unblock foo
     1431        // Unblock foo
    10171432        signal(a1.e);
    10181433}
    1019 \end{cfacode}
    1020 There are two details to note here. First, \code{signal} is a delayed operation; it only unblocks the waiting thread when it reaches the end of the critical section. This semantics is needed to respect mutual-exclusion, i.e., the signaller and signalled thread cannot be in the monitor simultaneously. The alternative is to return immediately after the call to \code{signal}, which is significantly more restrictive. Second, in \CFA, while it is common to store a \code{condition} as a field of the monitor, a \code{condition} variable can be stored/created independently of a monitor. Here routine \code{foo} waits for the \code{signal} from \code{bar} before making further progress, ensuring a basic ordering.
    1021 
    1022 An important aspect of the implementation is that \CFA does not allow barging, which means that once function \code{bar} releases the monitor, \code{foo} is guaranteed to be the next thread to acquire the monitor (unless some other thread waited on the same condition). This guarantee offers the benefit of not having to loop around waits to recheck that a condition is met. The main reason \CFA offers this guarantee is that users can easily introduce barging if it becomes a necessity but adding barging prevention or barging avoidance is more involved without language support. Supporting barging prevention as well as extending internal scheduling to multiple monitors is the main source of complexity in the design and implementation of \CFA concurrency.
     1434\end{cfa}
     1435There are two details to note here.
     1436First, @signal@ is a delayed operation; it only unblocks the waiting thread when it reaches the end of the critical section.
     1437This semantics is needed to respect mutual-exclusion, \ie the signaller and signalled thread cannot be in the monitor simultaneously.
     1438The alternative is to return immediately after the call to @signal@, which is significantly more restrictive.
     1439Second, in \CFA, while it is common to store a @condition@ as a field of the monitor, a @condition@ variable can be stored/created independently of a monitor.
     1440Here routine @foo@ waits for the @signal@ from @bar@ before making further progress, ensuring a basic ordering.
     1441
     1442An important aspect of the implementation is that \CFA does not allow barging, which means that once function @bar@ releases the monitor, @foo@ is guaranteed to be the next thread to acquire the monitor (unless some other thread waited on the same condition).
     1443This guarantee offers the benefit of not having to loop around waits to recheck that a condition is met.
     1444The main reason \CFA offers this guarantee is that users can easily introduce barging if it becomes a necessity but adding barging prevention or barging avoidance is more involved without language support.
     1445Supporting barging prevention as well as extending internal scheduling to multiple monitors is the main source of complexity in the design and implementation of \CFA concurrency.
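
As a sketch of what this guarantee buys (the buffer type and its fields are illustrative): because no barging can occur between the @signal@ and the waiter resuming, a single conditional check suffices where a barging-prone system needs a loop re-checking the condition.
\begin{cfa}
monitor buffer { condition not_empty; int count; };
void insert( buffer & mutex b ) {
        b.count += 1;
        signal( b.not_empty );  // delayed: takes effect at the end of the critical section
}
void remove( buffer & mutex b ) {
        if ( b.count == 0 ) wait( b.not_empty );        // no barging, so no re-check loop needed
        b.count -= 1;
}
\end{cfa}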
    10231446
    10241447% ======================================================================
     
    10271450% ======================================================================
    10281451% ======================================================================
    1029 It is easy to understand the problem of multi-monitor scheduling using a series of pseudo-code examples. Note that for simplicity in the following snippets of pseudo-code, waiting and signalling is done using an implicit condition variable, like Java built-in monitors. Indeed, \code{wait} statements always use the implicit condition variable as parameters and explicitly name the monitors (A and B) associated with the condition. Note that in \CFA, condition variables are tied to a \emph{group} of monitors on first use (called branding), which means that using internal scheduling with distinct sets of monitors requires one condition variable per set of monitors. The example below shows the simple case of having two threads (one for each column) and a single monitor A.
     1452It is easy to understand the problem of multi-monitor scheduling using a series of pseudo-code examples.
     1453Note that for simplicity in the following snippets of pseudo-code, waiting and signalling is done using an implicit condition variable, like Java built-in monitors.
     1454Indeed, @wait@ statements always use the implicit condition variable as parameters and explicitly name the monitors (A and B) associated with the condition.
     1455Note that in \CFA, condition variables are tied to a \emph{group} of monitors on first use (called branding), which means that using internal scheduling with distinct sets of monitors requires one condition variable per set of monitors.
     1456The example below shows the simple case of having two threads (one for each column) and a single monitor A.
    10301457
    10311458\begin{multicols}{2}
    10321459thread 1
    1033 \begin{pseudo}
     1460\begin{cfa}
    10341461acquire A
    10351462        wait A
    10361463release A
    1037 \end{pseudo}
     1464\end{cfa}
    10381465
    10391466\columnbreak
    10401467
    10411468thread 2
    1042 \begin{pseudo}
     1469\begin{cfa}
    10431470acquire A
    10441471        signal A
    10451472release A
    1046 \end{pseudo}
     1473\end{cfa}
    10471474\end{multicols}
    1048 One thread acquires before waiting (atomically blocking and releasing A) and the other acquires before signalling. It is important to note here that both \code{wait} and \code{signal} must be called with the proper monitor(s) already acquired. This semantic is a logical requirement for barging prevention.
     1475One thread acquires before waiting (atomically blocking and releasing A) and the other acquires before signalling.
     1476It is important to note here that both @wait@ and @signal@ must be called with the proper monitor(s) already acquired.
     1477This semantics is a logical requirement for barging prevention.
    10491478
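One possible \CFA rendering of this pseudo-code (routine names are illustrative; unlike the pseudo-code, \CFA requires an explicit condition variable):
\begin{cfa}
monitor A { condition c; };
void waiter   ( A & mutex a ) { wait( a.c ); }          // acquire A, atomically block and release it
void signaller( A & mutex a ) { signal( a.c ); }        // acquire A, unblock the waiter on exit
\end{cfa}
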
    10501479A direct extension of the previous example is a \textbf{bulk-acq} version:
    10511480\begin{multicols}{2}
    1052 \begin{pseudo}
     1481\begin{cfa}
    10531482acquire A & B
    10541483        wait A & B
    10551484release A & B
    1056 \end{pseudo}
     1485\end{cfa}
    10571486\columnbreak
    1058 \begin{pseudo}
     1487\begin{cfa}
    10591488acquire A & B
    10601489        signal A & B
    10611490release A & B
    1062 \end{pseudo}
     1491\end{cfa}
    10631492\end{multicols}
    1064 \noindent This version uses \textbf{bulk-acq} (denoted using the {\sf\&} symbol), but the presence of multiple monitors does not add a particularly new meaning. Synchronization happens between the two threads in exactly the same way and order. The only difference is that mutual exclusion covers a group of monitors. On the implementation side, handling multiple monitors does add a degree of complexity as the next few examples demonstrate.
    1065 
    1066 While deadlock issues can occur when nesting monitors, these issues are only a symptom of the fact that locks, and by extension monitors, are not perfectly composable. For monitors, a well-known deadlock problem is the Nested Monitor Problem~\cite{Lister77}, which occurs when a \code{wait} is made by a thread that holds more than one monitor. For example, the following pseudo-code runs into the nested-monitor problem:
     1493\noindent This version uses \textbf{bulk-acq} (denoted using the {\sf\&} symbol), but the presence of multiple monitors does not add a particularly new meaning.
     1494Synchronization happens between the two threads in exactly the same way and order.
     1495The only difference is that mutual exclusion covers a group of monitors.
     1496On the implementation side, handling multiple monitors does add a degree of complexity as the next few examples demonstrate.
     1497
     1498While deadlock issues can occur when nesting monitors, these issues are only a symptom of the fact that locks, and by extension monitors, are not perfectly composable.
     1499For monitors, a well-known deadlock problem is the Nested Monitor Problem~\cite{Lister77}, which occurs when a @wait@ is made by a thread that holds more than one monitor.
     1500For example, the following pseudo-code runs into the nested-monitor problem:
    10671501\begin{multicols}{2}
    1068 \begin{pseudo}
     1502\begin{cfa}
    10691503acquire A
    10701504        acquire B
     
    10721506        release B
    10731507release A
    1074 \end{pseudo}
     1508\end{cfa}
    10751509
    10761510\columnbreak
    10771511
    1078 \begin{pseudo}
     1512\begin{cfa}
    10791513acquire A
    10801514        acquire B
     
    10821516        release B
    10831517release A
    1084 \end{pseudo}
     1518\end{cfa}
    10851519\end{multicols}
    1086 \noindent The \code{wait} only releases monitor \code{B} so the signalling thread cannot acquire monitor \code{A} to get to the \code{signal}. Attempting release of all acquired monitors at the \code{wait} introduces a different set of problems, such as releasing monitor \code{C}, which has nothing to do with the \code{signal}.
    1087 
    1088 However, for monitors as for locks, it is possible to write a program using nesting without encountering any problems if nesting is done correctly. For example, the next pseudo-code snippet acquires monitors {\sf A} then {\sf B} before waiting, while only acquiring {\sf B} when signalling, effectively avoiding the Nested Monitor Problem~\cite{Lister77}.
     1520\noindent The @wait@ only releases monitor @B@ so the signalling thread cannot acquire monitor @A@ to get to the @signal@.
     1521Attempting release of all acquired monitors at the @wait@ introduces a different set of problems, such as releasing monitor @C@, which has nothing to do with the @signal@.
     1522
     1523However, for monitors as for locks, it is possible to write a program using nesting without encountering any problems if nesting is done correctly.
     1524For example, the next pseudo-code snippet acquires monitors {\sf A} then {\sf B} before waiting, while only acquiring {\sf B} when signalling, effectively avoiding the Nested Monitor Problem~\cite{Lister77}.
    10891525
    10901526\begin{multicols}{2}
    1091 \begin{pseudo}
     1527\begin{cfa}
    10921528acquire A
    10931529        acquire B
     
    10951531        release B
    10961532release A
    1097 \end{pseudo}
     1533\end{cfa}
    10981534
    10991535\columnbreak
    11001536
    1101 \begin{pseudo}
     1537\begin{cfa}
    11021538
    11031539acquire B
     
    11051541release B
    11061542
    1107 \end{pseudo}
     1543\end{cfa}
    11081544\end{multicols}
    11091545
     
    11161552% ======================================================================
    11171553
    1118 A larger example is presented to show complex issues for \textbf{bulk-acq} and its implementation options are analyzed. Listing \ref{lst:int-bulk-pseudo} shows an example where \textbf{bulk-acq} adds a significant layer of complexity to the internal signalling semantics, and listing \ref{lst:int-bulk-cfa} shows the corresponding \CFA code to implement the pseudo-code in listing \ref{lst:int-bulk-pseudo}. For the purpose of translating the given pseudo-code into \CFA-code, any method of introducing a monitor is acceptable, e.g., \code{mutex} parameters, global variables, pointer parameters, or using locals with the \code{mutex} statement.
    1119 
    1120 \begin{figure}[!t]
     1554A larger example is presented to show complex issues for \textbf{bulk-acq} and its implementation options are analyzed.
     1555Figure~\ref{f:int-bulk-cfa} shows an example where \textbf{bulk-acq} adds a significant layer of complexity to the internal signalling semantics: the pseudo-code at the top is followed by the corresponding \CFA code that implements it.
     1556For the purpose of translating the pseudo-code into \CFA code, any method of introducing a monitor is acceptable, \eg @mutex@ parameters, global variables, pointer parameters, or using locals with the @mutex@ statement.
     1557
     1558\begin{figure}
    11211559\begin{multicols}{2}
    11221560Waiting thread
    1123 \begin{pseudo}[numbers=left]
     1561\begin{cfa}[numbers=left]
    11241562acquire A
    1125         //Code Section 1
     1563        // Code Section 1
    11261564        acquire A & B
    1127                 //Code Section 2
     1565                // Code Section 2
    11281566                wait A & B
    1129                 //Code Section 3
     1567                // Code Section 3
    11301568        release A & B
    1131         //Code Section 4
     1569        // Code Section 4
    11321570release A
    1133 \end{pseudo}
     1571\end{cfa}
    11341572\columnbreak
    11351573Signalling thread
    1136 \begin{pseudo}[numbers=left, firstnumber=10,escapechar=|]
     1574\begin{cfa}[numbers=left, firstnumber=10,escapechar=|]
    11371575acquire A
    1138         //Code Section 5
     1576        // Code Section 5
    11391577        acquire A & B
    1140                 //Code Section 6
     1578                // Code Section 6
    11411579                |\label{line:signal1}|signal A & B
    1142                 //Code Section 7
     1580                // Code Section 7
    11431581        |\label{line:releaseFirst}|release A & B
    1144         //Code Section 8
     1582        // Code Section 8
    11451583|\label{line:lastRelease}|release A
    1146 \end{pseudo}
     1584\end{cfa}
    11471585\end{multicols}
    1148 \begin{cfacode}[caption={Internal scheduling with \textbf{bulk-acq}},label={lst:int-bulk-pseudo}]
    1149 \end{cfacode}
     1586\begin{cfa}[caption={Internal scheduling with \textbf{bulk-acq}},label={f:int-bulk-cfa}]
     1587\end{cfa}
    11501588\begin{center}
    1151 \begin{cfacode}[xleftmargin=.4\textwidth]
     1589\begin{cfa}[xleftmargin=.4\textwidth]
    11521590monitor A a;
    11531591monitor B b;
    11541592condition c;
    1155 \end{cfacode}
     1593\end{cfa}
    11561594\end{center}
    11571595\begin{multicols}{2}
    11581596Waiting thread
    1159 \begin{cfacode}
     1597\begin{cfa}
    11601598mutex(a) {
    1161         //Code Section 1
     1599        // Code Section 1
    11621600        mutex(a, b) {
    1163                 //Code Section 2
     1601                // Code Section 2
    11641602                wait(c);
    1165                 //Code Section 3
     1603                // Code Section 3
    11661604        }
    1167         //Code Section 4
    1168 }
    1169 \end{cfacode}
     1605        // Code Section 4
     1606}
     1607\end{cfa}
    11701608\columnbreak
    11711609Signalling thread
    1172 \begin{cfacode}
     1610\begin{cfa}
    11731611mutex(a) {
    1174         //Code Section 5
     1612        // Code Section 5
    11751613        mutex(a, b) {
    1176                 //Code Section 6
     1614                // Code Section 6
    11771615                signal(c);
    1178                 //Code Section 7
     1616                // Code Section 7
    11791617        }
    1180         //Code Section 8
    1181 }
    1182 \end{cfacode}
     1618        // Code Section 8
     1619}
     1620\end{cfa}
    11831621\end{multicols}
    1184 \begin{cfacode}[caption={Equivalent \CFA code for listing \ref{lst:int-bulk-pseudo}},label={lst:int-bulk-cfa}]
    1185 \end{cfacode}
     1622\begin{cfa}[caption={Equivalent \CFA code for listing \ref{f:int-bulk-cfa}},label={f:int-bulk-cfa}]
     1623\end{cfa}
    11861624\begin{multicols}{2}
    11871625Waiter
    1188 \begin{pseudo}[numbers=left]
     1626\begin{cfa}[numbers=left]
    11891627acquire A
    11901628        acquire A & B
     
    11921630        release A & B
    11931631release A
    1194 \end{pseudo}
     1632\end{cfa}
    11951633
    11961634\columnbreak
    11971635
    11981636Signaller
    1199 \begin{pseudo}[numbers=left, firstnumber=6,escapechar=|]
     1637\begin{cfa}[numbers=left, firstnumber=6,escapechar=|]
    12001638acquire A
    12011639        acquire A & B
    12021640                signal A & B
    12031641        release A & B
    1204         |\label{line:secret}|//Secretly keep B here
     1642        |\label{line:secret}|// Secretly keep B here
    12051643release A
    1206 //Wakeup waiter and transfer A & B
    1207 \end{pseudo}
     1644// Wakeup waiter and transfer A & B
     1645\end{cfa}
    12081646\end{multicols}
    1209 \begin{cfacode}[caption={Listing \ref{lst:int-bulk-pseudo}, with delayed signalling comments},label={lst:int-secret}]
    1210 \end{cfacode}
     1647\begin{cfa}[caption={Figure~\ref{f:int-bulk-cfa}, with delayed signalling comments},label={f:int-secret}]
     1648\end{cfa}
    12111649\end{figure}
    12121650
    1213 The complexity begins at code sections 4 and 8 in listing \ref{lst:int-bulk-pseudo}, which are where the existing semantics of internal scheduling needs to be extended for multiple monitors. The root of the problem is that \textbf{bulk-acq} is used in a context where one of the monitors is already acquired, which is why it is important to define the behaviour of the previous pseudo-code. When the signaller thread reaches the location where it should ``release \code{A & B}'' (listing \ref{lst:int-bulk-pseudo} line \ref{line:releaseFirst}), it must actually transfer ownership of monitor \code{B} to the waiting thread. This ownership transfer is required in order to prevent barging into \code{B} by another thread, since both the signalling and signalled threads still need monitor \code{A}. There are three options:
     1651The complexity begins at code sections 4 and 8 in listing \ref{f:int-bulk-cfa}, which are where the existing semantics of internal scheduling needs to be extended for multiple monitors.
     1652The root of the problem is that \textbf{bulk-acq} is used in a context where one of the monitors is already acquired, which is why it is important to define the behaviour of the previous pseudo-code.
     1653When the signaller thread reaches the location where it should ``release @A & B@'' (listing \ref{f:int-bulk-cfa} line \ref{line:releaseFirst}), it must actually transfer ownership of monitor @B@ to the waiting thread.
     1654This ownership transfer is required in order to prevent barging into @B@ by another thread, since both the signalling and signalled threads still need monitor @A@.
     1655There are three options:
    12141656
    12151657\subsubsection{Delaying Signals}
    1216 The obvious solution to the problem of multi-monitor scheduling is to keep ownership of all locks until the last lock is ready to be transferred. It can be argued that that moment is when the last lock is no longer needed, because this semantics fits most closely to the behaviour of single-monitor scheduling. This solution has the main benefit of transferring ownership of groups of monitors, which simplifies the semantics from multiple objects to a single group of objects, effectively making the existing single-monitor semantic viable by simply changing monitors to monitor groups. This solution releases the monitors once every monitor in a group can be released. However, since some monitors are never released (e.g., the monitor of a thread), this interpretation means a group might never be released. A more interesting interpretation is to transfer the group until all its monitors are released, which means the group is not passed further and a thread can retain its locks.
    1217 
    1218 However, listing \ref{lst:int-secret} shows this solution can become much more complicated depending on what is executed while secretly holding B at line \ref{line:secret}, while avoiding the need to transfer ownership of a subset of the condition monitors. Listing \ref{lst:dependency} shows a slightly different example where a third thread is waiting on monitor \code{A}, using a different condition variable. Because the third thread is signalled when secretly holding \code{B}, the goal  becomes unreachable. Depending on the order of signals (listing \ref{lst:dependency} line \ref{line:signal-ab} and \ref{line:signal-a}) two cases can happen:
    1219 
    1220 \paragraph{Case 1: thread $\alpha$ goes first.} In this case, the problem is that monitor \code{A} needs to be passed to thread $\beta$ when thread $\alpha$ is done with it.
    1221 \paragraph{Case 2: thread $\beta$ goes first.} In this case, the problem is that monitor \code{B} needs to be retained and passed to thread $\alpha$ along with monitor \code{A}, which can be done directly or possibly using thread $\beta$ as an intermediate.
     1658The obvious solution to the problem of multi-monitor scheduling is to keep ownership of all locks until the last lock is ready to be transferred.
     1659It can be argued that that moment is when the last lock is no longer needed, because this semantics fits most closely to the behaviour of single-monitor scheduling.
     1660This solution has the main benefit of transferring ownership of groups of monitors, which simplifies the semantics from multiple objects to a single group of objects, effectively making the existing single-monitor semantic viable by simply changing monitors to monitor groups.
     1661This solution releases the monitors once every monitor in a group can be released.
     1662However, since some monitors are never released (\eg the monitor of a thread), this interpretation means a group might never be released.
     1663A more interesting interpretation is to transfer the group until all its monitors are released, which means the group is not passed further and a thread can retain its locks.
     1664
     1665However, listing \ref{f:int-secret} shows this solution can become much more complicated depending on what is executed while secretly holding B at line \ref{line:secret}, while avoiding the need to transfer ownership of a subset of the condition monitors.
     1666Figure~\ref{f:dependency} shows a slightly different example where a third thread is waiting on monitor @A@, using a different condition variable.
     1667Because the third thread is signalled when secretly holding @B@, the goal becomes unreachable.
     1668Depending on the order of signals (listing \ref{f:dependency} lines \ref{line:signal-ab} and \ref{line:signal-a}) two cases can happen:
     1669
     1670\paragraph{Case 1: thread $\alpha$ goes first.} In this case, the problem is that monitor @A@ needs to be passed to thread $\beta$ when thread $\alpha$ is done with it.
     1671\paragraph{Case 2: thread $\beta$ goes first.} In this case, the problem is that monitor @B@ needs to be retained and passed to thread $\alpha$ along with monitor @A@, which can be done directly or possibly using thread $\beta$ as an intermediate.
    12221672\\
    12231673
    1224 Note that ordering is not determined by a race condition but by whether signalled threads are enqueued in FIFO or FILO order. However, regardless of the answer, users can move line \ref{line:signal-a} before line \ref{line:signal-ab} and get the reverse effect for listing \ref{lst:dependency}.
     1674Note that ordering is not determined by a race condition but by whether signalled threads are enqueued in FIFO or FILO order.
     1675However, regardless of the answer, users can move line \ref{line:signal-a} before line \ref{line:signal-ab} and get the reverse effect for listing \ref{f:dependency}.
    12251676
    12261677In both cases, the threads need to be able to distinguish, on a per monitor basis, which ones need to be released and which ones need to be transferred, which means knowing when to release a group becomes complex and inefficient (see next section) and therefore effectively precludes this approach.
     
    12321683\begin{multicols}{3}
    12331684Thread $\alpha$
    1234 \begin{pseudo}[numbers=left, firstnumber=1]
     1685\begin{cfa}[numbers=left, firstnumber=1]
    12351686acquire A
    12361687        acquire A & B
     
    12381689        release A & B
    12391690release A
    1240 \end{pseudo}
     1691\end{cfa}
    12411692\columnbreak
    12421693Thread $\gamma$
    1243 \begin{pseudo}[numbers=left, firstnumber=6, escapechar=|]
     1694\begin{cfa}[numbers=left, firstnumber=6, escapechar=|]
    12441695acquire A
    12451696        acquire A & B
     
    12481699        |\label{line:signal-a}|signal A
    12491700|\label{line:release-a}|release A
    1250 \end{pseudo}
     1701\end{cfa}
    12511702\columnbreak
    12521703Thread $\beta$
    1253 \begin{pseudo}[numbers=left, firstnumber=12, escapechar=|]
     1704\begin{cfa}[numbers=left, firstnumber=12, escapechar=|]
    12541705acquire A
    12551706        wait A
    12561707|\label{line:release-aa}|release A
    1257 \end{pseudo}
     1708\end{cfa}
    12581709\end{multicols}
    1259 \begin{cfacode}[caption={Pseudo-code for the three thread example.},label={lst:dependency}]
    1260 \end{cfacode}
     1710\begin{cfa}[caption={Pseudo-code for the three thread example.},label={f:dependency}]
     1711\end{cfa}
    12611712\begin{center}
    12621713\input{dependency}
    12631714\end{center}
    1264 \caption{Dependency graph of the statements in listing \ref{lst:dependency}}
     1715\caption{Dependency graph of the statements in listing \ref{f:dependency}}
    12651716\label{fig:dependency}
    12661717\end{figure}
    12671718
    1268 In listing \ref{lst:int-bulk-pseudo}, there is a solution that satisfies both barging prevention and mutual exclusion. If ownership of both monitors is transferred to the waiter when the signaller releases \code{A & B} and then the waiter transfers back ownership of \code{A} back to the signaller when it releases it, then the problem is solved (\code{B} is no longer in use at this point). Dynamically finding the correct order is therefore the second possible solution. The problem is effectively resolving a dependency graph of ownership requirements. Here even the simplest of code snippets requires two transfers and has a super-linear complexity. This complexity can be seen in listing \ref{lst:explosion}, which is just a direct extension to three monitors, requires at least three ownership transfer and has multiple solutions. Furthermore, the presence of multiple solutions for ownership transfer can cause deadlock problems if a specific solution is not consistently picked; In the same way that multiple lock acquiring order can cause deadlocks.
     1719In listing \ref{f:int-bulk-cfa}, there is a solution that satisfies both barging prevention and mutual exclusion.
     1720If ownership of both monitors is transferred to the waiter when the signaller releases @A & B@, and the waiter then transfers ownership of @A@ back to the signaller when it releases it, then the problem is solved (@B@ is no longer in use at this point).
     1721Dynamically finding the correct order is therefore the second possible solution.
     1722The problem is effectively resolving a dependency graph of ownership requirements.
     1723Here even the simplest of code snippets requires two transfers and has a super-linear complexity.
     1724This complexity can be seen in listing \ref{f:explosion}, which, although just a direct extension to three monitors, requires at least three ownership transfers and has multiple solutions.
     1725Furthermore, the presence of multiple solutions for ownership transfer can cause deadlock problems if a specific solution is not consistently picked, in the same way that multiple lock-acquisition orders can cause deadlocks.
    12691726\begin{figure}
    12701727\begin{multicols}{2}
    1271 \begin{pseudo}
     1728\begin{cfa}
    12721729acquire A
    12731730        acquire B
     
    12771734        release B
    12781735release A
    1279 \end{pseudo}
     1736\end{cfa}
    12801737
    12811738\columnbreak
    12821739
    1283 \begin{pseudo}
     1740\begin{cfa}
    12841741acquire A
    12851742        acquire B
     
    12891746        release B
    12901747release A
    1291 \end{pseudo}
     1748\end{cfa}
    12921749\end{multicols}
    1293 \begin{cfacode}[caption={Extension to three monitors of listing \ref{lst:int-bulk-pseudo}},label={lst:explosion}]
    1294 \end{cfacode}
     1750\begin{cfa}[caption={Extension to three monitors of listing \ref{f:int-bulk-cfa}},label={f:explosion}]
     1751\end{cfa}
    12951752\end{figure}
    12961753
    1297 Given the three threads example in listing \ref{lst:dependency}, figure \ref{fig:dependency} shows the corresponding dependency graph that results, where every node is a statement of one of the three threads, and the arrows the dependency of that statement (e.g., $\alpha1$ must happen before $\alpha2$). The extra challenge is that this dependency graph is effectively post-mortem, but the runtime system needs to be able to build and solve these graphs as the dependencies unfold. Resolving dependency graphs being a complex and expensive endeavour, this solution is not the preferred one.
     1754Given the three-thread example in listing \ref{f:dependency}, figure \ref{fig:dependency} shows the corresponding dependency graph, where every node is a statement of one of the three threads and the arrows show the dependencies of that statement (\eg $\alpha1$ must happen before $\alpha2$).
     1755The extra challenge is that this dependency graph is effectively post-mortem, but the runtime system needs to be able to build and solve these graphs as the dependencies unfold.
     1756Resolving dependency graphs being a complex and expensive endeavour, this solution is not the preferred one.
    12981757
    12991758\subsubsection{Partial Signalling} \label{partial-sig}
    1300 Finally, the solution that is chosen for \CFA is to use partial signalling. Again using listing \ref{lst:int-bulk-pseudo}, the partial signalling solution transfers ownership of monitor \code{B} at lines \ref{line:signal1} to the waiter but does not wake the waiting thread since it is still using monitor \code{A}. Only when it reaches line \ref{line:lastRelease} does it actually wake up the waiting thread. This solution has the benefit that complexity is encapsulated into only two actions: passing monitors to the next owner when they should be released and conditionally waking threads if all conditions are met. This solution has a much simpler implementation than a dependency graph solving algorithms, which is why it was chosen. Furthermore, after being fully implemented, this solution does not appear to have any significant downsides.
    1301 
    1302 Using partial signalling, listing \ref{lst:dependency} can be solved easily:
     1759Finally, the solution that is chosen for \CFA is to use partial signalling.
     1760Again using listing \ref{f:int-bulk-cfa}, the partial signalling solution transfers ownership of monitor @B@ at line \ref{line:signal1} to the waiter but does not wake the waiting thread since the signaller is still using monitor @A@.
     1761Only when it reaches line \ref{line:lastRelease} does it actually wake up the waiting thread.
     1762This solution has the benefit that complexity is encapsulated into only two actions: passing monitors to the next owner when they should be released and conditionally waking threads if all conditions are met.
     1763This solution has a much simpler implementation than a dependency-graph solving algorithm, which is why it was chosen.
     1764Furthermore, after being fully implemented, this solution does not appear to have any significant downsides.
     1765
     1766Using partial signalling, listing \ref{f:dependency} can be solved easily:
    13031767\begin{itemize}
    1304         \item When thread $\gamma$ reaches line \ref{line:release-ab} it transfers monitor \code{B} to thread $\alpha$ and continues to hold monitor \code{A}.
    1305         \item When thread $\gamma$ reaches line \ref{line:release-a}  it transfers monitor \code{A} to thread $\beta$  and wakes it up.
    1306         \item When thread $\beta$  reaches line \ref{line:release-aa} it transfers monitor \code{A} to thread $\alpha$ and wakes it up.
     1768        \item When thread $\gamma$ reaches line \ref{line:release-ab} it transfers monitor @B@ to thread $\alpha$ and continues to hold monitor @A@.
     1769        \item When thread $\gamma$ reaches line \ref{line:release-a}  it transfers monitor @A@ to thread $\beta$  and wakes it up.
     1770        \item When thread $\beta$  reaches line \ref{line:release-aa} it transfers monitor @A@ to thread $\alpha$ and wakes it up.
    13071771\end{itemize}
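
The following is a hypothetical C-style sketch of the two actions described above, passing a monitor to its next owner on release and waking a thread only once it owns all its monitors (all type and routine names are invented for illustration and do not correspond to the actual \CFA runtime):
\begin{cfa}
struct mon_state { struct thr_state * owner; struct thr_state * successor; };
struct thr_state { int pending; };      // monitors still owed to this blocked thread

void release_one( struct mon_state * m ) {
        if ( m->successor ) {                   // monitor promised to a signalled thread?
                m->owner = m->successor;        // transfer ownership without unlocking
                m->successor = 0;
                if ( --m->owner->pending == 0 ) {
                        // the waiter now owns every monitor it needs: wake it up
                }
        } else {
                // normal case: actually unlock the monitor
        }
}
\end{cfa}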
    13081772
     
    13141778\begin{table}
    13151779\begin{tabular}{|c|c|}
    1316 \code{signal} & \code{signal_block} \\
     1780@signal@ & @signal_block@ \\
    13171781\hline
    1318 \begin{cfacode}[tabsize=3]
    1319 monitor DatingService
    1320 {
    1321         //compatibility codes
     1782\begin{cfa}[tabsize=3]
     1783monitor DatingService {
     1784        // compatibility codes
    13221785        enum{ CCodes = 20 };
    13231786
     
    13301793condition exchange;
    13311794
    1332 int girl(int phoneNo, int ccode)
    1333 {
    1334         //no compatible boy ?
    1335         if(empty(boys[ccode]))
    1336         {
    1337                 //wait for boy
    1338                 wait(girls[ccode]);
    1339 
    1340                 //make phone number available
    1341                 girlPhoneNo = phoneNo;
    1342 
    1343                 //wake boy from chair
    1344                 signal(exchange);
    1345         }
    1346         else
    1347         {
    1348                 //make phone number available
    1349                 girlPhoneNo = phoneNo;
    1350 
    1351                 //wake boy
    1352                 signal(boys[ccode]);
    1353 
    1354                 //sit in chair
    1355                 wait(exchange);
      1795int girl(int phoneNo, int ccode) {
     1796        // no compatible boy ?
      1797        if(empty(boys[ccode])) {
      1798                wait(girls[ccode]);             // wait for boy
     1799                girlPhoneNo = phoneNo;          // make phone number available
     1800                signal(exchange);               // wake boy from chair
     1801        } else {
     1802                girlPhoneNo = phoneNo;          // make phone number available
      1803                signal(boys[ccode]);            // wake boy
     1804                wait(exchange);         // sit in chair
    13561805        }
    13571806        return boyPhoneNo;
    13581807}
    1359 
    1360 int boy(int phoneNo, int ccode)
    1361 {
    1362         //same as above
    1363         //with boy/girl interchanged
    1364 }
    1365 \end{cfacode}&\begin{cfacode}[tabsize=3]
    1366 monitor DatingService
    1367 {
    1368         //compatibility codes
    1369         enum{ CCodes = 20 };
      1808int boy(int phoneNo, int ccode) {
     1809        // same as above
     1810        // with boy/girl interchanged
     1811}
     1812\end{cfa}&\begin{cfa}[tabsize=3]
     1813monitor DatingService {
     1814
     1815        enum{ CCodes = 20 };    // compatibility codes
    13701816
    13711817        int girlPhoneNo;
     
    13751821condition girls[CCodes];
    13761822condition boys [CCodes];
    1377 //exchange is not needed
    1378 
    1379 int girl(int phoneNo, int ccode)
    1380 {
    1381         //no compatible boy ?
    1382         if(empty(boys[ccode]))
    1383         {
    1384                 //wait for boy
    1385                 wait(girls[ccode]);
    1386 
    1387                 //make phone number available
    1388                 girlPhoneNo = phoneNo;
    1389 
    1390                 //wake boy from chair
    1391                 signal(exchange);
    1392         }
    1393         else
    1394         {
    1395                 //make phone number available
    1396                 girlPhoneNo = phoneNo;
    1397 
    1398                 //wake boy
    1399                 signal_block(boys[ccode]);
    1400 
    1401                 //second handshake unnecessary
     1823// exchange is not needed
     1824
      1825int girl(int phoneNo, int ccode) {
     1826        // no compatible boy ?
      1827        if(empty(boys[ccode])) {
      1828                wait(girls[ccode]);             // wait for boy
     1829                girlPhoneNo = phoneNo;          // make phone number available
      1830                // no exchange signal needed: boy never sits in chair
     1831        } else {
     1832                girlPhoneNo = phoneNo;          // make phone number available
      1833                signal_block(boys[ccode]);              // wake boy
     1834
     1835                // second handshake unnecessary
    14021836
    14031837        }
     
    14051839}
    14061840
    1407 int boy(int phoneNo, int ccode)
    1408 {
    1409         //same as above
    1410         //with boy/girl interchanged
    1411 }
    1412 \end{cfacode}
      1841int boy(int phoneNo, int ccode) {
     1842        // same as above
     1843        // with boy/girl interchanged
     1844}
     1845\end{cfa}
    14131846\end{tabular}
    1414 \caption{Dating service example using \code{signal} and \code{signal_block}. }
     1847\caption{Dating service example using \protect\lstinline|signal| and \protect\lstinline|signal_block|. }
    14151848\label{tbl:datingservice}
    14161849\end{table}
    1417 An important note is that, until now, signalling a monitor was a delayed operation. The ownership of the monitor is transferred only when the monitor would have otherwise been released, not at the point of the \code{signal} statement. However, in some cases, it may be more convenient for users to immediately transfer ownership to the thread that is waiting for cooperation, which is achieved using the \code{signal_block} routine.
    1418 
    1419 The example in table \ref{tbl:datingservice} highlights the difference in behaviour. As mentioned, \code{signal} only transfers ownership once the current critical section exits; this behaviour requires additional synchronization when a two-way handshake is needed. To avoid this explicit synchronization, the \code{condition} type offers the \code{signal_block} routine, which handles the two-way handshake as shown in the example. This feature removes the need for a second condition variables and simplifies programming. Like every other monitor semantic, \code{signal_block} uses barging prevention, which means mutual-exclusion is baton-passed both on the front end and the back end of the call to \code{signal_block}, meaning no other thread can acquire the monitor either before or after the call.
     1850An important note is that, until now, signalling a monitor was a delayed operation.
     1851The ownership of the monitor is transferred only when the monitor would have otherwise been released, not at the point of the @signal@ statement.
     1852However, in some cases, it may be more convenient for users to immediately transfer ownership to the thread that is waiting for cooperation, which is achieved using the @signal_block@ routine.
     1853
     1854The example in table \ref{tbl:datingservice} highlights the difference in behaviour.
     1855As mentioned, @signal@ only transfers ownership once the current critical section exits; this behaviour requires additional synchronization when a two-way handshake is needed.
     1856To avoid this explicit synchronization, the @condition@ type offers the @signal_block@ routine, which handles the two-way handshake as shown in the example.
      1857This feature removes the need for a second condition variable and simplifies programming.
      1858Like every other monitor semantic, @signal_block@ uses barging prevention, which means mutual-exclusion is baton-passed both on the front end and the back end of the call to @signal_block@, so no other thread can acquire the monitor either before or after the call.
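The following minimal sketch, with illustrative names rather than code from the dating-service figure, distils why @signal@ needs a second condition variable for a two-way handshake while @signal_block@ does not.
\begin{cfa}
monitor M { int request, reply; };
condition c;

void worker( M & mutex m ) {
	wait( c );                      // wait for a request
	m.reply = m.request * 2;        // produce the answer
}
int ask( M & mutex m, int r ) {
	m.request = r;
	signal( c );                    // delayed: worker only runs after ask releases the monitor
	return m.reply;                 // stale -- a second condition is needed to wait for the answer
}
int ask_block( M & mutex m, int r ) {
	m.request = r;
	signal_block( c );              // immediate: worker runs to completion first
	return m.reply;                 // valid -- no extra handshake required
}
\end{cfa}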
    14201859
    14211860% ======================================================================
     
    14291868Internal Scheduling & External Scheduling & Go\\
    14301869\hline
    1431 \begin{ucppcode}[tabsize=3]
     1870\begin{uC++}[tabsize=3]
    14321871_Monitor Semaphore {
    14331872        condition c;
     
    14441883        }
    14451884}
    1446 \end{ucppcode}&\begin{ucppcode}[tabsize=3]
     1885\end{uC++}&\begin{uC++}[tabsize=3]
    14471886_Monitor Semaphore {
    14481887
     
    14591898        }
    14601899}
    1461 \end{ucppcode}&\begin{gocode}[tabsize=3]
     1900\end{uC++}&\begin{Go}[tabsize=3]
    14621901type MySem struct {
    14631902        inUse bool
     
    14791918        s.inUse = false
    14801919
    1481         //This actually deadlocks
    1482         //when single thread
     1920        // This actually deadlocks
      1921        // when single-threaded
    14831922        s.c <- false
    14841923}
    1485 \end{gocode}
     1924\end{Go}
    14861925\end{tabular}
    14871926\caption{Different forms of scheduling.}
    14881927\label{tbl:sched}
    14891928\end{table}
    1490 This method is more constrained and explicit, which helps users reduce the non-deterministic nature of concurrency. Indeed, as the following examples demonstrate, external scheduling allows users to wait for events from other threads without the concern of unrelated events occurring. External scheduling can generally be done either in terms of control flow (e.g., Ada with \code{accept}, \uC with \code{_Accept}) or in terms of data (e.g., Go with channels). Of course, both of these paradigms have their own strengths and weaknesses, but for this project, control-flow semantics was chosen to stay consistent with the rest of the languages semantics. Two challenges specific to \CFA arise when trying to add external scheduling with loose object definitions and multiple-monitor routines. The previous example shows a simple use \code{_Accept} versus \code{wait}/\code{signal} and its advantages. Note that while other languages often use \code{accept}/\code{select} as the core external scheduling keyword, \CFA uses \code{waitfor} to prevent name collisions with existing socket \textbf{api}s.
    1491 
    1492 For the \code{P} member above using internal scheduling, the call to \code{wait} only guarantees that \code{V} is the last routine to access the monitor, allowing a third routine, say \code{isInUse()}, acquire mutual exclusion several times while routine \code{P} is waiting. On the other hand, external scheduling guarantees that while routine \code{P} is waiting, no other routine than \code{V} can acquire the monitor.
     1929This method is more constrained and explicit, which helps users reduce the non-deterministic nature of concurrency.
     1930Indeed, as the following examples demonstrate, external scheduling allows users to wait for events from other threads without the concern of unrelated events occurring.
     1931External scheduling can generally be done either in terms of control flow (\eg Ada with @accept@, \uC with @_Accept@) or in terms of data (\eg Go with channels).
      1932Of course, both of these paradigms have their own strengths and weaknesses, but for this project, control-flow semantics was chosen to stay consistent with the rest of the language's semantics.
     1933Two challenges specific to \CFA arise when trying to add external scheduling with loose object definitions and multiple-monitor routines.
      1934The previous example shows a simple use of @_Accept@ versus @wait@/@signal@ and its advantages.
     1935Note that while other languages often use @accept@/@select@ as the core external scheduling keyword, \CFA uses @waitfor@ to prevent name collisions with existing socket \textbf{api}s.
     1936
      1937For the @P@ member above using internal scheduling, the call to @wait@ only guarantees that @V@ is the last routine to access the monitor, allowing a third routine, say @isInUse()@, to acquire mutual exclusion several times while routine @P@ is waiting.
     1938On the other hand, external scheduling guarantees that while routine @P@ is waiting, no other routine than @V@ can acquire the monitor.
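For comparison, a \CFA version of the external-scheduling column could be sketched with @waitfor@ as follows; the representation and routine bodies are illustrative assumptions rather than library code.
\begin{cfa}
monitor Semaphore {
	bool inUse;
};
void V( Semaphore & mutex s ) {
	s.inUse = false;
}
void P( Semaphore & mutex s ) {
	if( s.inUse ) waitfor( V, s ); // accept only a call to V, then continue
	s.inUse = true;
}
\end{cfa}
While @P@ blocks in the @waitfor@, only a call to @V@ can enter the monitor, which is exactly the guarantee discussed above.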
    14931939
    14941940% ======================================================================
     
    14971943% ======================================================================
    14981944% ======================================================================
    1499 In \uC, a monitor class declaration includes an exhaustive list of monitor operations. Since \CFA is not object oriented, monitors become both more difficult to implement and less clear for a user:
    1500 
    1501 \begin{cfacode}
     1945In \uC, a monitor class declaration includes an exhaustive list of monitor operations.
     1946Since \CFA is not object oriented, monitors become both more difficult to implement and less clear for a user:
     1947
     1948\begin{cfa}
    15021949monitor A {};
    15031950
    15041951void f(A & mutex a);
    15051952void g(A & mutex a) {
    1506         waitfor(f); //Obvious which f() to wait for
    1507 }
    1508 
    1509 void f(A & mutex a, int); //New different F added in scope
     1953        waitfor(f); // Obvious which f() to wait for
     1954}
     1955
     1956void f(A & mutex a, int); // New different F added in scope
    15101957void h(A & mutex a) {
    1511         waitfor(f); //Less obvious which f() to wait for
    1512 }
    1513 \end{cfacode}
    1514 
    1515 Furthermore, external scheduling is an example where implementation constraints become visible from the interface. Here is the pseudo-code for the entering phase of a monitor:
     1958        waitfor(f); // Less obvious which f() to wait for
     1959}
     1960\end{cfa}
     1961
     1962Furthermore, external scheduling is an example where implementation constraints become visible from the interface.
      1963Here is the pseudo-code for the entering phase of a monitor:
    15161964\begin{center}
    15171965\begin{tabular}{l}
    1518 \begin{pseudo}
     1966\begin{cfa}
    15191967        if monitor is free
    15201968                enter
     
    15251973        else
    15261974                block
    1527 \end{pseudo}
     1975\end{cfa}
    15281976\end{tabular}
    15291977\end{center}
    1530 For the first two conditions, it is easy to implement a check that can evaluate the condition in a few instructions. However, a fast check for \pscode{monitor accepts me} is much harder to implement depending on the constraints put on the monitors. Indeed, monitors are often expressed as an entry queue and some acceptor queue as in Figure~\ref{fig:ClassicalMonitor}.
     1978For the first two conditions, it is easy to implement a check that can evaluate the condition in a few instructions.
     1979However, a fast check for @monitor accepts me@ is much harder to implement depending on the constraints put on the monitors.
     1980Indeed, monitors are often expressed as an entry queue and some acceptor queue as in Figure~\ref{fig:ClassicalMonitor}.
    15311981
    15321982\begin{figure}
     
    15441994\end{figure}
    15451995
    1546 There are other alternatives to these pictures, but in the case of the left picture, implementing a fast accept check is relatively easy. Restricted to a fixed number of mutex members, N, the accept check reduces to updating a bitmask when the acceptor queue changes, a check that executes in a single instruction even with a fairly large number (e.g., 128) of mutex members. This approach requires a unique dense ordering of routines with an upper-bound and that ordering must be consistent across translation units. For OO languages these constraints are common, since objects only offer adding member routines consistently across translation units via inheritance. However, in \CFA users can extend objects with mutex routines that are only visible in certain translation unit. This means that establishing a program-wide dense-ordering among mutex routines can only be done in the program linking phase, and still could have issues when using dynamically shared objects.
     1996There are other alternatives to these pictures, but in the case of the left picture, implementing a fast accept check is relatively easy.
     1997Restricted to a fixed number of mutex members, N, the accept check reduces to updating a bitmask when the acceptor queue changes, a check that executes in a single instruction even with a fairly large number (\eg 128) of mutex members.
     1998This approach requires a unique dense ordering of routines with an upper-bound and that ordering must be consistent across translation units.
     1999For OO languages these constraints are common, since objects only offer adding member routines consistently across translation units via inheritance.
      2000However, in \CFA users can extend objects with mutex routines that are only visible in certain translation units.
      2001This means that establishing a program-wide dense ordering among mutex routines can only be done in the program linking phase, and could still have issues when using dynamically shared objects.
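As an illustration of the fixed-member approach, the accept check reduces to a mask test along the following lines; the names and layout are a sketch, not the runtime implementation.
\begin{cfa}
enum { N = 128 };                               // fixed upper bound on mutex members
unsigned long long accept_mask[N / 64];         // one bit per mutex member

void accept( unsigned int member_id ) {         // update mask when a waitfor names this member
	accept_mask[member_id / 64] |= 1ULL << (member_id % 64);
}
bool accepts( unsigned int member_id ) {        // fast check performed on monitor entry
	return (accept_mask[member_id / 64] >> (member_id % 64)) & 1;
}
\end{cfa}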
    15472002
    15482003The alternative is to alter the implementation as in Figure~\ref{fig:BulkMonitor}.
    1549 Here, the mutex routine called is associated with a thread on the entry queue while a list of acceptable routines is kept separate. Generating a mask dynamically means that the storage for the mask information can vary between calls to \code{waitfor}, allowing for more flexibility and extensions. Storing an array of accepted function pointers replaces the single instruction bitmask comparison with dereferencing a pointer followed by a linear search. Furthermore, supporting nested external scheduling (e.g., listing \ref{lst:nest-ext}) may now require additional searches for the \code{waitfor} statement to check if a routine is already queued.
     2004Here, the mutex routine called is associated with a thread on the entry queue while a list of acceptable routines is kept separate.
     2005Generating a mask dynamically means that the storage for the mask information can vary between calls to @waitfor@, allowing for more flexibility and extensions.
     2006Storing an array of accepted function pointers replaces the single instruction bitmask comparison with dereferencing a pointer followed by a linear search.
     2007Furthermore, supporting nested external scheduling (\eg listing \ref{f:nest-ext}) may now require additional searches for the @waitfor@ statement to check if a routine is already queued.
    15502008
    15512009\begin{figure}
    1552 \begin{cfacode}[caption={Example of nested external scheduling},label={lst:nest-ext}]
     2010\begin{cfa}[caption={Example of nested external scheduling},label={f:nest-ext}]
    15532011monitor M {};
    15542012void foo( M & mutex a ) {}
    15552013void bar( M & mutex b ) {
    1556         //Nested in the waitfor(bar, c) call
     2014        // Nested in the waitfor(bar, c) call
    15572015        waitfor(foo, b);
    15582016}
     
    15612019}
    15622020
    1563 \end{cfacode}
     2021\end{cfa}
    15642022\end{figure}
    15652023
    1566 Note that in the right picture, tasks need to always keep track of the monitors associated with mutex routines, and the routine mask needs to have both a function pointer and a set of monitors, as is discussed in the next section. These details are omitted from the picture for the sake of simplicity.
    1567 
    1568 At this point, a decision must be made between flexibility and performance. Many design decisions in \CFA achieve both flexibility and performance, for example polymorphic routines add significant flexibility but inlining them means the optimizer can easily remove any runtime cost. Here, however, the cost of flexibility cannot be trivially removed. In the end, the most flexible approach has been chosen since it allows users to write programs that would otherwise be  hard to write. This decision is based on the assumption that writing fast but inflexible locks is closer to a solved problem than writing locks that are as flexible as external scheduling in \CFA.
     2024Note that in the right picture, tasks need to always keep track of the monitors associated with mutex routines, and the routine mask needs to have both a function pointer and a set of monitors, as is discussed in the next section.
     2025These details are omitted from the picture for the sake of simplicity.
     2026
     2027At this point, a decision must be made between flexibility and performance.
     2028Many design decisions in \CFA achieve both flexibility and performance, for example polymorphic routines add significant flexibility but inlining them means the optimizer can easily remove any runtime cost.
     2029Here, however, the cost of flexibility cannot be trivially removed.
     2030In the end, the most flexible approach has been chosen since it allows users to write programs that would otherwise be  hard to write.
     2031This decision is based on the assumption that writing fast but inflexible locks is closer to a solved problem than writing locks that are as flexible as external scheduling in \CFA.
    15692032
    15702033% ======================================================================
     
    15742037% ======================================================================
    15752038
    1576 External scheduling, like internal scheduling, becomes significantly more complex when introducing multi-monitor syntax. Even in the simplest possible case, some new semantics needs to be established:
    1577 \begin{cfacode}
     2039External scheduling, like internal scheduling, becomes significantly more complex when introducing multi-monitor syntax.
     2040Even in the simplest possible case, some new semantics needs to be established:
     2041\begin{cfa}
    15782042monitor M {};
    15792043
     
    15812045
    15822046void g(M & mutex b, M & mutex c) {
    1583         waitfor(f); //two monitors M => unknown which to pass to f(M & mutex)
    1584 }
    1585 \end{cfacode}
     2047        waitfor(f); // two monitors M => unknown which to pass to f(M & mutex)
     2048}
     2049\end{cfa}
    15862050The obvious solution is to specify the correct monitor as follows:
    15872051
    1588 \begin{cfacode}
     2052\begin{cfa}
    15892053monitor M {};
    15902054
     
    15922056
    15932057void g(M & mutex a, M & mutex b) {
    1594         //wait for call to f with argument b
     2058        // wait for call to f with argument b
    15952059        waitfor(f, b);
    15962060}
    1597 \end{cfacode}
    1598 This syntax is unambiguous. Both locks are acquired and kept by \code{g}. When routine \code{f} is called, the lock for monitor \code{b} is temporarily transferred from \code{g} to \code{f} (while \code{g} still holds lock \code{a}). This behaviour can be extended to the multi-monitor \code{waitfor} statement as follows.
    1599 
    1600 \begin{cfacode}
     2061\end{cfa}
     2062This syntax is unambiguous.
     2063Both locks are acquired and kept by @g@.
     2064When routine @f@ is called, the lock for monitor @b@ is temporarily transferred from @g@ to @f@ (while @g@ still holds lock @a@).
     2065This behaviour can be extended to the multi-monitor @waitfor@ statement as follows.
     2066
     2067\begin{cfa}
    16012068monitor M {};
    16022069
     
    16042071
    16052072void g(M & mutex a, M & mutex b) {
    1606         //wait for call to f with arguments a and b
     2073        // wait for call to f with arguments a and b
    16072074        waitfor(f, a, b);
    16082075}
    1609 \end{cfacode}
    1610 
    1611 Note that the set of monitors passed to the \code{waitfor} statement must be entirely contained in the set of monitors already acquired in the routine. \code{waitfor} used in any other context is undefined behaviour.
     2076\end{cfa}
     2077
     2078Note that the set of monitors passed to the @waitfor@ statement must be entirely contained in the set of monitors already acquired in the routine. @waitfor@ used in any other context is undefined behaviour.
    16122079
      2081An important behaviour to note is when a set of monitors only matches partially:
    16142081
    1615 \begin{cfacode}
     2082\begin{cfa}
    16162083mutex struct A {};
    16172084
     
    16262093
    16272094void foo() {
    1628         g(a1, b); //block on accept
     2095        g(a1, b); // block on accept
    16292096}
    16302097
    16312098void bar() {
    1632         f(a2, b); //fulfill cooperation
    1633 }
    1634 \end{cfacode}
    1635 While the equivalent can happen when using internal scheduling, the fact that conditions are specific to a set of monitors means that users have to use two different condition variables. In both cases, partially matching monitor sets does not wakeup the waiting thread. It is also important to note that in the case of external scheduling the order of parameters is irrelevant; \code{waitfor(f,a,b)} and \code{waitfor(f,b,a)} are indistinguishable waiting condition.
    1636 
    1637 % ======================================================================
    1638 % ======================================================================
    1639 \subsection{\code{waitfor} Semantics}
    1640 % ======================================================================
    1641 % ======================================================================
    1642 
    1643 Syntactically, the \code{waitfor} statement takes a function identifier and a set of monitors. While the set of monitors can be any list of expressions, the function name is more restricted because the compiler validates at compile time the validity of the function type and the parameters used with the \code{waitfor} statement. It checks that the set of monitors passed in matches the requirements for a function call. Listing \ref{lst:waitfor} shows various usages of the waitfor statement and which are acceptable. The choice of the function type is made ignoring any non-\code{mutex} parameter. One limitation of the current implementation is that it does not handle overloading, but overloading is possible.
     2099        f(a2, b); // fulfill cooperation
     2100}
     2101\end{cfa}
     2102While the equivalent can happen when using internal scheduling, the fact that conditions are specific to a set of monitors means that users have to use two different condition variables.
      2103In both cases, a partially matching monitor set does not wake up the waiting thread.
      2104It is also important to note that in the case of external scheduling the order of parameters is irrelevant; @waitfor(f,a,b)@ and @waitfor(f,b,a)@ are indistinguishable waiting conditions.
     2105
     2106% ======================================================================
     2107% ======================================================================
     2108\subsection{\protect\lstinline|waitfor| Semantics}
     2109% ======================================================================
     2110% ======================================================================
     2111
     2112Syntactically, the @waitfor@ statement takes a function identifier and a set of monitors.
      2113While the set of monitors can be any list of expressions, the function name is more restricted because the compiler validates, at compile time, the function type and the parameters used with the @waitfor@ statement.
     2114It checks that the set of monitors passed in matches the requirements for a function call.
      2115Figure~\ref{f:waitfor} shows various usages of the @waitfor@ statement, both acceptable and unacceptable.
     2116The choice of the function type is made ignoring any non-@mutex@ parameter.
     2117One limitation of the current implementation is that it does not handle overloading, but overloading is possible.
    16442118\begin{figure}
    1645 \begin{cfacode}[caption={Various correct and incorrect uses of the waitfor statement},label={lst:waitfor}]
     2119\begin{cfa}[caption={Various correct and incorrect uses of the waitfor statement},label={f:waitfor}]
    16462120monitor A{};
    16472121monitor B{};
     
    16572131        void (*fp)( A & mutex ) = f1;
    16582132
    1659         waitfor(f1, a1);     //Correct : 1 monitor case
    1660         waitfor(f2, a1, b1); //Correct : 2 monitor case
    1661         waitfor(f3, a1);     //Correct : non-mutex arguments are ignored
    1662         waitfor(f1, *ap);    //Correct : expression as argument
    1663 
    1664         waitfor(f1, a1, b1); //Incorrect : Too many mutex arguments
    1665         waitfor(f2, a1);     //Incorrect : Too few mutex arguments
    1666         waitfor(f2, a1, a2); //Incorrect : Mutex arguments don't match
    1667         waitfor(f1, 1);      //Incorrect : 1 not a mutex argument
    1668         waitfor(f9, a1);     //Incorrect : f9 function does not exist
    1669         waitfor(*fp, a1 );   //Incorrect : fp not an identifier
    1670         waitfor(f4, a1);     //Incorrect : f4 ambiguous
    1671 
    1672         waitfor(f2, a1, b2); //Undefined behaviour : b2 not mutex
    1673 }
    1674 \end{cfacode}
     2133        waitfor(f1, a1);     // Correct : 1 monitor case
     2134        waitfor(f2, a1, b1); // Correct : 2 monitor case
     2135        waitfor(f3, a1);     // Correct : non-mutex arguments are ignored
     2136        waitfor(f1, *ap);    // Correct : expression as argument
     2137
     2138        waitfor(f1, a1, b1); // Incorrect : Too many mutex arguments
     2139        waitfor(f2, a1);     // Incorrect : Too few mutex arguments
     2140        waitfor(f2, a1, a2); // Incorrect : Mutex arguments don't match
     2141        waitfor(f1, 1);      // Incorrect : 1 not a mutex argument
     2142        waitfor(f9, a1);     // Incorrect : f9 function does not exist
     2143        waitfor(*fp, a1 );   // Incorrect : fp not an identifier
     2144        waitfor(f4, a1);     // Incorrect : f4 ambiguous
     2145
     2146        waitfor(f2, a1, b2); // Undefined behaviour : b2 not mutex
     2147}
     2148\end{cfa}
    16752149\end{figure}
    16762150
    1677 Finally, for added flexibility, \CFA supports constructing a complex \code{waitfor} statement using the \code{or}, \code{timeout} and \code{else}. Indeed, multiple \code{waitfor} clauses can be chained together using \code{or}; this chain forms a single statement that uses baton pass to any function that fits one of the function+monitor set passed in. To enable users to tell which accepted function executed, \code{waitfor}s are followed by a statement (including the null statement \code{;}) or a compound statement, which is executed after the clause is triggered. A \code{waitfor} chain can also be followed by a \code{timeout}, to signify an upper bound on the wait, or an \code{else}, to signify that the call should be non-blocking, which checks for a matching function call already arrived and otherwise continues. Any and all of these clauses can be preceded by a \code{when} condition to dynamically toggle the accept clauses on or off based on some current state. Listing \ref{lst:waitfor2} demonstrates several complex masks and some incorrect ones.
      2151Finally, for added flexibility, \CFA supports constructing a complex @waitfor@ statement using the @or@, @timeout@ and @else@ keywords.
     2152Indeed, multiple @waitfor@ clauses can be chained together using @or@; this chain forms a single statement that uses baton pass to any function that fits one of the function+monitor set passed in.
     2153To enable users to tell which accepted function executed, @waitfor@s are followed by a statement (including the null statement @;@) or a compound statement, which is executed after the clause is triggered.
     2154A @waitfor@ chain can also be followed by a @timeout@, to signify an upper bound on the wait, or an @else@, to signify that the call should be non-blocking, which checks for a matching function call already arrived and otherwise continues.
     2155Any and all of these clauses can be preceded by a @when@ condition to dynamically toggle the accept clauses on or off based on some current state.
     2156Figure~\ref{f:waitfor2} demonstrates several complex masks and some incorrect ones.
    16782157
    16792158\begin{figure}
    1680 \begin{cfacode}[caption={Various correct and incorrect uses of the or, else, and timeout clause around a waitfor statement},label={lst:waitfor2}]
     2159\lstset{language=CFA,deletedelim=**[is][]{`}{`}}
     2160\begin{cfa}
    16812161monitor A{};
    16822162
     
    16852165
    16862166void foo( A & mutex a, bool b, int t ) {
    1687         //Correct : blocking case
    1688         waitfor(f1, a);
    1689 
    1690         //Correct : block with statement
    1691         waitfor(f1, a) {
     2167        waitfor(f1, a);                                                 $\C{// Correct : blocking case}$
     2168
     2169        waitfor(f1, a) {                                                $\C{// Correct : block with statement}$
    16922170                sout | "f1" | endl;
    16932171        }
    1694 
    1695         //Correct : block waiting for f1 or f2
    1696         waitfor(f1, a) {
     2172        waitfor(f1, a) {                                                $\C{// Correct : block waiting for f1 or f2}$
    16972173                sout | "f1" | endl;
    16982174        } or waitfor(f2, a) {
    16992175                sout | "f2" | endl;
    17002176        }
    1701 
    1702         //Correct : non-blocking case
    1703         waitfor(f1, a); or else;
    1704 
    1705         //Correct : non-blocking case
    1706         waitfor(f1, a) {
     2177        waitfor(f1, a); or else;                                $\C{// Correct : non-blocking case}$
     2178
     2179        waitfor(f1, a) {                                                $\C{// Correct : non-blocking case}$
    17072180                sout | "blocked" | endl;
    17082181        } or else {
    17092182                sout | "didn't block" | endl;
    17102183        }
    1711 
    1712         //Correct : block at most 10 seconds
    1713         waitfor(f1, a) {
     2184        waitfor(f1, a) {                                                $\C{// Correct : block at most 10 seconds}$
    17142185                sout | "blocked" | endl;
    17152186        } or timeout( 10`s) {
    17162187                sout | "didn't block" | endl;
    17172188        }
    1718 
    1719         //Correct : block only if b == true
    1720         //if b == false, don't even make the call
      2189        // Correct : block only if b == true; if b == false, don't even make the call
    17212190        when(b) waitfor(f1, a);
    17222191
    1723         //Correct : block only if b == true
    1724         //if b == false, make non-blocking call
      2192        // Correct : block only if b == true; if b == false, make non-blocking call
    17252193        waitfor(f1, a); or when(!b) else;
    17262194
    1727         //Correct : block only of t > 1
      2195        // Correct : block only if t > 1
    17282196        waitfor(f1, a); or when(t > 1) timeout(t); or else;
    17292197
    1730         //Incorrect : timeout clause is dead code
     2198        // Incorrect : timeout clause is dead code
    17312199        waitfor(f1, a); or timeout(t); or else;
    17322200
    1733         //Incorrect : order must be
    1734         //waitfor [or waitfor... [or timeout] [or else]]
     2201        // Incorrect : order must be waitfor [or waitfor... [or timeout] [or else]]
    17352202        timeout(t); or waitfor(f1, a); or else;
    17362203}
    1737 \end{cfacode}
     2204\end{cfa}
     2205\caption{Correct and incorrect uses of the or, else, and timeout clause around a waitfor statement}
     2206\label{f:waitfor2}
    17382207\end{figure}
    17392208
     
    17432212% ======================================================================
    17442213% ======================================================================
    1745 An interesting use for the \code{waitfor} statement is destructor semantics. Indeed, the \code{waitfor} statement can accept any \code{mutex} routine, which includes the destructor (see section \ref{data}). However, with the semantics discussed until now, waiting for the destructor does not make any sense, since using an object after its destructor is called is undefined behaviour. The simplest approach is to disallow \code{waitfor} on a destructor. However, a more expressive approach is to flip ordering of execution when waiting for the destructor, meaning that waiting for the destructor allows the destructor to run after the current \code{mutex} routine, similarly to how a condition is signalled.
     2214An interesting use for the @waitfor@ statement is destructor semantics.
     2215Indeed, the @waitfor@ statement can accept any @mutex@ routine, which includes the destructor (see section \ref{data}).
     2216However, with the semantics discussed until now, waiting for the destructor does not make any sense, since using an object after its destructor is called is undefined behaviour.
     2217The simplest approach is to disallow @waitfor@ on a destructor.
     2218However, a more expressive approach is to flip ordering of execution when waiting for the destructor, meaning that waiting for the destructor allows the destructor to run after the current @mutex@ routine, similarly to how a condition is signalled.
    17462219\begin{figure}
    1747 \begin{cfacode}[caption={Example of an executor which executes action in series until the destructor is called.},label={lst:dtor-order}]
     2220\begin{cfa}[caption={Example of an executor which executes action in series until the destructor is called.},label={f:dtor-order}]
    17482221monitor Executer {};
    17492222struct  Action;
     
    17592232        }
    17602233}
    1761 \end{cfacode}
     2234\end{cfa}
    17622235\end{figure}
    1763 For example, listing \ref{lst:dtor-order} shows an example of an executor with an infinite loop, which waits for the destructor to break out of this loop. Switching the semantic meaning introduces an idiomatic way to terminate a task and/or wait for its termination via destruction.
      2236For example, listing \ref{f:dtor-order} shows an executor with an infinite loop, which waits for the destructor to break out of this loop.
     2237Switching the semantic meaning introduces an idiomatic way to terminate a task and/or wait for its termination via destruction.
    17642238
    17652239
     
    17722246% #       #     # #     # #     # ####### ####### ####### ####### ###  #####  #     #
    17732247\section{Parallelism}
    1774 Historically, computer performance was about processor speeds and instruction counts. However, with heat dissipation being a direct consequence of speed increase, parallelism has become the new source for increased performance~\cite{Sutter05, Sutter05b}. In this decade, it is no longer reasonable to create a high-performance application without caring about parallelism. Indeed, parallelism is an important aspect of performance and more specifically throughput and hardware utilization. The lowest-level approach of parallelism is to use \textbf{kthread} in combination with semantics like \code{fork}, \code{join}, etc. However, since these have significant costs and limitations, \textbf{kthread} are now mostly used as an implementation tool rather than a user oriented one. There are several alternatives to solve these issues that all have strengths and weaknesses. While there are many variations of the presented paradigms, most of these variations do not actually change the guarantees or the semantics, they simply move costs in order to achieve better performance for certain workloads.
     2248Historically, computer performance was about processor speeds and instruction counts.
     2249However, with heat dissipation being a direct consequence of speed increase, parallelism has become the new source for increased performance~\cite{Sutter05, Sutter05b}.
     2250In this decade, it is no longer reasonable to create a high-performance application without caring about parallelism.
     2251Indeed, parallelism is an important aspect of performance and more specifically throughput and hardware utilization.
      2252The lowest-level approach to parallelism is to use \textbf{kthread} in combination with semantics like @fork@, @join@, \etc.
      2253However, since these have significant costs and limitations, \textbf{kthread} are now mostly used as an implementation tool rather than a user-oriented one.
     2254There are several alternatives to solve these issues that all have strengths and weaknesses.
     2255While there are many variations of the presented paradigms, most of these variations do not actually change the guarantees or the semantics, they simply move costs in order to achieve better performance for certain workloads.
    17752256
    17762257\section{Paradigms}
    17772258\subsection{User-Level Threads}
    1778 A direct improvement on the \textbf{kthread} approach is to use \textbf{uthread}. These threads offer most of the same features that the operating system already provides but can be used on a much larger scale. This approach is the most powerful solution as it allows all the features of multithreading, while removing several of the more expensive costs of kernel threads. The downside is that almost none of the low-level threading problems are hidden; users still have to think about data races, deadlocks and synchronization issues. These issues can be somewhat alleviated by a concurrency toolkit with strong guarantees, but the parallelism toolkit offers very little to reduce complexity in itself.
     2259A direct improvement on the \textbf{kthread} approach is to use \textbf{uthread}.
     2260These threads offer most of the same features that the operating system already provides but can be used on a much larger scale.
     2261This approach is the most powerful solution as it allows all the features of multithreading, while removing several of the more expensive costs of kernel threads.
     2262The downside is that almost none of the low-level threading problems are hidden; users still have to think about data races, deadlocks and synchronization issues.
     2263These issues can be somewhat alleviated by a concurrency toolkit with strong guarantees, but the parallelism toolkit offers very little to reduce complexity in itself.
    17792264
    17802265Examples of languages that support \textbf{uthread} are Erlang~\cite{Erlang} and \uC~\cite{uC++book}.
    17812266
    17822267\subsection{Fibers : User-Level Threads Without Preemption} \label{fibers}
    1783 A popular variant of \textbf{uthread} is what is often referred to as \textbf{fiber}. However, \textbf{fiber} do not present meaningful semantic differences with \textbf{uthread}. The significant difference between \textbf{uthread} and \textbf{fiber} is the lack of \textbf{preemption} in the latter. Advocates of \textbf{fiber} list their high performance and ease of implementation as major strengths, but the performance difference between \textbf{uthread} and \textbf{fiber} is controversial, and the ease of implementation, while true, is a weak argument in the context of language design. Therefore this proposal largely ignores fibers.
     2268A popular variant of \textbf{uthread} is what is often referred to as \textbf{fiber}.
     2269However, \textbf{fiber} do not present meaningful semantic differences with \textbf{uthread}.
     2270The significant difference between \textbf{uthread} and \textbf{fiber} is the lack of \textbf{preemption} in the latter.
     2271Advocates of \textbf{fiber} list their high performance and ease of implementation as major strengths, but the performance difference between \textbf{uthread} and \textbf{fiber} is controversial, and the ease of implementation, while true, is a weak argument in the context of language design.
     2272Therefore this proposal largely ignores fibers.
    17842273
      2275An example of a language that uses fibers is Go~\cite{Go}.
    17862275
    17872276\subsection{Jobs and Thread Pools}
    1788 An approach on the opposite end of the spectrum is to base parallelism on \textbf{pool}. Indeed, \textbf{pool} offer limited flexibility but at the benefit of a simpler user interface. In \textbf{pool} based systems, users express parallelism as units of work, called jobs, and a dependency graph (either explicit or implicit) that ties them together. This approach means users need not worry about concurrency but significantly limit the interaction that can occur among jobs. Indeed, any \textbf{job} that blocks also block the underlying worker, which effectively means the CPU utilization, and therefore throughput, suffers noticeably. It can be argued that a solution to this problem is to use more workers than available cores. However, unless the number of jobs and the number of workers are comparable, having a significant number of blocked jobs always results in idles cores.
     2277An approach on the opposite end of the spectrum is to base parallelism on \textbf{pool}.
     2278Indeed, \textbf{pool} offer limited flexibility but at the benefit of a simpler user interface.
     2279In \textbf{pool} based systems, users express parallelism as units of work, called jobs, and a dependency graph (either explicit or implicit) that ties them together.
      2280This approach means users need not worry about concurrency but significantly limits the interaction that can occur among jobs.
      2281Indeed, any \textbf{job} that blocks also blocks the underlying worker, which effectively means CPU utilization, and therefore throughput, suffers noticeably.
     2282It can be argued that a solution to this problem is to use more workers than available cores.
      2283However, unless the number of jobs and the number of workers are comparable, having a significant number of blocked jobs always results in idle cores.
    17892284
    17902285The gold standard of this implementation is Intel's TBB library~\cite{TBB}.
    17912286
    17922287\subsection{Paradigm Performance}
    1793 While the choice between the three paradigms listed above may have significant performance implications, it is difficult to pin down the performance implications of choosing a model at the language level. Indeed, in many situations one of these paradigms may show better performance but it all strongly depends on the workload. Having a large amount of mostly independent units of work to execute almost guarantees equivalent performance across paradigms and that the \textbf{pool}-based system has the best efficiency thanks to the lower memory overhead (i.e., no thread stack per job). However, interactions among jobs can easily exacerbate contention. User-level threads allow fine-grain context switching, which results in better resource utilization, but a context switch is more expensive and the extra control means users need to tweak more variables to get the desired performance. Finally, if the units of uninterrupted work are large, enough the paradigm choice is largely amortized by the actual work done.
     2288While the choice between the three paradigms listed above may have significant performance implications, it is difficult to pin down the performance implications of choosing a model at the language level.
     2289Indeed, in many situations one of these paradigms may show better performance but it all strongly depends on the workload.
      2290Having a large number of mostly independent units of work to execute almost guarantees equivalent performance across paradigms and that the \textbf{pool}-based system has the best efficiency thanks to the lower memory overhead (\ie no thread stack per job).
     2291However, interactions among jobs can easily exacerbate contention.
     2292User-level threads allow fine-grain context switching, which results in better resource utilization, but a context switch is more expensive and the extra control means users need to tweak more variables to get the desired performance.
      2293Finally, if the units of uninterrupted work are large enough, the paradigm choice is largely amortized by the actual work done.
    17942294
    17952295\section{The \protect\CFA\ Kernel : Processors, Clusters and Threads}\label{kernel}
    1796 A \textbf{cfacluster} is a group of \textbf{kthread} executed in isolation. \textbf{uthread} are scheduled on the \textbf{kthread} of a given \textbf{cfacluster}, allowing organization between \textbf{uthread} and \textbf{kthread}. It is important that \textbf{kthread} belonging to a same \textbf{cfacluster} have homogeneous settings, otherwise migrating a \textbf{uthread} from one \textbf{kthread} to the other can cause issues. A \textbf{cfacluster} also offers a pluggable scheduler that can optimize the workload generated by the \textbf{uthread}.
    1797 
    1798 \textbf{cfacluster} have not been fully implemented in the context of this paper. Currently \CFA only supports one \textbf{cfacluster}, the initial one.
     2296A \textbf{cfacluster} is a group of \textbf{kthread} executed in isolation. \textbf{uthread} are scheduled on the \textbf{kthread} of a given \textbf{cfacluster}, allowing organization between \textbf{uthread} and \textbf{kthread}.
      2297It is important that \textbf{kthread} belonging to the same \textbf{cfacluster} have homogeneous settings, otherwise migrating a \textbf{uthread} from one \textbf{kthread} to another can cause issues.
     2298A \textbf{cfacluster} also offers a pluggable scheduler that can optimize the workload generated by the \textbf{uthread}.
     2299
     2300\textbf{cfacluster} have not been fully implemented in the context of this paper.
     2301Currently \CFA only supports one \textbf{cfacluster}, the initial one.
    17992302
    18002303\subsection{Future Work: Machine Setup}\label{machine}
    1801 While this was not done in the context of this paper, another important aspect of clusters is affinity. While many common desktop and laptop PCs have homogeneous CPUs, other devices often have more heterogeneous setups. For example, a system using \textbf{numa} configurations may benefit from users being able to tie clusters and/or kernel threads to certain CPU cores. OS support for CPU affinity is now common~\cite{affinityLinux, affinityWindows, affinityFreebsd, affinityNetbsd, affinityMacosx}, which means it is both possible and desirable for \CFA to offer an abstraction mechanism for portable CPU affinity.
     2304While this was not done in the context of this paper, another important aspect of clusters is affinity.
     2305While many common desktop and laptop PCs have homogeneous CPUs, other devices often have more heterogeneous setups.
     2306For example, a system using \textbf{numa} configurations may benefit from users being able to tie clusters and/or kernel threads to certain CPU cores.
     2307OS support for CPU affinity is now common~\cite{affinityLinux, affinityWindows, affinityFreebsd, affinityNetbsd, affinityMacosx}, which means it is both possible and desirable for \CFA to offer an abstraction mechanism for portable CPU affinity.
    18022308
    18032309\subsection{Paradigms}\label{cfaparadigms}
    1804 Given these building blocks, it is possible to reproduce all three of the popular paradigms. Indeed, \textbf{uthread} is the default paradigm in \CFA. However, disabling \textbf{preemption} on the \textbf{cfacluster} means \textbf{cfathread} effectively become \textbf{fiber}. Since several \textbf{cfacluster} with different scheduling policy can coexist in the same application, this allows \textbf{fiber} and \textbf{uthread} to coexist in the runtime of an application. Finally, it is possible to build executors for thread pools from \textbf{uthread} or \textbf{fiber}, which includes specialized jobs like actors~\cite{Actors}.
     2310Given these building blocks, it is possible to reproduce all three of the popular paradigms.
     2311Indeed, \textbf{uthread} is the default paradigm in \CFA.
     2312However, disabling \textbf{preemption} on the \textbf{cfacluster} means \textbf{cfathread} effectively become \textbf{fiber}.
      2313Since several \textbf{cfacluster} with different scheduling policies can coexist in the same application, this allows \textbf{fiber} and \textbf{uthread} to coexist in the runtime of an application.
     2314Finally, it is possible to build executors for thread pools from \textbf{uthread} or \textbf{fiber}, which includes specialized jobs like actors~\cite{Actors}.
    18052315
    18062316
    18072317
    18082318\section{Behind the Scenes}
    1809 There are several challenges specific to \CFA when implementing concurrency. These challenges are a direct result of \textbf{bulk-acq} and loose object definitions. These two constraints are the root cause of most design decisions in the implementation. Furthermore, to avoid contention from dynamically allocating memory in a concurrent environment, the internal-scheduling design is (almost) entirely free of mallocs. This approach avoids the chicken and egg problem~\cite{Chicken} of having a memory allocator that relies on the threading system and a threading system that relies on the runtime. This extra goal means that memory management is a constant concern in the design of the system.
    1810 
    1811 The main memory concern for concurrency is queues. All blocking operations are made by parking threads onto queues and all queues are designed with intrusive nodes, where each node has pre-allocated link fields for chaining, to avoid the need for memory allocation. Since several concurrency operations can use an unbound amount of memory (depending on \textbf{bulk-acq}), statically defining information in the intrusive fields of threads is insufficient.The only way to use a variable amount of memory without requiring memory allocation is to pre-allocate large buffers of memory eagerly and store the information in these buffers. Conveniently, the call stack fits that description and is easy to use, which is why it is used heavily in the implementation of internal scheduling, particularly variable-length arrays. Since stack allocation is based on scopes, the first step of the implementation is to identify the scopes that are available to store the information, and which of these can have a variable-length array. The threads and the condition both have a fixed amount of memory, while \code{mutex} routines and blocking calls allow for an unbound amount, within the stack size.
     2319There are several challenges specific to \CFA when implementing concurrency.
     2320These challenges are a direct result of \textbf{bulk-acq} and loose object definitions.
     2321These two constraints are the root cause of most design decisions in the implementation.
     2322Furthermore, to avoid contention from dynamically allocating memory in a concurrent environment, the internal-scheduling design is (almost) entirely free of mallocs.
     2323This approach avoids the chicken and egg problem~\cite{Chicken} of having a memory allocator that relies on the threading system and a threading system that relies on the runtime.
     2324This extra goal means that memory management is a constant concern in the design of the system.
     2325
     2326The main memory concern for concurrency is queues.
     2327All blocking operations are made by parking threads onto queues and all queues are designed with intrusive nodes, where each node has pre-allocated link fields for chaining, to avoid the need for memory allocation.
      2328Since several concurrency operations can use an unbound amount of memory (depending on \textbf{bulk-acq}), statically defining information in the intrusive fields of threads is insufficient. The only way to use a variable amount of memory without requiring memory allocation is to pre-allocate large buffers of memory eagerly and store the information in these buffers.
     2329Conveniently, the call stack fits that description and is easy to use, which is why it is used heavily in the implementation of internal scheduling, particularly variable-length arrays.
     2330Since stack allocation is based on scopes, the first step of the implementation is to identify the scopes that are available to store the information, and which of these can have a variable-length array.
     2331The threads and the condition both have a fixed amount of memory, while @mutex@ routines and blocking calls allow for an unbound amount, within the stack size.
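For example, an intrusive blocking-queue node can be sketched as follows, where the link field lives inside the thread descriptor so parking a thread never allocates; the type and field names are assumptions, not the actual runtime layout.
\begin{cfa}
struct thread_desc {
	struct thread_desc * next;      // intrusive link used while parked on a blocking queue
	// ... remaining thread state ...
};
struct thread_queue {
	struct thread_desc * head, * tail;      // queue of parked threads, no per-node allocation
};
\end{cfa}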
    18122332
     18132333Note that since the major contributions of this paper are extending monitor semantics to \textbf{bulk-acq} and loose object definitions, any challenges not resulting from these characteristics of \CFA are considered solved problems and therefore not discussed.
     
    18192339% ======================================================================
    18202340
    1821 The first step towards the monitor implementation is simple \code{mutex} routines. In the single monitor case, mutual-exclusion is done using the entry/exit procedure in listing \ref{lst:entry1}. The entry/exit procedures do not have to be extended to support multiple monitors. Indeed it is sufficient to enter/leave monitors one-by-one as long as the order is correct to prevent deadlock~\cite{Havender68}. In \CFA, ordering of monitor acquisition relies on memory ordering. This approach is sufficient because all objects are guaranteed to have distinct non-overlapping memory layouts and mutual-exclusion for a monitor is only defined for its lifetime, meaning that destroying a monitor while it is acquired is undefined behaviour. When a mutex call is made, the concerned monitors are aggregated into a variable-length pointer array and sorted based on pointer values. This array persists for the entire duration of the mutual-exclusion and its ordering reused extensively.
     2341The first step towards the monitor implementation is simple @mutex@ routines.
     2342In the single monitor case, mutual-exclusion is done using the entry/exit procedure in listing \ref{f:entry1}.
     2343The entry/exit procedures do not have to be extended to support multiple monitors.
     2344Indeed it is sufficient to enter/leave monitors one-by-one as long as the order is correct to prevent deadlock~\cite{Havender68}.
In \CFA, ordering of monitor acquisition relies on the ordering of the monitors' memory addresses.
     2346This approach is sufficient because all objects are guaranteed to have distinct non-overlapping memory layouts and mutual-exclusion for a monitor is only defined for its lifetime, meaning that destroying a monitor while it is acquired is undefined behaviour.
     2347When a mutex call is made, the concerned monitors are aggregated into a variable-length pointer array and sorted based on pointer values.
This array persists for the entire duration of the mutual-exclusion and its ordering is reused extensively.
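
A minimal sketch of this ordered acquisition is given below, assuming a single-monitor @enter@ routine; the helper names are illustrative only and do not correspond to the actual runtime code.
\begin{cfa}
#include <stdlib.h>
#include <stdint.h>

struct monitor_desc;                        // opaque monitor descriptor (assumed)
extern void enter( struct monitor_desc * ); // assumed single-monitor entry routine

static int addr_cmp( const void * a, const void * b ) {
	uintptr_t x = (uintptr_t)*(struct monitor_desc * const *)a;
	uintptr_t y = (uintptr_t)*(struct monitor_desc * const *)b;
	return (x > y) - (x < y);               // order strictly by address
}

static void lock_all( struct monitor_desc * monitors[], size_t count ) {
	qsort( monitors, count, sizeof(monitors[0]), addr_cmp );  // sort the pointer array once
	for ( size_t i = 0; i < count; i += 1 )
		enter( monitors[i] );               // acquire in ascending address order
}
\end{cfa}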
    18222349\begin{figure}
    18232350\begin{multicols}{2}
    18242351Entry
    1825 \begin{pseudo}
     2352\begin{cfa}
    18262353if monitor is free
    18272354        enter
     
    18312358        block
    18322359increment recursions
    1833 \end{pseudo}
     2360\end{cfa}
    18342361\columnbreak
    18352362Exit
    1836 \begin{pseudo}
     2363\begin{cfa}
    18372364decrement recursion
    18382365if recursion == 0
    18392366        if entry queue not empty
    18402367                wake-up thread
    1841 \end{pseudo}
     2368\end{cfa}
    18422369\end{multicols}
    1843 \begin{pseudo}[caption={Initial entry and exit routine for monitors},label={lst:entry1}]
    1844 \end{pseudo}
     2370\begin{cfa}[caption={Initial entry and exit routine for monitors},label={f:entry1}]
     2371\end{cfa}
    18452372\end{figure}
    18462373
    18472374\subsection{Details: Interaction with polymorphism}
    1848 Depending on the choice of semantics for when monitor locks are acquired, interaction between monitors and \CFA's concept of polymorphism can be more complex to support. However, it is shown that entry-point locking solves most of the issues.
    1849 
    1850 First of all, interaction between \code{otype} polymorphism (see Section~\ref{s:ParametricPolymorphism}) and monitors is impossible since monitors do not support copying. Therefore, the main question is how to support \code{dtype} polymorphism. It is important to present the difference between the two acquiring options: \textbf{callsite-locking} and entry-point locking, i.e., acquiring the monitors before making a mutex routine-call or as the first operation of the mutex routine-call. For example:
    1851 \begin{table}[H]
     2375Depending on the choice of semantics for when monitor locks are acquired, interaction between monitors and \CFA's concept of polymorphism can be more complex to support.
     2376However, it is shown that entry-point locking solves most of the issues.
     2377
     2378First of all, interaction between @otype@ polymorphism (see Section~\ref{s:ParametricPolymorphism}) and monitors is impossible since monitors do not support copying.
     2379Therefore, the main question is how to support @dtype@ polymorphism.
     2380It is important to present the difference between the two acquiring options: \textbf{callsite-locking} and entry-point locking, \ie acquiring the monitors before making a mutex routine-call or as the first operation of the mutex routine-call.
     2381For example:
     2382\begin{table}
    18522383\begin{center}
    18532384\begin{tabular}{|c|c|c|}
    18542385Mutex & \textbf{callsite-locking} & \textbf{entry-point-locking} \\
    1855 call & pseudo-code & pseudo-code \\
     2386call & cfa-code & cfa-code \\
    18562387\hline
    1857 \begin{cfacode}[tabsize=3]
     2388\begin{cfa}[tabsize=3]
    18582389void foo(monitor& mutex a){
    18592390
    1860         //Do Work
     2391        // Do Work
    18612392        //...
    18622393
     
    18692400
    18702401}
    1871 \end{cfacode} & \begin{pseudo}[tabsize=3]
     2402\end{cfa} & \begin{cfa}[tabsize=3]
    18722403foo(& a) {
    18732404
    1874         //Do Work
     2405        // Do Work
    18752406        //...
    18762407
     
    18832414        release(a);
    18842415}
    1885 \end{pseudo} & \begin{pseudo}[tabsize=3]
     2416\end{cfa} & \begin{cfa}[tabsize=3]
    18862417foo(& a) {
    18872418        acquire(a);
    1888         //Do Work
     2419        // Do Work
    18892420        //...
    18902421        release(a);
     
    18972428
    18982429}
    1899 \end{pseudo}
     2430\end{cfa}
    19002431\end{tabular}
    19012432\end{center}
     
    19042435\end{table}
    19052436
    1906 Note the \code{mutex} keyword relies on the type system, which means that in cases where a generic monitor-routine is desired, writing the mutex routine is possible with the proper trait, e.g.:
    1907 \begin{cfacode}
    1908 //Incorrect: T may not be monitor
     2437Note the @mutex@ keyword relies on the type system, which means that in cases where a generic monitor-routine is desired, writing the mutex routine is possible with the proper trait, \eg:
     2438\begin{cfa}
     2439// Incorrect: T may not be monitor
    19092440forall(dtype T)
    19102441void foo(T * mutex t);
    19112442
    1912 //Correct: this function only works on monitors (any monitor)
     2443// Correct: this function only works on monitors (any monitor)
    19132444forall(dtype T | is_monitor(T))
void bar(T * mutex t);
    1915 \end{cfacode}
    1916 
    1917 Both entry point and \textbf{callsite-locking} are feasible implementations. The current \CFA implementation uses entry-point locking because it requires less work when using \textbf{raii}, effectively transferring the burden of implementation to object construction/destruction. It is harder to use \textbf{raii} for call-site locking, as it does not necessarily have an existing scope that matches exactly the scope of the mutual exclusion, i.e., the function body. For example, the monitor call can appear in the middle of an expression. Furthermore, entry-point locking requires less code generation since any useful routine is called multiple times but there is only one entry point for many call sites.
     2446\end{cfa}
     2447
     2448Both entry point and \textbf{callsite-locking} are feasible implementations.
     2449The current \CFA implementation uses entry-point locking because it requires less work when using \textbf{raii}, effectively transferring the burden of implementation to object construction/destruction.
     2450It is harder to use \textbf{raii} for call-site locking, as it does not necessarily have an existing scope that matches exactly the scope of the mutual exclusion, \ie the function body.
     2451For example, the monitor call can appear in the middle of an expression.
     2452Furthermore, entry-point locking requires less code generation since any useful routine is called multiple times but there is only one entry point for many call sites.
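
For illustration, the code generated for an entry-point-locked routine can be pictured as the following sketch, where @monitor_guard_t@ is a hypothetical \textbf{raii} guard whose constructor acquires the monitor and whose destructor releases it; the real generated code differs in its details.
\begin{cfa}
void foo( monitor_desc & m ) {
	monitor_guard_t guard = { &m };   // constructor acquires the monitor
	// ... body of the mutex routine ...
}	// destructor releases the monitor on every exit path
\end{cfa}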
    19182453
    19192454% ======================================================================
     
    19232458% ======================================================================
    19242459
    1925 Figure \ref{fig:system1} shows a high-level picture if the \CFA runtime system in regards to concurrency. Each component of the picture is explained in detail in the flowing sections.
Figure \ref{fig:system1} shows a high-level picture of the \CFA runtime system with regard to concurrency.
Each component of the picture is explained in detail in the following sections.
    19262462
    19272463\begin{figure}
     
    19342470
    19352471\subsection{Processors}
    1936 Parallelism in \CFA is built around using processors to specify how much parallelism is desired. \CFA processors are object wrappers around kernel threads, specifically \texttt{pthread}s in the current implementation of \CFA. Indeed, any parallelism must go through operating-system libraries. However, \textbf{uthread} are still the main source of concurrency, processors are simply the underlying source of parallelism. Indeed, processor \textbf{kthread} simply fetch a \textbf{uthread} from the scheduler and run it; they are effectively executers for user-threads. The main benefit of this approach is that it offers a well-defined boundary between kernel code and user code, for example, kernel thread quiescing, scheduling and interrupt handling. Processors internally use coroutines to take advantage of the existing context-switching semantics.
     2472Parallelism in \CFA is built around using processors to specify how much parallelism is desired. \CFA processors are object wrappers around kernel threads, specifically @pthread@s in the current implementation of \CFA.
     2473Indeed, any parallelism must go through operating-system libraries.
However, \textbf{uthread}s are still the main source of concurrency; processors are simply the underlying source of parallelism.
Indeed, processor \textbf{kthread}s simply fetch a \textbf{uthread} from the scheduler and run it; they are effectively executors for user threads.
     2476The main benefit of this approach is that it offers a well-defined boundary between kernel code and user code, for example, kernel thread quiescing, scheduling and interrupt handling.
     2477Processors internally use coroutines to take advantage of the existing context-switching semantics.
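
The executor role of a processor can be sketched as follows; the scheduler hooks @next_thread@ and @run_thread@ are assumed names used only for illustration and are not the actual runtime interface.
\begin{cfa}
#include <pthread.h>
#include <stddef.h>

struct cluster;                                 // scheduler state (assumed)
struct thread_desc;                             // user-thread descriptor (assumed)

struct processor {
	pthread_t kernel_thread;                    // underlying kernel thread
	struct cluster * cltr;                      // cluster this processor serves
};

extern struct thread_desc * next_thread( struct cluster * );         // assumed scheduler hook
extern void run_thread( struct processor *, struct thread_desc * );  // assumed context-switch hook

static void * processor_main( void * arg ) {
	struct processor * proc = arg;
	for ( ;; ) {                                // executor loop
		struct thread_desc * t = next_thread( proc->cltr );   // fetch a ready user thread
		if ( t == NULL ) break;                 // cluster is shutting down
		run_thread( proc, t );                  // run it until it blocks or yields
	}
	return NULL;
}
\end{cfa}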
    19372478
    19382479\subsection{Stack Management}
    1939 One of the challenges of this system is to reduce the footprint as much as possible. Specifically, all \texttt{pthread}s created also have a stack created with them, which should be used as much as possible. Normally, coroutines also create their own stack to run on, however, in the case of the coroutines used for processors, these coroutines run directly on the \textbf{kthread} stack, effectively stealing the processor stack. The exception to this rule is the Main Processor, i.e., the initial \textbf{kthread} that is given to any program. In order to respect C user expectations, the stack of the initial kernel thread, the main stack of the program, is used by the main user thread rather than the main processor, which can grow very large.
     2480One of the challenges of this system is to reduce the footprint as much as possible.
     2481Specifically, all @pthread@s created also have a stack created with them, which should be used as much as possible.
Normally, coroutines also create their own stack to run on; however, in the case of the coroutines used for processors, these coroutines run directly on the \textbf{kthread} stack, effectively stealing the processor stack.
     2483The exception to this rule is the Main Processor, \ie the initial \textbf{kthread} that is given to any program.
In order to respect C user expectations, the stack of the initial kernel thread (the main stack of the program, which can grow very large) is used by the main user thread rather than by the main processor.
    19402485
    19412486\subsection{Context Switching}
    1942 As mentioned in section \ref{coroutine}, coroutines are a stepping stone for implementing threading, because they share the same mechanism for context-switching between different stacks. To improve performance and simplicity, context-switching is implemented using the following assumption: all context-switches happen inside a specific function call. This assumption means that the context-switch only has to copy the callee-saved registers onto the stack and then switch the stack registers with the ones of the target coroutine/thread. Note that the instruction pointer can be left untouched since the context-switch is always inside the same function. Threads, however, do not context-switch between each other directly. They context-switch to the scheduler. This method is called a 2-step context-switch and has the advantage of having a clear distinction between user code and the kernel where scheduling and other system operations happen. Obviously, this doubles the context-switch cost because threads must context-switch to an intermediate stack. The alternative 1-step context-switch uses the stack of the ``from'' thread to schedule and then context-switches directly to the ``to'' thread. However, the performance of the 2-step context-switch is still superior to a \code{pthread_yield} (see section \ref{results}). Additionally, for users in need for optimal performance, it is important to note that having a 2-step context-switch as the default does not prevent \CFA from offering a 1-step context-switch (akin to the Microsoft \code{SwitchToFiber}~\cite{switchToWindows} routine). This option is not currently present in \CFA, but the changes required to add it are strictly additive.
     2487As mentioned in section \ref{coroutine}, coroutines are a stepping stone for implementing threading, because they share the same mechanism for context-switching between different stacks.
     2488To improve performance and simplicity, context-switching is implemented using the following assumption: all context-switches happen inside a specific function call.
     2489This assumption means that the context-switch only has to copy the callee-saved registers onto the stack and then switch the stack registers with the ones of the target coroutine/thread.
     2490Note that the instruction pointer can be left untouched since the context-switch is always inside the same function.
     2491Threads, however, do not context-switch between each other directly.
     2492They context-switch to the scheduler.
     2493This method is called a 2-step context-switch and has the advantage of having a clear distinction between user code and the kernel where scheduling and other system operations happen.
     2494Obviously, this doubles the context-switch cost because threads must context-switch to an intermediate stack.
     2495The alternative 1-step context-switch uses the stack of the ``from'' thread to schedule and then context-switches directly to the ``to'' thread.
     2496However, the performance of the 2-step context-switch is still superior to a @pthread_yield@ (see section \ref{results}).
Additionally, for users in need of optimal performance, it is important to note that having a 2-step context-switch as the default does not prevent \CFA from offering a 1-step context-switch (akin to the Microsoft @SwitchToFiber@~\cite{switchToWindows} routine).
     2498This option is not currently present in \CFA, but the changes required to add it are strictly additive.
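
A yield under this scheme can be sketched as below; @ctx_switch@, @kernel_context@, and the other helpers are assumed names used only to illustrate the two steps, not the actual runtime routines.
\begin{cfa}
struct context { void * SP, * FP; };            // callee-saved state (illustrative layout)
struct thread_desc { struct context ctx; /* ... */ };
extern struct thread_desc * active_thread( void );               // assumed runtime hooks
extern void schedule( struct thread_desc * );                    // re-enqueue on the ready queue
extern struct context * kernel_context( void );                  // per-processor scheduler context
extern void ctx_switch( struct context * from, struct context * to );

void yield( void ) {
	struct thread_desc * self = active_thread();
	schedule( self );                           // make the yielding thread ready again
	// step 1: switch to the scheduler on the processor stack; the scheduler then
	// performs step 2, switching to the next ready thread (possibly this one)
	ctx_switch( &self->ctx, kernel_context() );
}
\end{cfa}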
    19432499
    19442500\subsection{Preemption} \label{preemption}
    1945 Finally, an important aspect for any complete threading system is preemption. As mentioned in section \ref{basics}, preemption introduces an extra degree of uncertainty, which enables users to have multiple threads interleave transparently, rather than having to cooperate among threads for proper scheduling and CPU distribution. Indeed, preemption is desirable because it adds a degree of isolation among threads. In a fully cooperative system, any thread that runs a long loop can starve other threads, while in a preemptive system, starvation can still occur but it does not rely on every thread having to yield or block on a regular basis, which reduces significantly a programmer burden. Obviously, preemption is not optimal for every workload. However any preemptive system can become a cooperative system by making the time slices extremely large. Therefore, \CFA uses a preemptive threading system.
    1946 
    1947 Preemption in \CFA\footnote{Note that the implementation of preemption is strongly tied with the underlying threading system. For this reason, only the Linux implementation is cover, \CFA does not run on Windows at the time of writting} is based on kernel timers, which are used to run a discrete-event simulation. Every processor keeps track of the current time and registers an expiration time with the preemption system. When the preemption system receives a change in preemption, it inserts the time in a sorted order and sets a kernel timer for the closest one, effectively stepping through preemption events on each signal sent by the timer. These timers use the Linux signal {\tt SIGALRM}, which is delivered to the process rather than the kernel-thread. This results in an implementation problem, because when delivering signals to a process, the kernel can deliver the signal to any kernel thread for which the signal is not blocked, i.e.:
     2501Finally, an important aspect for any complete threading system is preemption.
     2502As mentioned in section \ref{basics}, preemption introduces an extra degree of uncertainty, which enables users to have multiple threads interleave transparently, rather than having to cooperate among threads for proper scheduling and CPU distribution.
     2503Indeed, preemption is desirable because it adds a degree of isolation among threads.
In a fully cooperative system, any thread that runs a long loop can starve other threads, while in a preemptive system, starvation can still occur but it does not rely on every thread having to yield or block on a regular basis, which significantly reduces the programmer's burden.
     2505Obviously, preemption is not optimal for every workload.
However, any preemptive system can become a cooperative system by making the time slices extremely large.
     2507Therefore, \CFA uses a preemptive threading system.
     2508
Preemption in \CFA\footnote{Note that the implementation of preemption is strongly tied to the underlying threading system.
For this reason, only the Linux implementation is covered; \CFA does not run on Windows at the time of writing.} is based on kernel timers, which are used to run a discrete-event simulation.
     2511Every processor keeps track of the current time and registers an expiration time with the preemption system.
     2512When the preemption system receives a change in preemption, it inserts the time in a sorted order and sets a kernel timer for the closest one, effectively stepping through preemption events on each signal sent by the timer.
     2513These timers use the Linux signal {\tt SIGALRM}, which is delivered to the process rather than the kernel-thread.
     2514This results in an implementation problem, because when delivering signals to a process, the kernel can deliver the signal to any kernel thread for which the signal is not blocked, \ie:
    19482515\begin{quote}
    1949 A process-directed signal may be delivered to any one of the threads that does not currently have the signal blocked. If more than one of the threads has the signal unblocked, then the kernel chooses an arbitrary thread to which to deliver the signal.
     2516A process-directed signal may be delivered to any one of the threads that does not currently have the signal blocked.
     2517If more than one of the threads has the signal unblocked, then the kernel chooses an arbitrary thread to which to deliver the signal.
    19502518SIGNAL(7) - Linux Programmer's Manual
    19512519\end{quote}
    19522520For the sake of simplicity, and in order to prevent the case of having two threads receiving alarms simultaneously, \CFA programs block the {\tt SIGALRM} signal on every kernel thread except one.
    19532521
    1954 Now because of how involuntary context-switches are handled, the kernel thread handling {\tt SIGALRM} cannot also be a processor thread. Hence, involuntary context-switching is done by sending signal {\tt SIGUSR1} to the corresponding proces\-sor and having the thread yield from inside the signal handler. This approach effectively context-switches away from the signal handler back to the kernel and the signal handler frame is eventually unwound when the thread is scheduled again. As a result, a signal handler can start on one kernel thread and terminate on a second kernel thread (but the same user thread). It is important to note that signal handlers save and restore signal masks because user-thread migration can cause a signal mask to migrate from one kernel thread to another. This behaviour is only a problem if all kernel threads, among which a user thread can migrate, differ in terms of signal masks\footnote{Sadly, official POSIX documentation is silent on what distinguishes ``async-signal-safe'' functions from other functions.}. However, since the kernel thread handling preemption requires a different signal mask, executing user threads on the kernel-alarm thread can cause deadlocks. For this reason, the alarm thread is in a tight loop around a system call to \code{sigwaitinfo}, requiring very little CPU time for preemption. One final detail about the alarm thread is how to wake it when additional communication is required (e.g., on thread termination). This unblocking is also done using {\tt SIGALRM}, but sent through the \code{pthread_sigqueue}. Indeed, \code{sigwait} can differentiate signals sent from \code{pthread_sigqueue} from signals sent from alarms or the kernel.
     2522Now because of how involuntary context-switches are handled, the kernel thread handling {\tt SIGALRM} cannot also be a processor thread.
     2523Hence, involuntary context-switching is done by sending signal {\tt SIGUSR1} to the corresponding proces\-sor and having the thread yield from inside the signal handler.
     2524This approach effectively context-switches away from the signal handler back to the kernel and the signal handler frame is eventually unwound when the thread is scheduled again.
     2525As a result, a signal handler can start on one kernel thread and terminate on a second kernel thread (but the same user thread).
     2526It is important to note that signal handlers save and restore signal masks because user-thread migration can cause a signal mask to migrate from one kernel thread to another.
     2527This behaviour is only a problem if all kernel threads, among which a user thread can migrate, differ in terms of signal masks\footnote{Sadly, official POSIX documentation is silent on what distinguishes ``async-signal-safe'' functions from other functions.}.
     2528However, since the kernel thread handling preemption requires a different signal mask, executing user threads on the kernel-alarm thread can cause deadlocks.
     2529For this reason, the alarm thread is in a tight loop around a system call to @sigwaitinfo@, requiring very little CPU time for preemption.
     2530One final detail about the alarm thread is how to wake it when additional communication is required (\eg on thread termination).
     2531This unblocking is also done using {\tt SIGALRM}, but sent through the @pthread_sigqueue@.
     2532Indeed, @sigwait@ can differentiate signals sent from @pthread_sigqueue@ from signals sent from alarms or the kernel.
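
The following sketch shows one plausible shape of the alarm-thread loop under these constraints, using standard POSIX calls; error handling and the preemption bookkeeping are elided, and the names are illustrative rather than the actual runtime code.
\begin{cfa}
#include <signal.h>
#include <pthread.h>
#include <stddef.h>

static void * alarm_thread_main( void * arg ) {
	sigset_t mask;
	sigemptyset( &mask );
	sigaddset( &mask, SIGALRM );
	pthread_sigmask( SIG_BLOCK, &mask, NULL );  // keep SIGALRM blocked; wait for it synchronously

	for ( ;; ) {
		siginfo_t info;
		sigwaitinfo( &mask, &info );            // sleep until the timer fires or a wake-up is queued
		if ( info.si_code == SI_QUEUE ) break;  // wake-up sent through pthread_sigqueue (e.g., shutdown)
		// otherwise a timer expired: deliver SIGUSR1 to the processor whose time slice ended
		// pthread_kill( victim_kernel_thread, SIGUSR1 );
	}
	return NULL;
}
\end{cfa}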
    19552533
    19562534\subsection{Scheduler}
    1957 Finally, an aspect that was not mentioned yet is the scheduling algorithm. Currently, the \CFA scheduler uses a single ready queue for all processors, which is the simplest approach to scheduling. Further discussion on scheduling is present in section \ref{futur:sched}.
Finally, an aspect not yet mentioned is the scheduling algorithm.
Currently, the \CFA scheduler uses a single ready queue for all processors, which is the simplest approach to scheduling.
Further discussion on scheduling is presented in section \ref{futur:sched}.
    19582538
    19592539% ======================================================================
     
    19642544The following figure is the traditional illustration of a monitor (repeated from page~\pageref{fig:ClassicalMonitor} for convenience):
    19652545
    1966 \begin{figure}[H]
     2546\begin{figure}
    19672547\begin{center}
    19682548{\resizebox{0.4\textwidth}{!}{\input{monitor}}}
     
    19712551\end{figure}
    19722552
    1973 This picture has several components, the two most important being the entry queue and the AS-stack. The entry queue is an (almost) FIFO list where threads waiting to enter are parked, while the acceptor/signaller (AS) stack is a FILO list used for threads that have been signalled or otherwise marked as running next.
    1974 
    1975 For \CFA, this picture does not have support for blocking multiple monitors on a single condition. To support \textbf{bulk-acq} two changes to this picture are required. First, it is no longer helpful to attach the condition to \emph{a single} monitor. Secondly, the thread waiting on the condition has to be separated across multiple monitors, seen in figure \ref{fig:monitor_cfa}.
    1976 
    1977 \begin{figure}[H]
     2553This picture has several components, the two most important being the entry queue and the AS-stack.
     2554The entry queue is an (almost) FIFO list where threads waiting to enter are parked, while the acceptor/signaller (AS) stack is a FILO list used for threads that have been signalled or otherwise marked as running next.
     2555
     2556For \CFA, this picture does not have support for blocking multiple monitors on a single condition.
     2557To support \textbf{bulk-acq} two changes to this picture are required.
     2558First, it is no longer helpful to attach the condition to \emph{a single} monitor.
     2559Secondly, the thread waiting on the condition has to be separated across multiple monitors, seen in figure \ref{fig:monitor_cfa}.
     2560
     2561\begin{figure}
    19782562\begin{center}
    19792563{\resizebox{0.8\textwidth}{!}{\input{int_monitor}}}
     
    19832567\end{figure}
    19842568
    1985 This picture and the proper entry and leave algorithms (see listing \ref{lst:entry2}) is the fundamental implementation of internal scheduling. Note that when a thread is moved from the condition to the AS-stack, it is conceptually split into N pieces, where N is the number of monitors specified in the parameter list. The thread is woken up when all the pieces have popped from the AS-stacks and made active. In this picture, the threads are split into halves but this is only because there are two monitors. For a specific signalling operation every monitor needs a piece of thread on its AS-stack.
    1986 
    1987 \begin{figure}[b]
This picture and the proper entry and exit algorithms (see listing \ref{f:entry2}) are the fundamental implementation of internal scheduling.
     2570Note that when a thread is moved from the condition to the AS-stack, it is conceptually split into N pieces, where N is the number of monitors specified in the parameter list.
The thread is woken up when all the pieces have been popped from the AS-stacks and made active.
     2572In this picture, the threads are split into halves but this is only because there are two monitors.
     2573For a specific signalling operation every monitor needs a piece of thread on its AS-stack.
     2574
     2575\begin{figure}
    19882576\begin{multicols}{2}
    19892577Entry
    1990 \begin{pseudo}
     2578\begin{cfa}
    19912579if monitor is free
    19922580        enter
     
    19972585increment recursion
    19982586
    1999 \end{pseudo}
     2587\end{cfa}
    20002588\columnbreak
    20012589Exit
    2002 \begin{pseudo}
     2590\begin{cfa}
    20032591decrement recursion
    20042592if recursion == 0
     
    20102598        if entry queue not empty
    20112599                wake-up thread
    2012 \end{pseudo}
     2600\end{cfa}
    20132601\end{multicols}
    2014 \begin{pseudo}[caption={Entry and exit routine for monitors with internal scheduling},label={lst:entry2}]
    2015 \end{pseudo}
     2602\begin{cfa}[caption={Entry and exit routine for monitors with internal scheduling},label={f:entry2}]
     2603\end{cfa}
    20162604\end{figure}
    20172605
    2018 The solution discussed in \ref{intsched} can be seen in the exit routine of listing \ref{lst:entry2}. Basically, the solution boils down to having a separate data structure for the condition queue and the AS-stack, and unconditionally transferring ownership of the monitors but only unblocking the thread when the last monitor has transferred ownership. This solution is deadlock safe as well as preventing any potential barging. The data structures used for the AS-stack are reused extensively for external scheduling, but in the case of internal scheduling, the data is allocated using variable-length arrays on the call stack of the \code{wait} and \code{signal_block} routines.
    2019 
    2020 \begin{figure}[H]
     2606The solution discussed in \ref{intsched} can be seen in the exit routine of listing \ref{f:entry2}.
     2607Basically, the solution boils down to having a separate data structure for the condition queue and the AS-stack, and unconditionally transferring ownership of the monitors but only unblocking the thread when the last monitor has transferred ownership.
     2608This solution is deadlock safe as well as preventing any potential barging.
     2609The data structures used for the AS-stack are reused extensively for external scheduling, but in the case of internal scheduling, the data is allocated using variable-length arrays on the call stack of the @wait@ and @signal_block@ routines.
     2610
     2611\begin{figure}
    20212612\begin{center}
    20222613{\resizebox{0.8\textwidth}{!}{\input{monitor_structs.pstex_t}}}
     
    20262617\end{figure}
    20272618
    2028 Figure \ref{fig:structs} shows a high-level representation of these data structures. The main idea behind them is that, a thread cannot contain an arbitrary number of intrusive ``next'' pointers for linking onto monitors. The \code{condition node} is the data structure that is queued onto a condition variable and, when signalled, the condition queue is popped and each \code{condition criterion} is moved to the AS-stack. Once all the criteria have been popped from their respective AS-stacks, the thread is woken up, which is what is shown in listing \ref{lst:entry2}.
     2619Figure \ref{fig:structs} shows a high-level representation of these data structures.
     2620The main idea behind them is that, a thread cannot contain an arbitrary number of intrusive ``next'' pointers for linking onto monitors.
     2621The @condition node@ is the data structure that is queued onto a condition variable and, when signalled, the condition queue is popped and each @condition criterion@ is moved to the AS-stack.
     2622Once all the criteria have been popped from their respective AS-stacks, the thread is woken up, which is what is shown in listing \ref{f:entry2}.
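
A simplified, hypothetical declaration of these two structures is sketched below to make the figure concrete; the actual runtime declarations differ.
\begin{cfa}
struct monitor_desc;                            // opaque monitor descriptor (assumed)
struct thread_desc;                             // opaque thread descriptor (assumed)

struct condition_criterion {
	struct monitor_desc * target;               // monitor this criterion must re-acquire
	struct condition_node * owner;              // node this criterion belongs to
	int ready;                                  // popped from its AS-stack yet?
};

struct condition_node {
	struct thread_desc * waiting_thread;        // thread parked on the condition
	int criterion_count;                        // number of monitors being waited on
	struct condition_criterion * criteria;      // array allocated on the waiting call stack
};
\end{cfa}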
    20292623
    20302624% ======================================================================
     
    20332627% ======================================================================
    20342628% ======================================================================
    2035 Similarly to internal scheduling, external scheduling for multiple monitors relies on the idea that waiting-thread queues are no longer specific to a single monitor, as mentioned in section \ref{extsched}. For internal scheduling, these queues are part of condition variables, which are still unique for a given scheduling operation (i.e., no signal statement uses multiple conditions). However, in the case of external scheduling, there is no equivalent object which is associated with \code{waitfor} statements. This absence means the queues holding the waiting threads must be stored inside at least one of the monitors that is acquired. These monitors being the only objects that have sufficient lifetime and are available on both sides of the \code{waitfor} statement. This requires an algorithm to choose which monitor holds the relevant queue. It is also important that said algorithm be independent of the order in which users list parameters. The proposed algorithm is to fall back on monitor lock ordering (sorting by address) and specify that the monitor that is acquired first is the one with the relevant waiting queue. This assumes that the lock acquiring order is static for the lifetime of all concerned objects but that is a reasonable constraint.
     2629Similarly to internal scheduling, external scheduling for multiple monitors relies on the idea that waiting-thread queues are no longer specific to a single monitor, as mentioned in section \ref{extsched}.
     2630For internal scheduling, these queues are part of condition variables, which are still unique for a given scheduling operation (\ie no signal statement uses multiple conditions).
     2631However, in the case of external scheduling, there is no equivalent object which is associated with @waitfor@ statements.
     2632This absence means the queues holding the waiting threads must be stored inside at least one of the monitors that is acquired.
     2633These monitors being the only objects that have sufficient lifetime and are available on both sides of the @waitfor@ statement.
     2634This requires an algorithm to choose which monitor holds the relevant queue.
     2635It is also important that said algorithm be independent of the order in which users list parameters.
     2636The proposed algorithm is to fall back on monitor lock ordering (sorting by address) and specify that the monitor that is acquired first is the one with the relevant waiting queue.
     2637This assumes that the lock acquiring order is static for the lifetime of all concerned objects but that is a reasonable constraint.
    20362638
    20372639This algorithm choice has two consequences:
    20382640\begin{itemize}
    2039         \item The queue of the monitor with the lowest address is no longer a true FIFO queue because threads can be moved to the front of the queue. These queues need to contain a set of monitors for each of the waiting threads. Therefore, another thread whose set contains the same lowest address monitor but different lower priority monitors may arrive first but enter the critical section after a thread with the correct pairing.
    2040         \item The queue of the lowest priority monitor is both required and potentially unused. Indeed, since it is not known at compile time which monitor is the monitor which has the lowest address, every monitor needs to have the correct queues even though it is possible that some queues go unused for the entire duration of the program, for example if a monitor is only used in a specific pair.
     2641        \item The queue of the monitor with the lowest address is no longer a true FIFO queue because threads can be moved to the front of the queue.
     2642These queues need to contain a set of monitors for each of the waiting threads.
     2643Therefore, another thread whose set contains the same lowest address monitor but different lower priority monitors may arrive first but enter the critical section after a thread with the correct pairing.
     2644        \item The queue of the lowest priority monitor is both required and potentially unused.
Indeed, since it is not known at compile time which monitor has the lowest address, every monitor needs to have the correct queues even though it is possible that some queues go unused for the entire duration of the program, for example if a monitor is only used in a specific pair.
    20412646\end{itemize}
    20422647Therefore, the following modifications need to be made to support external scheduling:
    20432648\begin{itemize}
    2044         \item The threads waiting on the entry queue need to keep track of which routine they are trying to enter, and using which set of monitors. The \code{mutex} routine already has all the required information on its stack, so the thread only needs to keep a pointer to that information.
    2045         \item The monitors need to keep a mask of acceptable routines. This mask contains for each acceptable routine, a routine pointer and an array of monitors to go with it. It also needs storage to keep track of which routine was accepted. Since this information is not specific to any monitor, the monitors actually contain a pointer to an integer on the stack of the waiting thread. Note that if a thread has acquired two monitors but executes a \code{waitfor} with only one monitor as a parameter, setting the mask of acceptable routines to both monitors will not cause any problems since the extra monitor will not change ownership regardless. This becomes relevant when \code{when} clauses affect the number of monitors passed to a \code{waitfor} statement.
    2046         \item The entry/exit routines need to be updated as shown in listing \ref{lst:entry3}.
     2649        \item The threads waiting on the entry queue need to keep track of which routine they are trying to enter, and using which set of monitors.
     2650The @mutex@ routine already has all the required information on its stack, so the thread only needs to keep a pointer to that information.
     2651        \item The monitors need to keep a mask of acceptable routines.
This mask contains, for each acceptable routine, a routine pointer and an array of monitors to go with it (a hypothetical sketch of such a mask is shown after this list).
     2653It also needs storage to keep track of which routine was accepted.
     2654Since this information is not specific to any monitor, the monitors actually contain a pointer to an integer on the stack of the waiting thread.
     2655Note that if a thread has acquired two monitors but executes a @waitfor@ with only one monitor as a parameter, setting the mask of acceptable routines to both monitors will not cause any problems since the extra monitor will not change ownership regardless.
     2656This becomes relevant when @when@ clauses affect the number of monitors passed to a @waitfor@ statement.
     2657        \item The entry/exit routines need to be updated as shown in listing \ref{f:entry3}.
    20472658\end{itemize}
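
The acceptable-routine mask mentioned above can be pictured with the following hypothetical declarations; the type and field names are illustrative only.
\begin{cfa}
struct monitor_desc;                            // opaque monitor descriptor (assumed)

struct acceptable {
	void (* func)( void );                      // routine the waitfor statement accepts
	struct monitor_desc ** monitors;            // monitor set the routine must hold
	short count;                                // size of that monitor set
};

struct monitor_mask {
	struct acceptable * accepted;               // array of acceptable routines
	short size;                                 // number of entries in the array
	short * accepted_index;                     // lives on the waiting thread's stack,
	                                            // records which routine was accepted
};
\end{cfa}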
    20482659
    20492660\subsection{External Scheduling - Destructors}
    2050 Finally, to support the ordering inversion of destructors, the code generation needs to be modified to use a special entry routine. This routine is needed because of the storage requirements of the call order inversion. Indeed, when waiting for the destructors, storage is needed for the waiting context and the lifetime of said storage needs to outlive the waiting operation it is needed for. For regular \code{waitfor} statements, the call stack of the routine itself matches this requirement but it is no longer the case when waiting for the destructor since it is pushed on to the AS-stack for later. The \code{waitfor} semantics can then be adjusted correspondingly, as seen in listing \ref{lst:entry-dtor}
     2661Finally, to support the ordering inversion of destructors, the code generation needs to be modified to use a special entry routine.
     2662This routine is needed because of the storage requirements of the call order inversion.
     2663Indeed, when waiting for the destructors, storage is needed for the waiting context and the lifetime of said storage needs to outlive the waiting operation it is needed for.
     2664For regular @waitfor@ statements, the call stack of the routine itself matches this requirement but it is no longer the case when waiting for the destructor since it is pushed on to the AS-stack for later.
The @waitfor@ semantics can then be adjusted correspondingly, as seen in listing \ref{f:entry-dtor}.
    20512666
    20522667\begin{figure}
    20532668\begin{multicols}{2}
    20542669Entry
    2055 \begin{pseudo}
     2670\begin{cfa}
    20562671if monitor is free
    20572672        enter
     
    20642679        block
    20652680increment recursion
    2066 \end{pseudo}
     2681\end{cfa}
    20672682\columnbreak
    20682683Exit
    2069 \begin{pseudo}
     2684\begin{cfa}
    20702685decrement recursion
    20712686if recursion == 0
     
    20802695                wake-up thread
    20812696        endif
    2082 \end{pseudo}
     2697\end{cfa}
    20832698\end{multicols}
    2084 \begin{pseudo}[caption={Entry and exit routine for monitors with internal scheduling and external scheduling},label={lst:entry3}]
    2085 \end{pseudo}
     2699\begin{cfa}[caption={Entry and exit routine for monitors with internal scheduling and external scheduling},label={f:entry3}]
     2700\end{cfa}
    20862701\end{figure}
    20872702
     
    20892704\begin{multicols}{2}
    20902705Destructor Entry
    2091 \begin{pseudo}
     2706\begin{cfa}
    20922707if monitor is free
    20932708        enter
     
    21032718        wait
    21042719increment recursion
    2105 \end{pseudo}
     2720\end{cfa}
    21062721\columnbreak
    21072722Waitfor
    2108 \begin{pseudo}
     2723\begin{cfa}
    21092724if matching thread is already there
    21102725        if found destructor
     
    21262741block
    21272742return
    2128 \end{pseudo}
     2743\end{cfa}
    21292744\end{multicols}
    2130 \begin{pseudo}[caption={Pseudo code for the \code{waitfor} routine and the \code{mutex} entry routine for destructors},label={lst:entry-dtor}]
    2131 \end{pseudo}
     2745\begin{cfa}[caption={Pseudo code for the \protect\lstinline|waitfor| routine and the \protect\lstinline|mutex| entry routine for destructors},label={f:entry-dtor}]
     2746\end{cfa}
    21322747\end{figure}
    21332748
     
    21412756
    21422757\section{Threads As Monitors}
    2143 As it was subtly alluded in section \ref{threads}, \code{thread}s in \CFA are in fact monitors, which means that all monitor features are available when using threads. For example, here is a very simple two thread pipeline that could be used for a simulator of a game engine:
    2144 \begin{figure}[H]
    2145 \begin{cfacode}[caption={Toy simulator using \code{thread}s and \code{monitor}s.},label={lst:engine-v1}]
As subtly alluded to in section \ref{threads}, @thread@s in \CFA are in fact monitors, which means that all monitor features are available when using threads.
     2759For example, here is a very simple two thread pipeline that could be used for a simulator of a game engine:
     2760\begin{figure}
     2761\begin{cfa}[caption={Toy simulator using \protect\lstinline|thread|s and \protect\lstinline|monitor|s.},label={f:engine-v1}]
    21462762// Visualization declaration
    21472763thread Renderer {} renderer;
     
    21702786        }
    21712787}
    2172 \end{cfacode}
     2788\end{cfa}
    21732789\end{figure}
    2174 One of the obvious complaints of the previous code snippet (other than its toy-like simplicity) is that it does not handle exit conditions and just goes on forever. Luckily, the monitor semantics can also be used to clearly enforce a shutdown order in a concise manner:
    2175 \begin{figure}[H]
    2176 \begin{cfacode}[caption={Same toy simulator with proper termination condition.},label={lst:engine-v2}]
One of the obvious complaints about the previous code snippet (other than its toy-like simplicity) is that it does not handle exit conditions and just goes on forever.
     2791Luckily, the monitor semantics can also be used to clearly enforce a shutdown order in a concise manner:
     2792\begin{figure}
     2793\begin{cfa}[caption={Same toy simulator with proper termination condition.},label={f:engine-v2}]
    21772794// Visualization declaration
    21782795thread Renderer {} renderer;
     
    22122829// Call destructor for simulator once simulator finishes
    22132830// Call destructor for renderer to signify shutdown
    2214 \end{cfacode}
     2831\end{cfa}
    22152832\end{figure}
    22162833
    22172834\section{Fibers \& Threads}
    2218 As mentioned in section \ref{preemption}, \CFA uses preemptive threads by default but can use fibers on demand. Currently, using fibers is done by adding the following line of code to the program~:
    2219 \begin{cfacode}
     2835As mentioned in section \ref{preemption}, \CFA uses preemptive threads by default but can use fibers on demand.
Currently, using fibers is done by adding the following routine to the program:
     2837\begin{cfa}
    22202838unsigned int default_preemption() {
    22212839        return 0;
    22222840}
    2223 \end{cfacode}
    2224 This function is called by the kernel to fetch the default preemption rate, where 0 signifies an infinite time-slice, i.e., no preemption. However, once clusters are fully implemented, it will be possible to create fibers and \textbf{uthread} in the same system, as in listing \ref{lst:fiber-uthread}
     2841\end{cfa}
     2842This function is called by the kernel to fetch the default preemption rate, where 0 signifies an infinite time-slice, \ie no preemption.
However, once clusters are fully implemented, it will be possible to create fibers and \textbf{uthread}s in the same system, as in listing \ref{f:fiber-uthread}.
    22252844\begin{figure}
    2226 \begin{cfacode}[caption={Using fibers and \textbf{uthread} side-by-side in \CFA},label={lst:fiber-uthread}]
    2227 //Cluster forward declaration
     2845\lstset{language=CFA,deletedelim=**[is][]{`}{`}}
     2846\begin{cfa}[caption={Using fibers and \textbf{uthread} side-by-side in \CFA},label={f:fiber-uthread}]
     2847// Cluster forward declaration
    22282848struct cluster;
    22292849
    2230 //Processor forward declaration
     2850// Processor forward declaration
    22312851struct processor;
    22322852
    2233 //Construct clusters with a preemption rate
     2853// Construct clusters with a preemption rate
    22342854void ?{}(cluster& this, unsigned int rate);
    2235 //Construct processor and add it to cluster
     2855// Construct processor and add it to cluster
    22362856void ?{}(processor& this, cluster& cluster);
    2237 //Construct thread and schedule it on cluster
     2857// Construct thread and schedule it on cluster
    22382858void ?{}(thread& this, cluster& cluster);
    22392859
    2240 //Declare two clusters
    2241 cluster thread_cluster = { 10`ms };                     //Preempt every 10 ms
    2242 cluster fibers_cluster = { 0 };                         //Never preempt
    2243 
    2244 //Construct 4 processors
     2860// Declare two clusters
     2861cluster thread_cluster = { 10`ms };                     // Preempt every 10 ms
     2862cluster fibers_cluster = { 0 };                         // Never preempt
     2863
     2864// Construct 4 processors
    22452865processor processors[4] = {
    22462866        //2 for the thread cluster
     
    22522872};
    22532873
    2254 //Declares thread
     2874// Declares thread
    22552875thread UThread {};
    22562876void ?{}(UThread& this) {
    2257         //Construct underlying thread to automatically
    2258         //be scheduled on the thread cluster
     2877        // Construct underlying thread to automatically
     2878        // be scheduled on the thread cluster
    22592879        (this){ thread_cluster }
    22602880}
     
    22622882void main(UThread & this);
    22632883
    2264 //Declares fibers
     2884// Declares fibers
    22652885thread Fiber {};
    22662886void ?{}(Fiber& this) {
    2267         //Construct underlying thread to automatically
    2268         //be scheduled on the fiber cluster
     2887        // Construct underlying thread to automatically
     2888        // be scheduled on the fiber cluster
    22692889        (this.__thread){ fibers_cluster }
    22702890}
    22712891
    22722892void main(Fiber & this);
    2273 \end{cfacode}
     2893\end{cfa}
    22742894\end{figure}
    22752895
     
    22812901% ======================================================================
    22822902\section{Machine Setup}
    2283 Table \ref{tab:machine} shows the characteristics of the machine used to run the benchmarks. All tests were made on this machine.
    2284 \begin{table}[H]
     2903Table \ref{tab:machine} shows the characteristics of the machine used to run the benchmarks.
     2904All tests were made on this machine.
     2905\begin{table}
    22852906\begin{center}
    22862907\begin{tabular}{| l | r | l | r |}
     
    23142935
    23152936\section{Micro Benchmarks}
    2316 All benchmarks are run using the same harness to produce the results, seen as the \code{BENCH()} macro in the following examples. This macro uses the following logic to benchmark the code:
    2317 \begin{pseudo}
     2937All benchmarks are run using the same harness to produce the results, seen as the @BENCH()@ macro in the following examples.
     2938This macro uses the following logic to benchmark the code:
     2939\begin{cfa}
    23182940#define BENCH(run, result) \
    23192941        before = gettime(); \
     
    23212943        after  = gettime(); \
    23222944        result = (after - before) / N;
    2323 \end{pseudo}
    2324 The method used to get time is \code{clock_gettime(CLOCK_THREAD_CPUTIME_ID);}. Each benchmark is using many iterations of a simple call to measure the cost of the call. The specific number of iterations depends on the specific benchmark.
     2945\end{cfa}
     2946The method used to get time is @clock_gettime(CLOCK_THREAD_CPUTIME_ID);@.
Each benchmark uses many iterations of a simple call to measure the cost of the call.
     2948The specific number of iterations depends on the specific benchmark.
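
One plausible implementation of the @gettime()@ helper used by the macro is sketched below, assuming POSIX @clock_gettime@ and a nanosecond result; the exact harness code may differ.
\begin{cfa}
#include <time.h>
#include <stdint.h>

static inline uint64_t gettime( void ) {
	struct timespec ts;
	clock_gettime( CLOCK_THREAD_CPUTIME_ID, &ts );     // per-thread CPU time
	return (uint64_t)ts.tv_sec * 1000000000ULL + (uint64_t)ts.tv_nsec;
}
\end{cfa}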
    23252949
    23262950\subsection{Context-Switching}
    2327 The first interesting benchmark is to measure how long context-switches take. The simplest approach to do this is to yield on a thread, which executes a 2-step context switch. Yielding causes the thread to context-switch to the scheduler and back, more precisely: from the \textbf{uthread} to the \textbf{kthread} then from the \textbf{kthread} back to the same \textbf{uthread} (or a different one in the general case). In order to make the comparison fair, coroutines also execute a 2-step context-switch by resuming another coroutine which does nothing but suspending in a tight loop, which is a resume/suspend cycle instead of a yield. Listing \ref{lst:ctx-switch} shows the code for coroutines and threads with the results in table \ref{tab:ctx-switch}. All omitted tests are functionally identical to one of these tests. The difference between coroutines and threads can be attributed to the cost of scheduling.
     2951The first interesting benchmark is to measure how long context-switches take.
     2952The simplest approach to do this is to yield on a thread, which executes a 2-step context switch.
     2953Yielding causes the thread to context-switch to the scheduler and back, more precisely: from the \textbf{uthread} to the \textbf{kthread} then from the \textbf{kthread} back to the same \textbf{uthread} (or a different one in the general case).
     2954In order to make the comparison fair, coroutines also execute a 2-step context-switch by resuming another coroutine which does nothing but suspending in a tight loop, which is a resume/suspend cycle instead of a yield.
     2955Figure~\ref{f:ctx-switch} shows the code for coroutines and threads with the results in table \ref{tab:ctx-switch}.
     2956All omitted tests are functionally identical to one of these tests.
     2957The difference between coroutines and threads can be attributed to the cost of scheduling.
    23282958\begin{figure}
    23292959\begin{multicols}{2}
    23302960\CFA Coroutines
    2331 \begin{cfacode}
     2961\begin{cfa}
    23322962coroutine GreatSuspender {};
    23332963void main(GreatSuspender& this) {
     
    23452975        printf("%llu\n", result);
    23462976}
    2347 \end{cfacode}
     2977\end{cfa}
    23482978\columnbreak
    23492979\CFA Threads
    2350 \begin{cfacode}
     2980\begin{cfa}
    23512981
    23522982
     
    23642994        printf("%llu\n", result);
    23652995}
    2366 \end{cfacode}
     2996\end{cfa}
    23672997\end{multicols}
    2368 \begin{cfacode}[caption={\CFA benchmark code used to measure context-switches for coroutines and threads.},label={lst:ctx-switch}]
    2369 \end{cfacode}
     2998\begin{cfa}[caption={\CFA benchmark code used to measure context-switches for coroutines and threads.},label={f:ctx-switch}]
     2999\end{cfa}
    23703000\end{figure}
    23713001
     
    23863016\end{tabular}
    23873017\end{center}
    2388 \caption{Context Switch comparison. All numbers are in nanoseconds(\si{\nano\second})}
     3018\caption{Context Switch comparison.
All numbers are in nanoseconds (\si{\nano\second}).}
    23893020\label{tab:ctx-switch}
    23903021\end{table}
    23913022
    23923023\subsection{Mutual-Exclusion}
    2393 The next interesting benchmark is to measure the overhead to enter/leave a critical-section. For monitors, the simplest approach is to measure how long it takes to enter and leave a monitor routine. Listing \ref{lst:mutex} shows the code for \CFA. To put the results in context, the cost of entering a non-inline function and the cost of acquiring and releasing a \code{pthread_mutex} lock is also measured. The results can be shown in table \ref{tab:mutex}.
     3024The next interesting benchmark is to measure the overhead to enter/leave a critical-section.
     3025For monitors, the simplest approach is to measure how long it takes to enter and leave a monitor routine.
     3026Figure~\ref{f:mutex} shows the code for \CFA.
     3027To put the results in context, the cost of entering a non-inline function and the cost of acquiring and releasing a @pthread_mutex@ lock is also measured.
The results are shown in table \ref{tab:mutex}.
    23943029
    23953030\begin{figure}
    2396 \begin{cfacode}[caption={\CFA benchmark code used to measure mutex routines.},label={lst:mutex}]
     3031\begin{cfa}[caption={\CFA benchmark code used to measure mutex routines.},label={f:mutex}]
    23973032monitor M {};
    23983033void __attribute__((noinline)) call( M & mutex m /*, m2, m3, m4*/ ) {}
     
    24083043        printf("%llu\n", result);
    24093044}
    2410 \end{cfacode}
     3045\end{cfa}
    24113046\end{figure}
    24123047
     
    24203055FetchAdd + FetchSub                             & 26            & 26            & 0    \\
    24213056Pthreads Mutex Lock                             & 31            & 31.86 & 0.99 \\
    2422 \uC \code{monitor} member routine               & 30            & 30            & 0    \\
    2423 \CFA \code{mutex} routine, 1 argument   & 41            & 41.57 & 0.9  \\
    2424 \CFA \code{mutex} routine, 2 argument   & 76            & 76.96 & 1.57 \\
    2425 \CFA \code{mutex} routine, 4 argument   & 145           & 146.68        & 3.85 \\
     3057\uC @monitor@ member routine            & 30            & 30            & 0    \\
     3058\CFA @mutex@ routine, 1 argument        & 41            & 41.57 & 0.9  \\
     3059\CFA @mutex@ routine, 2 argument        & 76            & 76.96 & 1.57 \\
     3060\CFA @mutex@ routine, 4 argument        & 145           & 146.68        & 3.85 \\
    24263061Java synchronized routine                       & 27            & 28.57 & 2.6  \\
    24273062\hline
    24283063\end{tabular}
    24293064\end{center}
    2430 \caption{Mutex routine comparison. All numbers are in nanoseconds(\si{\nano\second})}
     3065\caption{Mutex routine comparison.
All numbers are in nanoseconds (\si{\nano\second}).}
    24313067\label{tab:mutex}
    24323068\end{table}
    24333069
    24343070\subsection{Internal Scheduling}
    2435 The internal-scheduling benchmark measures the cost of waiting on and signalling a condition variable. Listing \ref{lst:int-sched} shows the code for \CFA, with results table \ref{tab:int-sched}. As with all other benchmarks, all omitted tests are functionally identical to one of these tests.
     3071The internal-scheduling benchmark measures the cost of waiting on and signalling a condition variable.
Figure~\ref{f:int-sched} shows the code for \CFA, with results in table \ref{tab:int-sched}.
     3073As with all other benchmarks, all omitted tests are functionally identical to one of these tests.
    24363074
    24373075\begin{figure}
    2438 \begin{cfacode}[caption={Benchmark code for internal scheduling},label={lst:int-sched}]
     3076\begin{cfa}[caption={Benchmark code for internal scheduling},label={f:int-sched}]
    24393077volatile int go = 0;
    24403078condition c;
     
    24663104        return do_wait(m1);
    24673105}
    2468 \end{cfacode}
     3106\end{cfa}
    24693107\end{figure}
    24703108
     
    24763114\hline
    24773115Pthreads Condition Variable                     & 5902.5        & 6093.29       & 714.78 \\
    2478 \uC \code{signal}                                       & 322           & 323   & 3.36   \\
    2479 \CFA \code{signal}, 1 \code{monitor}    & 352.5 & 353.11        & 3.66   \\
    2480 \CFA \code{signal}, 2 \code{monitor}    & 430           & 430.29        & 8.97   \\
    2481 \CFA \code{signal}, 4 \code{monitor}    & 594.5 & 606.57        & 18.33  \\
    2482 Java \code{notify}                              & 13831.5       & 15698.21      & 4782.3 \\
     3116\uC @signal@                                    & 322           & 323   & 3.36   \\
     3117\CFA @signal@, 1 @monitor@      & 352.5 & 353.11        & 3.66   \\
     3118\CFA @signal@, 2 @monitor@      & 430           & 430.29        & 8.97   \\
     3119\CFA @signal@, 4 @monitor@      & 594.5 & 606.57        & 18.33  \\
     3120Java @notify@                           & 13831.5       & 15698.21      & 4782.3 \\
    24833121\hline
    24843122\end{tabular}
    24853123\end{center}
    2486 \caption{Internal scheduling comparison. All numbers are in nanoseconds(\si{\nano\second})}
     3124\caption{Internal scheduling comparison.
      3125All numbers are in nanoseconds (\si{\nano\second}).}
    24873126\label{tab:int-sched}
    24883127\end{table}
    24893128
    24903129\subsection{External Scheduling}
    2491 The Internal scheduling benchmark measures the cost of the \code{waitfor} statement (\code{_Accept} in \uC). Listing \ref{lst:ext-sched} shows the code for \CFA, with results in table \ref{tab:ext-sched}. As with all other benchmarks, all omitted tests are functionally identical to one of these tests.
      3130The external-scheduling benchmark measures the cost of the @waitfor@ statement (@_Accept@ in \uC).
      3131Figure~\ref{f:ext-sched} shows the code for \CFA, with results in Table~\ref{tab:ext-sched}.
     3132As with all other benchmarks, all omitted tests are functionally identical to one of these tests.
    24923133
    24933134\begin{figure}
    2494 \begin{cfacode}[caption={Benchmark code for external scheduling},label={lst:ext-sched}]
     3135\begin{cfa}[caption={Benchmark code for external scheduling},label={f:ext-sched}]
    24953136volatile int go = 0;
    24963137monitor M {};
     
    25213162        return do_wait(m1);
    25223163}
    2523 \end{cfacode}
     3164\end{cfa}
    25243165\end{figure}
    25253166
     
    25303171\multicolumn{1}{c |}{} & \multicolumn{1}{c |}{ Median } &\multicolumn{1}{c |}{ Average } & \multicolumn{1}{c |}{ Standard Deviation} \\
    25313172\hline
    2532 \uC \code{Accept}                                       & 350           & 350.61        & 3.11  \\
    2533 \CFA \code{waitfor}, 1 \code{monitor}   & 358.5 & 358.36        & 3.82  \\
    2534 \CFA \code{waitfor}, 2 \code{monitor}   & 422           & 426.79        & 7.95  \\
    2535 \CFA \code{waitfor}, 4 \code{monitor}   & 579.5 & 585.46        & 11.25 \\
     3173\uC @Accept@                                    & 350           & 350.61        & 3.11  \\
     3174\CFA @waitfor@, 1 @monitor@     & 358.5 & 358.36        & 3.82  \\
     3175\CFA @waitfor@, 2 @monitor@     & 422           & 426.79        & 7.95  \\
     3176\CFA @waitfor@, 4 @monitor@     & 579.5 & 585.46        & 11.25 \\
    25363177\hline
    25373178\end{tabular}
    25383179\end{center}
    2539 \caption{External scheduling comparison. All numbers are in nanoseconds(\si{\nano\second})}
     3180\caption{External scheduling comparison.
      3181All numbers are in nanoseconds (\si{\nano\second}).}
    25403182\label{tab:ext-sched}
    25413183\end{table}
    25423184
     3185
    25433186\subsection{Object Creation}
    2544 Finally, the last benchmark measures the cost of creation for concurrent objects. Listing \ref{lst:creation} shows the code for \texttt{pthread}s and \CFA threads, with results shown in table \ref{tab:creation}. As with all other benchmarks, all omitted tests are functionally identical to one of these tests. The only note here is that the call stacks of \CFA coroutines are lazily created, therefore without priming the coroutine, the creation cost is very low.
      3187The final benchmark measures the cost of creating concurrent objects.
      3188Figure~\ref{f:creation} shows the code for @pthread@s and \CFA threads, with results shown in Table~\ref{tab:creation}.
     3189As with all other benchmarks, all omitted tests are functionally identical to one of these tests.
      3190The only note here is that the call stacks of \CFA coroutines are created lazily; therefore, without priming the coroutine, the creation cost is very low.
    25453191
    25463192\begin{figure}
    25473193\begin{center}
    2548 \texttt{pthread}
    2549 \begin{ccode}
     3194@pthread@
     3195\begin{cfa}
    25503196int main() {
    25513197        BENCH(
     
    25663212        printf("%llu\n", result);
    25673213}
    2568 \end{ccode}
     3214\end{cfa}
    25693215
    25703216
    25713217
    25723218\CFA Threads
    2573 \begin{cfacode}
     3219\begin{cfa}
    25743220int main() {
    25753221        BENCH(
     
    25813227        printf("%llu\n", result);
    25823228}
    2583 \end{cfacode}
     3229\end{cfa}
    25843230\end{center}
    2585 \begin{cfacode}[caption={Benchmark code for \texttt{pthread}s and \CFA to measure object creation},label={lst:creation}]
    2586 \end{cfacode}
     3231\caption{Benchmark code for \protect\lstinline|pthread|s and \CFA to measure object creation}
     3232\label{f:creation}
    25873233\end{figure}
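The @pthread@ rows in Table~\ref{tab:creation} correspond to repeatedly creating and joining a thread with an empty body; a minimal, illustrative sketch of such a loop (not the paper's harness, and with a hypothetical iteration count) is:
\begin{cfa}
#include <pthread.h>
#include <stdio.h>
#include <time.h>
#define N 10000
void * foo( void * arg ) { return NULL; }		// empty thread body

int main() {
	struct timespec s, e;
	clock_gettime( CLOCK_MONOTONIC, &s );
	for ( int i = 0; i < N; i += 1 ) {		// create and join an empty thread each iteration
		pthread_t t;
		pthread_create( &t, NULL, foo, NULL );
		pthread_join( t, NULL );
	}
	clock_gettime( CLOCK_MONOTONIC, &e );
	long long ns = (e.tv_sec - s.tv_sec) * 1000000000LL + (e.tv_nsec - s.tv_nsec);
	printf( "%lld ns per create/join\n", ns / N );
}
\end{cfa}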
    25883234
     
    26043250\end{tabular}
    26053251\end{center}
    2606 \caption{Creation comparison. All numbers are in nanoseconds(\si{\nano\second}).}
     3252\caption{Creation comparison.
      3253All numbers are in nanoseconds (\si{\nano\second}).}
    26073254\label{tab:creation}
    26083255\end{table}
     
    26113258
    26123259\section{Conclusion}
    2613 This paper has achieved a minimal concurrency \textbf{api} that is simple, efficient and usable as the basis for higher-level features. The approach presented is based on a lightweight thread-system for parallelism, which sits on top of clusters of processors. This M:N model is judged to be both more efficient and allow more flexibility for users. Furthermore, this document introduces monitors as the main concurrency tool for users. This paper also offers a novel approach allowing multiple monitors to be accessed simultaneously without running into the Nested Monitor Problem~\cite{Lister77}. It also offers a full implementation of the concurrency runtime written entirely in \CFA, effectively the largest \CFA code base to date.
     3260This paper has achieved a minimal concurrency \textbf{api} that is simple, efficient and usable as the basis for higher-level features.
     3261The approach presented is based on a lightweight thread-system for parallelism, which sits on top of clusters of processors.
      3262This M:N model is judged to be both more efficient and more flexible for users.
      3263Furthermore, this paper introduces monitors as the main concurrency tool for users.
     3264This paper also offers a novel approach allowing multiple monitors to be accessed simultaneously without running into the Nested Monitor Problem~\cite{Lister77}.
     3265It also offers a full implementation of the concurrency runtime written entirely in \CFA, effectively the largest \CFA code base to date.
    26143266
    26153267
     
    26213273
    26223274\subsection{Performance} \label{futur:perf}
    2623 This paper presents a first implementation of the \CFA concurrency runtime. Therefore, there is still significant work to improve performance. Many of the data structures and algorithms may change in the future to more efficient versions. For example, the number of monitors in a single \textbf{bulk-acq} is only bound by the stack size, this is probably unnecessarily generous. It may be possible that limiting the number helps increase performance. However, it is not obvious that the benefit would be significant.
     3275This paper presents a first implementation of the \CFA concurrency runtime.
     3276Therefore, there is still significant work to improve performance.
     3277Many of the data structures and algorithms may change in the future to more efficient versions.
      3278For example, the number of monitors in a single \textbf{bulk-acq} is bounded only by the stack size, which is probably unnecessarily generous.
     3279It may be possible that limiting the number helps increase performance.
     3280However, it is not obvious that the benefit would be significant.
    26243281
    26253282\subsection{Flexible Scheduling} \label{futur:sched}
    2626 An important part of concurrency is scheduling. Different scheduling algorithms can affect performance (both in terms of average and variation). However, no single scheduler is optimal for all workloads and therefore there is value in being able to change the scheduler for given programs. One solution is to offer various tweaking options to users, allowing the scheduler to be adjusted to the requirements of the workload. However, in order to be truly flexible, it would be interesting to allow users to add arbitrary data and arbitrary scheduling algorithms. For example, a web server could attach Type-of-Service information to threads and have a ``ToS aware'' scheduling algorithm tailored to this specific web server. This path of flexible schedulers will be explored for \CFA.
     3283An important part of concurrency is scheduling.
     3284Different scheduling algorithms can affect performance (both in terms of average and variation).
     3285However, no single scheduler is optimal for all workloads and therefore there is value in being able to change the scheduler for given programs.
     3286One solution is to offer various tweaking options to users, allowing the scheduler to be adjusted to the requirements of the workload.
     3287However, in order to be truly flexible, it would be interesting to allow users to add arbitrary data and arbitrary scheduling algorithms.
     3288For example, a web server could attach Type-of-Service information to threads and have a ``ToS aware'' scheduling algorithm tailored to this specific web server.
     3289This path of flexible schedulers will be explored for \CFA.
    26273290
    26283291\subsection{Non-Blocking I/O} \label{futur:nbio}
    2629 While most of the parallelism tools are aimed at data parallelism and control-flow parallelism, many modern workloads are not bound on computation but on IO operations, a common case being web servers and XaaS (anything as a service). These types of workloads often require significant engineering around amortizing costs of blocking IO operations. At its core, non-blocking I/O is an operating system level feature that allows queuing IO operations (e.g., network operations) and registering for notifications instead of waiting for requests to complete. In this context, the role of the language makes Non-Blocking IO easily available and with low overhead. The current trend is to use asynchronous programming using tools like callbacks and/or futures and promises, which can be seen in frameworks like Node.js~\cite{NodeJs} for JavaScript, Spring MVC~\cite{SpringMVC} for Java and Django~\cite{Django} for Python. However, while these are valid solutions, they lead to code that is harder to read and maintain because it is much less linear.
      3292While most of the parallelism tools are aimed at data parallelism and control-flow parallelism, many modern workloads are bound not by computation but by IO operations, common cases being web servers and XaaS (anything as a service).
     3293These types of workloads often require significant engineering around amortizing costs of blocking IO operations.
     3294At its core, non-blocking I/O is an operating system level feature that allows queuing IO operations (\eg network operations) and registering for notifications instead of waiting for requests to complete.
      3295In this context, the role of the language is to make non-blocking IO easily available with low overhead.
      3296The current trend is asynchronous programming with tools like callbacks and/or futures and promises, which can be seen in frameworks like Node.js~\cite{NodeJs} for JavaScript, Spring MVC~\cite{SpringMVC} for Java and Django~\cite{Django} for Python.
     3297However, while these are valid solutions, they lead to code that is harder to read and maintain because it is much less linear.
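As a rough illustration of the underlying operating-system mechanism (and not a proposed \CFA interface), the Linux @epoll@ facility registers a descriptor once and then blocks only for readiness notifications; in this hypothetical sketch, @fd@ is assumed to be a non-blocking socket:
\begin{cfa}
#include <sys/epoll.h>
#include <unistd.h>

void wait_until_ready( int fd ) {
	int ep = epoll_create1( 0 );
	struct epoll_event ev = { .events = EPOLLIN, .data.fd = fd };
	epoll_ctl( ep, EPOLL_CTL_ADD, fd, &ev );	// queue interest in fd becoming readable
	struct epoll_event ready[8];
	int n = epoll_wait( ep, ready, 8, -1 );		// block for notifications, not per-request completion
	for ( int i = 0; i < n; i += 1 ) {
		char buf[512];
		read( ready[i].data.fd, buf, sizeof(buf) );	// service the ready descriptor
	}
	close( ep );
}
\end{cfa}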
    26303298
    26313299\subsection{Other Concurrency Tools} \label{futur:tools}
    2632 While monitors offer a flexible and powerful concurrent core for \CFA, other concurrency tools are also necessary for a complete multi-paradigm concurrency package. Examples of such tools can include simple locks and condition variables, futures and promises~\cite{promises}, executors and actors. These additional features are useful when monitors offer a level of abstraction that is inadequate for certain tasks.
     3300While monitors offer a flexible and powerful concurrent core for \CFA, other concurrency tools are also necessary for a complete multi-paradigm concurrency package.
     3301Examples of such tools can include simple locks and condition variables, futures and promises~\cite{promises}, executors and actors.
     3302These additional features are useful when monitors offer a level of abstraction that is inadequate for certain tasks.
    26333303
    26343304\subsection{Implicit Threading} \label{futur:implcit}
    2635 Simpler applications can benefit greatly from having implicit parallelism. That is, parallelism that does not rely on the user to write concurrency. This type of parallelism can be achieved both at the language level and at the library level. The canonical example of implicit parallelism is parallel for loops, which are the simplest example of a divide and conquer algorithms~\cite{uC++book}. Table \ref{lst:parfor} shows three different code examples that accomplish point-wise sums of large arrays. Note that none of these examples explicitly declare any concurrency or parallelism objects.
     3305Simpler applications can benefit greatly from having implicit parallelism.
     3306That is, parallelism that does not rely on the user to write concurrency.
     3307This type of parallelism can be achieved both at the language level and at the library level.
      3308The canonical example of implicit parallelism is the parallel for loop, which is the simplest form of divide-and-conquer algorithm~\cite{uC++book}.
      3309Table~\ref{f:parfor} shows three different code examples that accomplish point-wise sums of large arrays.
     3310Note that none of these examples explicitly declare any concurrency or parallelism objects.
    26363311
    26373312\begin{table}
     
    26393314\begin{tabular}[t]{|c|c|c|}
    26403315Sequential & Library Parallel & Language Parallel \\
    2641 \begin{cfacode}[tabsize=3]
     3316\begin{cfa}[tabsize=3]
    26423317void big_sum(
    26433318        int* a, int* b,
     
    26633338//... fill in a & b
    26643339big_sum(a,b,c,10000);
    2665 \end{cfacode} &\begin{cfacode}[tabsize=3]
     3340\end{cfa} &\begin{cfa}[tabsize=3]
    26663341void big_sum(
    26673342        int* a, int* b,
     
    26873362//... fill in a & b
    26883363big_sum(a,b,c,10000);
    2689 \end{cfacode}&\begin{cfacode}[tabsize=3]
     3364\end{cfa}&\begin{cfa}[tabsize=3]
    26903365void big_sum(
    26913366        int* a, int* b,
     
    27113386//... fill in a & b
    27123387big_sum(a,b,c,10000);
    2713 \end{cfacode}
     3388\end{cfa}
    27143389\end{tabular}
    27153390\end{center}
    27163391\caption{For loop to sum numbers: Sequential, using library parallelism and language parallelism.}
    2717 \label{lst:parfor}
     3392\label{f:parfor}
    27183393\end{table}
    27193394
    2720 Implicit parallelism is a restrictive solution and therefore has its limitations. However, it is a quick and simple approach to parallelism, which may very well be sufficient for smaller applications and reduces the amount of boilerplate needed to start benefiting from parallelism in modern CPUs.
     3395Implicit parallelism is a restrictive solution and therefore has its limitations.
     3396However, it is a quick and simple approach to parallelism, which may very well be sufficient for smaller applications and reduces the amount of boilerplate needed to start benefiting from parallelism in modern CPUs.
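For reference outside of \CFA, a pragma-based approach such as OpenMP expresses the same point-wise sum with a single annotation; this sketch is illustrative only and is not one of the variants shown in Table~\ref{f:parfor}:
\begin{cfa}
// point-wise sum with OpenMP; loop iterations are divided among worker threads
void big_sum( int * a, int * b, int * c, int n ) {
	#pragma omp parallel for
	for ( int i = 0; i < n; i += 1 ) {
		c[i] = a[i] + b[i];
	}
}
\end{cfa}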
    27213397
    27223398
     
    27313407% B I B L I O G R A P H Y
    27323408% -----------------------------
    2733 \bibliographystyle{plain}
     3409%\bibliographystyle{plain}
    27343410\bibliography{pl,local}
    27353411