Changeset b067d9b for doc/papers


Timestamp:
Oct 29, 2019, 4:01:24 PM
Author:
Thierry Delisle <tdelisle@…>
Branches:
ADT, arm-eh, ast-experimental, enum, forall-pointer-decay, jacob/cs343-translation, jenkins-sandbox, master, new-ast, new-ast-unique-expr, pthread-emulation, qualifiedEnum
Children:
773db65, 9421f3d8
Parents:
7951100 (diff), 8364209 (diff)
Note: this is a merge changeset; the changes displayed below correspond to the merge itself.
Use the (diff) links above to see all the changes relative to each parent.
Message:

Merge branch 'master' of plg.uwaterloo.ca:software/cfa/cfa-cc

Location:
doc/papers
Files:
46 added
1 deleted
10 edited

  • doc/papers/AMA/AMA-stix/ama/WileyNJD-v2.cls

    r7951100 rb067d9b  
    18541854    \vspace*{8.5\p@}%
    18551855    \rightskip0pt\raggedright\hspace*{7\p@}\hbox{\reset@font\abstractfont{\absheadfont#1}}\par\vskip3pt% LN20feb2016
    1856     {\abstractfont\baselineskip15pt\ifFWabstract\hsize\textwidth\fi#2\par\vspace*{0\p@}}%
     1856    {\abstractfont\baselineskip15pt\ifFWabstract\hsize\textwidth\fi\hsize0.68\textwidth#2\par\vspace*{0\p@}}%
    18571857    \addcontentsline{toc}{section}{\abstractname}%
    18581858}}%\abstract{}%
     
    18821882}%
    18831883%
    1884 \def\fundinginfohead#1{\gdef\@fundinginfo@head{#1}}\fundinginfohead{Funding Information}%
     1884\def\fundinginfohead#1{\gdef\@fundinginfo@head{#1}}\fundinginfohead{Funding information}%
    18851885\def\fundinginfoheadtext#1{\gdef\@fundinginfo@head@text{#1}}\fundinginfoheadtext{}%
    18861886\gdef\@fundinginfo{{%
     
    23192319%% Keywords %%
    23202320
    2321 \def\keywords#1{\def\@keywords{{\keywordsheadfont\textbf{KEYWORDS:}\par\removelastskip\nointerlineskip\vskip6pt \keywordsfont#1\par}}}\def\@keywords{}%
     2321\def\keywords#1{\def\@keywords{{\keywordsheadfont\textbf{KEYWORDS}\par\removelastskip\nointerlineskip\vskip6pt \keywordsfont#1\par}}}\def\@keywords{}%
    23222322
    23232323\def\@fnsymbol#1{\ifcase#1\or \dagger\or \ddagger\or
     
    24442444     \@afterheading}
    24452445
    2446 \renewcommand\section{\@startsection{section}{1}{\z@}{-27pt \@plus -2pt \@minus -2pt}{12\p@}{\sectionfont}}%
    2447 \renewcommand\subsection{\@startsection{subsection}{2}{\z@}{-23pt \@plus -2pt \@minus -2pt}{5\p@}{\subsectionfont}}%
     2446\renewcommand\section{\@startsection{section}{1}{\z@}{-25pt \@plus -2pt \@minus -2pt}{12\p@}{\sectionfont}}%
     2447\renewcommand\subsection{\@startsection{subsection}{2}{\z@}{-22pt \@plus -2pt \@minus -2pt}{5\p@}{\subsectionfont}}%
    24482448\renewcommand\subsubsection{\@startsection{subsubsection}{3}{\z@}{-20pt \@plus -2pt \@minus -2pt}{2\p@}{\subsubsectionfont}}%
    24492449%
     
    34063406\hskip-\parindentvalue\fbox{\vbox{\noindent\@jnlcitation}}}%
    34073407
    3408 \AtEndDocument{\ifappendixsec\else\printjnlcitation\fi}%
     3408%\AtEndDocument{\ifappendixsec\else\printjnlcitation\fi}%
    34093409
    34103410%% Misc math macros %%
  • doc/papers/OOPSLA17/Makefile

    r7951100 rb067d9b  
    3333
    3434DOCUMENT = generic_types.pdf
     35BASE = ${basename ${DOCUMENT}}
    3536
    3637# Directives #
     
    4142
    4243clean :
    43         @rm -frv ${DOCUMENT} ${basename ${DOCUMENT}}.ps ${Build}
     44        @rm -frv ${DOCUMENT} ${BASE}.ps ${Build}
    4445
    4546# File Dependencies #
    4647
    47 ${DOCUMENT} : ${basename ${DOCUMENT}}.ps
     48${DOCUMENT} : ${BASE}.ps
    4849        ps2pdf $<
    4950
    50 ${basename ${DOCUMENT}}.ps : ${basename ${DOCUMENT}}.dvi
     51${BASE}.ps : ${BASE}.dvi
    5152        dvips ${Build}/$< -o $@
    5253
    53 ${basename ${DOCUMENT}}.dvi : Makefile ${Build} ${GRAPHS} ${PROGRAMS} ${PICTURES} ${FIGURES} ${SOURCES} ../../bibliography/pl.bib
     54${BASE}.dvi : Makefile ${GRAPHS} ${PROGRAMS} ${PICTURES} ${FIGURES} ${SOURCES} \
     55                ../../bibliography/pl.bib | ${Build}
    5456        # Must have *.aux file containing citations for bibtex
    5557        if [ ! -r ${basename $@}.aux ] ; then ${LaTeX} ${basename $@}.tex ; fi
     
    6365## Define the default recipes.
    6466
    65 ${Build}:
     67${Build} :
    6668        mkdir -p ${Build}
    6769
     
    6971        gnuplot -e Build="'${Build}/'" evaluation/timing.gp
    7072
    71 %.tex : %.fig
     73%.tex : %.fig | ${Build}
    7274        fig2dev -L eepic $< > ${Build}/$@
    7375
    74 %.ps : %.fig
     76%.ps : %.fig | ${Build}
    7577        fig2dev -L ps $< > ${Build}/$@
    7678
    77 %.pstex : %.fig
     79%.pstex : %.fig | ${Build}
    7880        fig2dev -L pstex $< > ${Build}/$@
    7981        fig2dev -L pstex_t -p ${Build}/$@ $< > ${Build}/$@_t
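
The recurring change in this Makefile moves ${Build} from a normal prerequisite to an order-only prerequisite (everything after the `|`). A normal prerequisite's timestamp forces rebuilds, whereas an order-only prerequisite only has to exist before the recipe runs, which is the right semantics for a build directory whose modification time changes every time a file inside it is written. A minimal sketch of the difference, using hypothetical target names (recipe lines are tab-indented):

        Build = build

        timestamped : paper.tex ${Build}        # rebuilt whenever ${Build}'s mtime changes
                latex -output-directory=${Build} paper.tex

        order-only : paper.tex | ${Build}       # ${Build} must exist, but its mtime is ignored
                latex -output-directory=${Build} paper.tex

        ${Build} :
                mkdir -p ${Build}
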
  • doc/papers/concurrency/Makefile

    r7951100 rb067d9b  
    44Figures = figures
    55Macros = ../AMA/AMA-stix/ama
    6 TeXLIB = .:annex:../../LaTeXmacros:${Macros}:${Build}:../../bibliography:
     6TeXLIB = .:../../LaTeXmacros:${Macros}:${Build}:
    77LaTeX  = TEXINPUTS=${TeXLIB} && export TEXINPUTS && latex -halt-on-error -output-directory=${Build}
    8 BibTeX = BIBINPUTS=${TeXLIB} && export BIBINPUTS && bibtex
     8BibTeX = BIBINPUTS=annex:../../bibliography: && export BIBINPUTS && bibtex
    99
    1010MAKEFLAGS = --no-print-directory # --silent
     
    1515SOURCES = ${addsuffix .tex, \
    1616Paper \
    17 style/style \
    18 style/cfa-format \
    1917}
    2018
    2119FIGURES = ${addsuffix .tex, \
    22 monitor \
    23 ext_monitor \
    2420int_monitor \
    2521dependency \
     22RunTimeStructure \
    2623}
    2724
    2825PICTURES = ${addsuffix .pstex, \
     26FullProdConsStack \
     27FullCoroutinePhases \
     28corlayout \
     29CondSigWait \
     30monitor \
     31ext_monitor \
    2932system \
    3033monitor_structs \
     
    5962        dvips ${Build}/$< -o $@
    6063
    61 ${BASE}.dvi : Makefile ${Build} ${BASE}.out.ps WileyNJD-AMA.bst ${GRAPHS} ${PROGRAMS} ${PICTURES} ${FIGURES} ${SOURCES} \
    62                 annex/local.bib ../../bibliography/pl.bib
     64${BASE}.dvi : Makefile ${BASE}.out.ps WileyNJD-AMA.bst ${GRAPHS} ${PROGRAMS} ${PICTURES} ${FIGURES} ${SOURCES} \
     65                annex/local.bib ../../bibliography/pl.bib | ${Build}
    6366        # Must have *.aux file containing citations for bibtex
    6467        if [ ! -r ${basename $@}.aux ] ; then ${LaTeX} ${basename $@}.tex ; fi
    65         ${BibTeX} ${Build}/${basename $@}
     68        -${BibTeX} ${Build}/${basename $@}
    6669        # Some citations reference others so run again to resolve these citations
    6770        ${LaTeX} ${basename $@}.tex
    68         ${BibTeX} ${Build}/${basename $@}
     71        -${BibTeX} ${Build}/${basename $@}
    6972        # Run again to finish citations
    7073        ${LaTeX} ${basename $@}.tex
     
    7275## Define the default recipes.
    7376
    74 ${Build}:
     77${Build} :
    7578        mkdir -p ${Build}
    7679
    77 ${BASE}.out.ps: ${Build}
     80${BASE}.out.ps : | ${Build}
    7881        ln -fs ${Build}/Paper.out.ps .
    7982
    80 WileyNJD-AMA.bst:
     83WileyNJD-AMA.bst :
    8184        ln -fs ../AMA/AMA-stix/ama/WileyNJD-AMA.bst .
    8285
    83 %.tex : %.fig ${Build}
     86%.tex : %.fig | ${Build}
    8487        fig2dev -L eepic $< > ${Build}/$@
    8588
    86 %.ps : %.fig ${Build}
     89%.ps : %.fig | ${Build}
    8790        fig2dev -L ps $< > ${Build}/$@
    8891
    89 %.pstex : %.fig ${Build}
     92%.pstex : %.fig | ${Build}
    9093        fig2dev -L pstex $< > ${Build}/$@
    9194        fig2dev -L pstex_t -p ${Build}/$@ $< > ${Build}/$@_t
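
The same order-only ${Build} pattern is applied here, and the two ${BibTeX} recipe lines additionally gain make's `-` (ignore-errors) prefix: a nonzero exit status from that command is reported as ignored instead of aborting the build, which helps on early passes where bibtex can fail before all citations are resolved. A minimal sketch, using a hypothetical target:

        Paper.bbl : Paper.aux
                -bibtex ${Build}/Paper          # a failure here is reported but make continues
                @echo runs even if bibtex exited nonzero
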
  • doc/papers/concurrency/Paper.tex

    r7951100 rb067d9b  
    33\articletype{RESEARCH ARTICLE}%
    44
    5 \received{26 April 2016}
    6 \revised{6 June 2016}
    7 \accepted{6 June 2016}
     5% Referees
     6% Doug Lea, dl@cs.oswego.edu, SUNY Oswego
     7% Herb Sutter, hsutter@microsoft.com, Microsoft Corp
     8% Gor Nishanov, gorn@microsoft.com, Microsoft Corp
     9% James Noble, kjx@ecs.vuw.ac.nz, Victoria University of Wellington, School of Engineering and Computer Science
     10
     11\received{XXXXX}
     12\revised{XXXXX}
     13\accepted{XXXXX}
    814
    915\raggedbottom
     
    1521\usepackage{epic,eepic}
    1622\usepackage{xspace}
     23\usepackage{enumitem}
    1724\usepackage{comment}
    1825\usepackage{upquote}                                            % switch curled `'" to straight
     
    2128\renewcommand{\thesubfigure}{(\Alph{subfigure})}
    2229\captionsetup{justification=raggedright,singlelinecheck=false}
    23 \usepackage{siunitx}
    24 \sisetup{binary-units=true}
     30\usepackage{dcolumn}                                            % align decimal points in tables
     31\usepackage{capt-of}
     32\setlength{\multicolsep}{6.0pt plus 2.0pt minus 1.5pt}
    2533
    2634\hypersetup{breaklinks=true}
     
    3240\renewcommand{\linenumberfont}{\scriptsize\sffamily}
    3341
     42\renewcommand{\topfraction}{0.8}                        % float must be greater than X of the page before it is forced onto its own page
     43\renewcommand{\bottomfraction}{0.8}                     % float must be greater than X of the page before it is forced onto its own page
     44\renewcommand{\floatpagefraction}{0.8}          % float must be greater than X of the page before it is forced onto its own page
    3445\renewcommand{\textfraction}{0.0}                       % the entire page may be devoted to floats with no text on the page at all
    3546
     
    132143\makeatother
    133144
    134 \newenvironment{cquote}{%
    135         \list{}{\lstset{resetmargins=true,aboveskip=0pt,belowskip=0pt}\topsep=3pt\parsep=0pt\leftmargin=\parindentlnth\rightmargin\leftmargin}%
    136         \item\relax
    137 }{%
    138         \endlist
    139 }% cquote
     145\newenvironment{cquote}
     146               {\list{}{\lstset{resetmargins=true,aboveskip=0pt,belowskip=0pt}\topsep=3pt\parsep=0pt\leftmargin=\parindentlnth\rightmargin\leftmargin}%
     147                \item\relax}
     148               {\endlist}
     149
     150%\newenvironment{cquote}{%
     151%\list{}{\lstset{resetmargins=true,aboveskip=0pt,belowskip=0pt}\topsep=3pt\parsep=0pt\leftmargin=\parindentlnth\rightmargin\leftmargin}%
     152%\item\relax%
     153%}{%
     154%\endlist%
     155%}% cquote
    140156
    141157% CFA programming language, based on ANSI C (with some gcc additions)
     
    145161                auto, _Bool, catch, catchResume, choose, _Complex, __complex, __complex__, __const, __const__,
    146162                coroutine, disable, dtype, enable, exception, __extension__, fallthrough, fallthru, finally,
    147                 __float80, float80, __float128, float128, forall, ftype, _Generic, _Imaginary, __imag, __imag__,
     163                __float80, float80, __float128, float128, forall, ftype, generator, _Generic, _Imaginary, __imag, __imag__,
    148164                inline, __inline, __inline__, __int128, int128, __label__, monitor, mutex, _Noreturn, one_t, or,
    149165                otype, restrict, __restrict, __restrict__, __signed, __signed__, _Static_assert, thread,
    150166                _Thread_local, throw, throwResume, timeout, trait, try, ttype, typeof, __typeof, __typeof__,
    151167                virtual, __volatile, __volatile__, waitfor, when, with, zero_t},
    152         moredirectives={defined,include_next}%
     168        moredirectives={defined,include_next},
     169        % replace/adjust listing characters that look bad in sanserif
     170        literate={-}{\makebox[1ex][c]{\raisebox{0.4ex}{\rule{0.8ex}{0.1ex}}}}1 {^}{\raisebox{0.6ex}{$\scriptstyle\land\,$}}1
     171                {~}{\raisebox{0.3ex}{$\scriptstyle\sim\,$}}1 % {`}{\ttfamily\upshape\hspace*{-0.1ex}`}1
     172                {<}{\textrm{\textless}}1 {>}{\textrm{\textgreater}}1
     173                {<-}{$\leftarrow$}2 {=>}{$\Rightarrow$}2 {->}{\makebox[1ex][c]{\raisebox{0.5ex}{\rule{0.8ex}{0.075ex}}}\kern-0.2ex{\textrm{\textgreater}}}2,
    153174}
    154175
     
    167188aboveskip=4pt,                                                                                  % spacing above/below code block
    168189belowskip=3pt,
    169 % replace/adjust listing characters that look bad in sanserif
    170 literate={-}{\makebox[1ex][c]{\raisebox{0.4ex}{\rule{0.8ex}{0.1ex}}}}1 {^}{\raisebox{0.6ex}{$\scriptstyle\land\,$}}1
    171         {~}{\raisebox{0.3ex}{$\scriptstyle\sim\,$}}1 % {`}{\ttfamily\upshape\hspace*{-0.1ex}`}1
    172         {<}{\textrm{\textless}}1 {>}{\textrm{\textgreater}}1
    173         {<-}{$\leftarrow$}2 {=>}{$\Rightarrow$}2 {->}{\makebox[1ex][c]{\raisebox{0.5ex}{\rule{0.8ex}{0.075ex}}}\kern-0.2ex{\textrm{\textgreater}}}2,
    174190moredelim=**[is][\color{red}]{`}{`},
    175191}% lstset
     
    197213}
    198214
     215% Go programming language: https://github.com/julienc91/listings-golang/blob/master/listings-golang.sty
     216\lstdefinelanguage{Golang}{
     217        morekeywords=[1]{package,import,func,type,struct,return,defer,panic,recover,select,var,const,iota,},
     218        morekeywords=[2]{string,uint,uint8,uint16,uint32,uint64,int,int8,int16,int32,int64,
     219                bool,float32,float64,complex64,complex128,byte,rune,uintptr, error,interface},
     220        morekeywords=[3]{map,slice,make,new,nil,len,cap,copy,close,true,false,delete,append,real,imag,complex,chan,},
     221        morekeywords=[4]{for,break,continue,range,goto,switch,case,fallthrough,if,else,default,},
     222        morekeywords=[5]{Println,Printf,Error,},
     223        sensitive=true,
     224        morecomment=[l]{//},
     225        morecomment=[s]{/*}{*/},
     226        morestring=[b]',
     227        morestring=[b]",
     228        morestring=[s]{`}{`},
     229        % replace/adjust listing characters that look bad in sanserif
     230        literate={-}{\makebox[1ex][c]{\raisebox{0.4ex}{\rule{0.8ex}{0.1ex}}}}1 {^}{\raisebox{0.6ex}{$\scriptstyle\land\,$}}1
     231                {~}{\raisebox{0.3ex}{$\scriptstyle\sim\,$}}1 % {`}{\ttfamily\upshape\hspace*{-0.1ex}`}1
     232                {<}{\textrm{\textless}}1 {>}{\textrm{\textgreater}}1
     233                {<-}{\makebox[2ex][c]{\textrm{\textless}\raisebox{0.5ex}{\rule{0.8ex}{0.075ex}}}}2,
     234}
     235
    199236\lstnewenvironment{cfa}[1][]
    200237{\lstset{#1}}
     
    207244{}
    208245\lstnewenvironment{Go}[1][]
    209 {\lstset{#1}}
     246{\lstset{language=Golang,moredelim=**[is][\protect\color{red}]{`}{`},#1}\lstset{#1}}
     247{}
     248\lstnewenvironment{python}[1][]
     249{\lstset{language=python,moredelim=**[is][\protect\color{red}]{`}{`},#1}\lstset{#1}}
    210250{}
    211251
     
    222262}
    223263
    224 \title{\texorpdfstring{Concurrency in \protect\CFA}{Concurrency in Cforall}}
     264\newbox\myboxA
     265\newbox\myboxB
     266\newbox\myboxC
     267\newbox\myboxD
     268
     269\title{\texorpdfstring{Advanced Control-flow and Concurrency in \protect\CFA}{Advanced Control-flow in Cforall}}
    225270
    226271\author[1]{Thierry Delisle}
     
    232277\corres{*Peter A. Buhr, Cheriton School of Computer Science, University of Waterloo, 200 University Avenue West, Waterloo, ON, N2L 3G1, Canada. \email{pabuhr{\char`\@}uwaterloo.ca}}
    233278
    234 \fundingInfo{Natural Sciences and Engineering Research Council of Canada}
     279% \fundingInfo{Natural Sciences and Engineering Research Council of Canada}
    235280
    236281\abstract[Summary]{
    237 \CFA is a modern, polymorphic, \emph{non-object-oriented} extension of the C programming language.
    238 This paper discusses the design of the concurrency and parallelism features in \CFA, and the concurrent runtime-system.
     239 These features are created from scratch as ISO C lacks concurrency, so programmers rely largely on the pthreads library.
    240 Coroutines and lightweight (user) threads are introduced into the language.
    241 In addition, monitors are added as a high-level mechanism for mutual exclusion and synchronization.
    242 A unique contribution is allowing multiple monitors to be safely acquired simultaneously.
     243 All features respect the expectations of C programmers, while being fully integrated with the \CFA polymorphic type-system and other language features.
    244 Finally, experimental results are presented to compare the performance of the new features with similar mechanisms in other concurrent programming-languages.
     282\CFA is a polymorphic, non-object-oriented, concurrent, backwards-compatible extension of the C programming language.
     283This paper discusses the design philosophy and implementation of its advanced control-flow and concurrent/parallel features, along with the supporting runtime written in \CFA.
     284These features are created from scratch as ISO C has only low-level and/or unimplemented concurrency, so C programmers continue to rely on library features like pthreads.
     285\CFA introduces modern language-level control-flow mechanisms, like generators, coroutines, user-level threading, and monitors for mutual exclusion and synchronization.
     286% Library extension for executors, futures, and actors are built on these basic mechanisms.
     287The runtime provides significant programmer simplification and safety by eliminating spurious wakeup and monitor barging.
     288The runtime also ensures multiple monitors can be safely acquired \emph{simultaneously} (deadlock free), and this feature is fully integrated with all monitor synchronization mechanisms.
     289All control-flow features integrate with the \CFA polymorphic type-system and exception handling, while respecting the expectations and style of C programmers.
     290Experimental results show comparable performance of the new features with similar mechanisms in other concurrent programming languages.
    245291}%
    246292
    247 \keywords{concurrency, parallelism, coroutines, threads, monitors, runtime, C, Cforall}
     293\keywords{generator, coroutine, concurrency, parallelism, thread, monitor, runtime, C, \CFA (Cforall)}
    248294
    249295
     
    256302\section{Introduction}
    257303
    258 This paper provides a minimal concurrency \newterm{Application Program Interface} (API) that is simple, efficient and can be used to build other concurrency features.
    259 While the simplest concurrency system is a thread and a lock, this low-level approach is hard to master.
    260 An easier approach for programmers is to support higher-level constructs as the basis of concurrency.
    261 Indeed, for highly productive concurrent programming, high-level approaches are much more popular~\cite{Hochstein05}.
    262 Examples of high-level approaches are task (work) based~\cite{TBB}, implicit threading~\cite{OpenMP}, monitors~\cite{Java}, channels~\cite{CSP,Go}, and message passing~\cite{Erlang,MPI}.
    263 
    264 The following terminology is used.
    265 A \newterm{thread} is a fundamental unit of execution that runs a sequence of code and requires a stack to maintain state.
    266 Multiple simultaneous threads give rise to \newterm{concurrency}, which requires locking to ensure safe communication and access to shared data.
    267 % Correspondingly, concurrency is defined as the concepts and challenges that occur when multiple independent (sharing memory, timing dependencies, \etc) concurrent threads are introduced.
     268 \newterm{Locking}, and by extension \newterm{locks}, are mechanisms that prevent thread progress to provide safety.
    269 \newterm{Parallelism} is running multiple threads simultaneously.
    270 Parallelism implies \emph{actual} simultaneous execution, where concurrency only requires \emph{apparent} simultaneous execution.
    271 As such, parallelism only affects performance, which is observed through differences in space and/or time at runtime.
    272 
    273 Hence, there are two problems to be solved: concurrency and parallelism.
    274 While these two concepts are often combined, they are distinct, requiring different tools~\cite[\S~2]{Buhr05a}.
    275 Concurrency tools handle synchronization and mutual exclusion, while parallelism tools handle performance, cost and resource utilization.
    276 
    277 The proposed concurrency API is implemented in a dialect of C, called \CFA.
     278 The paper discusses how the language features are added to the \CFA translator with respect to parsing, semantic analysis, and type checking, and the corresponding high-performance runtime library that implements the concurrency features.
    279 
    280 
    281 \section{\CFA Overview}
    282 
    283 The following is a quick introduction to the \CFA language, specifically tailored to the features needed to support concurrency.
    284 Extended versions and explanation of the following code examples are available at the \CFA website~\cite{Cforall} or in Moss~\etal~\cite{Moss18}.
    285 
    286 \CFA is an extension of ISO-C, and hence, supports all C paradigms.
    287 %It is a non-object-oriented system-language, meaning most of the major abstractions have either no runtime overhead or can be opted out easily.
    288 Like C, the basics of \CFA revolve around structures and routines.
    289 Virtually all of the code generated by the \CFA translator respects C memory layouts and calling conventions.
    290 While \CFA is not an object-oriented language, lacking the concept of a receiver (\eg @this@) and nominal inheritance-relationships, C does have a notion of objects: ``region of data storage in the execution environment, the contents of which can represent values''~\cite[3.15]{C11}.
    291 While some \CFA features are common in object-oriented programming-languages, they are an independent capability allowing \CFA to adopt them while retaining a procedural paradigm.
    292 
    293 
    294 \subsection{References}
    295 
    296 \CFA provides multi-level rebindable references, as an alternative to pointers, which significantly reduces syntactic noise.
    297 \begin{cfa}
    298 int x = 1, y = 2, z = 3;
    299 int * p1 = &x, ** p2 = &p1,  *** p3 = &p2,      $\C{// pointers to x}$
    300         `&` r1 = x,  `&&` r2 = r1,  `&&&` r3 = r2;      $\C{// references to x}$
    301 int * p4 = &z, `&` r4 = z;
    302 
    303 *p1 = 3; **p2 = 3; ***p3 = 3;       // change x
    304 r1 =  3;     r2 = 3;      r3 = 3;        // change x: implicit dereferences *r1, **r2, ***r3
    305 **p3 = &y; *p3 = &p4;                // change p1, p2
    306 `&`r3 = &y; `&&`r3 = &`&`r4;             // change r1, r2: cancel implicit dereferences (&*)**r3, (&(&*)*)*r3, &(&*)r4
    307 \end{cfa}
    308 A reference is a handle to an object, like a pointer, but is automatically dereferenced by the specified number of levels.
     309 Referencing (address-of @&@) a reference variable cancels one of the implicit dereferences, until there are no more implicit dereferences, after which normal expression behaviour applies.
    310 
    311 
    312 \subsection{\texorpdfstring{\protect\lstinline{with} Statement}{with Statement}}
    313 \label{s:WithStatement}
    314 
    315 Heterogeneous data is aggregated into a structure/union.
    316 To reduce syntactic noise, \CFA provides a @with@ statement (see Pascal~\cite[\S~4.F]{Pascal}) to elide aggregate field-qualification by opening a scope containing the field identifiers.
    317 \begin{cquote}
    318 \vspace*{-\baselineskip}%???
    319 \lstDeleteShortInline@%
    320 \begin{cfa}
    321 struct S { char c; int i; double d; };
    322 struct T { double m, n; };
    323 // multiple aggregate parameters
    324 \end{cfa}
    325 \begin{tabular}{@{}l@{\hspace{2\parindentlnth}}|@{\hspace{2\parindentlnth}}l@{}}
    326 \begin{cfa}
    327 void f( S & s, T & t ) {
    328         `s.`c; `s.`i; `s.`d;
    329         `t.`m; `t.`n;
    330 }
    331 \end{cfa}
    332 &
    333 \begin{cfa}
    334 void f( S & s, T & t ) `with ( s, t )` {
    335         c; i; d;                // no qualification
    336         m; n;
    337 }
    338 \end{cfa}
    339 \end{tabular}
    340 \lstMakeShortInline@%
    341 \end{cquote}
    342 Object-oriented programming languages only provide implicit qualification for the receiver.
    343 
    344 In detail, the @with@ statement has the form:
    345 \begin{cfa}
    346 $\emph{with-statement}$:
    347         'with' '(' $\emph{expression-list}$ ')' $\emph{compound-statement}$
    348 \end{cfa}
    349 and may appear as the body of a routine or nested within a routine body.
    350 Each expression in the expression-list provides a type and object.
    351 The type must be an aggregate type.
    352 (Enumerations are already opened.)
    353 The object is the implicit qualifier for the open structure-fields.
     354 All expressions in the expression list are opened in parallel within the compound statement, unlike Pascal, which nests the openings from left to right.
    355 
    356 
    357 \subsection{Overloading}
    358 
    359 \CFA maximizes the ability to reuse names via overloading to aggressively address the naming problem.
    360 Both variables and routines may be overloaded, where selection is based on types, and number of returns (as in Ada~\cite{Ada}) and arguments.
    361 \begin{cquote}
    362 \vspace*{-\baselineskip}%???
    363 \lstDeleteShortInline@%
    364 \begin{cfa}
    365 // selection based on type
    366 \end{cfa}
    367 \begin{tabular}{@{}l@{\hspace{2\parindentlnth}}|@{\hspace{2\parindentlnth}}l@{}}
    368 \begin{cfa}
    369 const short int `MIN` = -32768;
    370 const int `MIN` = -2147483648;
    371 const long int `MIN` = -9223372036854775808L;
    372 \end{cfa}
    373 &
    374 \begin{cfa}
    375 short int si = `MIN`;
    376 int i = `MIN`;
    377 long int li = `MIN`;
    378 \end{cfa}
    379 \end{tabular}
    380 \begin{cfa}
    381 // selection based on type and number of parameters
    382 \end{cfa}
    383 \begin{tabular}{@{}l@{\hspace{2.7\parindentlnth}}|@{\hspace{2\parindentlnth}}l@{}}
    384 \begin{cfa}
    385 void `f`( void );
    386 void `f`( char );
    387 void `f`( int, double );
    388 \end{cfa}
    389 &
    390 \begin{cfa}
    391 `f`();
    392 `f`( 'a' );
    393 `f`( 3, 5.2 );
    394 \end{cfa}
    395 \end{tabular}
    396 \begin{cfa}
    397 // selection based on type and number of returns
    398 \end{cfa}
    399 \begin{tabular}{@{}l@{\hspace{2\parindentlnth}}|@{\hspace{2\parindentlnth}}l@{}}
    400 \begin{cfa}
    401 char `f`( int );
    402 double `f`( int );
    403 [char, double] `f`( int );
    404 \end{cfa}
    405 &
    406 \begin{cfa}
    407 char c = `f`( 3 );
    408 double d = `f`( 3 );
    409 [d, c] = `f`( 3 );
    410 \end{cfa}
    411 \end{tabular}
    412 \lstMakeShortInline@%
    413 \end{cquote}
    414 Overloading is important for \CFA concurrency since the runtime system relies on creating different types to represent concurrency objects.
     415 Therefore, overloading is necessary to avoid the long prefixes and other naming conventions otherwise needed to prevent name clashes.
    416 As seen in Section~\ref{basics}, routine @main@ is heavily overloaded.
    417 
    418 Variable overloading is useful in the parallel semantics of the @with@ statement for fields with the same name:
    419 \begin{cfa}
    420 struct S { int `i`; int j; double m; } s;
    421 struct T { int `i`; int k; int m; } t;
    422 with ( s, t ) {
    423         j + k;                                                                  $\C{// unambiguous, s.j + t.k}$
    424         m = 5.0;                                                                $\C{// unambiguous, s.m = 5.0}$
    425         m = 1;                                                                  $\C{// unambiguous, t.m = 1}$
    426         int a = m;                                                              $\C{// unambiguous, a = t.m }$
    427         double b = m;                                                   $\C{// unambiguous, b = s.m}$
    428         int c = `s.i` + `t.i`;                                  $\C{// unambiguous, qualification}$
    429         (double)m;                                                              $\C{// unambiguous, cast s.m}$
    430 }
    431 \end{cfa}
     432 For parallel semantics, both @s.i@ and @t.i@ are visible with the same type, so only @i@ is ambiguous without qualification.
    433 
    434 
    435 \subsection{Operators}
    436 
    437 Overloading also extends to operators.
    438 Operator-overloading syntax creates a routine name with an operator symbol and question marks for the operands:
    439 \begin{cquote}
    440 \lstDeleteShortInline@%
    441 \begin{tabular}{@{}ll@{\hspace{\parindentlnth}}|@{\hspace{\parindentlnth}}l@{}}
    442 \begin{cfa}
    443 int ++? (int op);
    444 int ?++ (int op);
    445 int `?+?` (int op1, int op2);
    446 int ?<=?(int op1, int op2);
    447 int ?=? (int & op1, int op2);
    448 int ?+=?(int & op1, int op2);
    449 \end{cfa}
    450 &
    451 \begin{cfa}
    452 // unary prefix increment
    453 // unary postfix increment
    454 // binary plus
    455 // binary less than
    456 // binary assignment
    457 // binary plus-assignment
    458 \end{cfa}
    459 &
    460 \begin{cfa}
    461 struct S { int i, j; };
    462 S `?+?`( S op1, S op2) { // add two structures
    463         return (S){op1.i + op2.i, op1.j + op2.j};
    464 }
    465 S s1 = {1, 2}, s2 = {2, 3}, s3;
    466 s3 = s1 `+` s2;         // compute sum: s3 == {2, 5}
    467 \end{cfa}
    468 \end{tabular}
    469 \lstMakeShortInline@%
    470 \end{cquote}
    471 While concurrency does not use operator overloading directly, it provides an introduction for the syntax of constructors.
    472 
    473 
    474 \subsection{Parametric Polymorphism}
    475 \label{s:ParametricPolymorphism}
    476 
     477 The signature feature of \CFA is parametric-polymorphic routines~\cite{}, generalized using a @forall@ clause (giving the language its name), allowing separately compiled routines to support generic usage over multiple types.
    478 For example, the following sum routine works for any type that supports construction from 0 and addition \commenttd{constructors have not been introduced yet.}:
    479 \begin{cfa}
    480 forall( otype T | { void `?{}`( T *, zero_t ); T `?+?`( T, T ); } ) // constraint type, 0 and +
    481 T sum( T a[$\,$], size_t size ) {
    482         `T` total = { `0` };                                    $\C{// initialize by 0 constructor}$
    483         for ( size_t i = 0; i < size; i += 1 )
    484                 total = total `+` a[i];                         $\C{// select appropriate +}$
    485         return total;
    486 }
    487 S sa[5];
    488 int i = sum( sa, 5 );                                           $\C{// use S's 0 construction and +}$
    489 \end{cfa}
    490 
    491 \CFA provides \newterm{traits} to name a group of type assertions, where the trait name allows specifying the same set of assertions in multiple locations, preventing repetition mistakes at each routine declaration:
    492 \begin{cfa}
    493 trait `sumable`( otype T ) {
    494         void `?{}`( T &, zero_t );                              $\C{// 0 literal constructor}$
    495         T `?+?`( T, T );                                                $\C{// assortment of additions}$
    496         T ?+=?( T &, T );
    497         T ++?( T & );
    498         T ?++( T & );
    499 };
    500 forall( otype T `| sumable( T )` )                      $\C{// use trait}$
    501 T sum( T a[$\,$], size_t size );
    502 \end{cfa}
    503 
    504 Assertions can be @otype@ or @dtype@.
    505 @otype@ refers to a ``complete'' object, \ie an object has a size, default constructor, copy constructor, destructor and an assignment operator.
    506 @dtype@ only guarantees an object has a size and alignment.
    507 
    508 Using the return type for discrimination, it is possible to write a type-safe @alloc@ based on the C @malloc@:
    509 \begin{cfa}
    510 forall( dtype T | sized(T) ) T * alloc( void ) { return (T *)malloc( sizeof(T) ); }
    511 int * ip = alloc();                                                     $\C{// select type and size from left-hand side}$
    512 double * dp = alloc();
    513 struct S {...} * sp = alloc();
    514 \end{cfa}
    515 where the return type supplies the type/size of the allocation, which is impossible in most type systems.
    516 
    517 
    518 \subsection{Constructors / Destructors}
    519 
    520 Object lifetime is a challenge in non-managed programming languages.
    521 \CFA responds with \CC-like constructors and destructors:
    522 \begin{cfa}
    523 struct VLA { int len, * data; };                        $\C{// variable length array of integers}$
    524 void ?{}( VLA & vla ) with ( vla ) { len = 10;  data = alloc( len ); }  $\C{// default constructor}$
    525 void ?{}( VLA & vla, int size, char fill ) with ( vla ) { len = size;  data = alloc( len, fill ); } // initialization
    526 void ?{}( VLA & vla, VLA other ) { vla.len = other.len;  vla.data = other.data; } $\C{// copy, shallow}$
    527 void ^?{}( VLA & vla ) with ( vla ) { free( data ); } $\C{// destructor}$
    528 {
    529         VLA  x,            y = { 20, 0x01 },     z = y; $\C{// z points to y}$
    530         //    x{};         y{ 20, 0x01 };          z{ z, y };
    531         ^x{};                                                                   $\C{// deallocate x}$
    532         x{};                                                                    $\C{// reallocate x}$
    533         z{ 5, 0xff };                                                   $\C{// reallocate z, not pointing to y}$
    534         ^y{};                                                                   $\C{// deallocate y}$
    535         y{ x };                                                                 $\C{// reallocate y, points to x}$
    536         x{};                                                                    $\C{// reallocate x, not pointing to y}$
    537         //  ^z{};  ^y{};  ^x{};
    538 }
    539 \end{cfa}
    540 Like \CC, construction is implicit on allocation (stack/heap) and destruction is implicit on deallocation.
     541 An object and all of its fields are constructed/destructed.
    542 \CFA also provides @new@ and @delete@, which behave like @malloc@ and @free@, in addition to constructing and destructing objects:
    543 \begin{cfa}
    544 {       struct S s = {10};                                              $\C{// allocation, call constructor}$
    545         ...
    546 }                                                                                       $\C{// deallocation, call destructor}$
    547 struct S * s = new();                                           $\C{// allocation, call constructor}$
    548 ...
    549 delete( s );                                                            $\C{// deallocation, call destructor}$
    550 \end{cfa}
    551 \CFA concurrency uses object lifetime as a means of synchronization and/or mutual exclusion.
    552 
    553 
    554 \section{Concurrency Basics}\label{basics}
    555 
    556 At its core, concurrency is based on multiple call-stacks and scheduling threads executing on these stacks.
    557 Multiple call stacks (or contexts) and a single thread of execution, called \newterm{coroutining}~\cite{Conway63,Marlin80}, does \emph{not} imply concurrency~\cite[\S~2]{Buhr05a}.
    558 In coroutining, the single thread is self-scheduling across the stacks, so execution is deterministic, \ie given fixed inputs, the execution path to the outputs is fixed and predictable.
     559 A \newterm{stackless} coroutine executes on the caller's stack~\cite{Python}, but this approach is restrictive, \eg preventing modularization and supporting only iterator/generator-style programming;
     560 a \newterm{stackful} coroutine executes on its own stack, allowing full generality.
     561 Only stackful coroutines are a stepping-stone to concurrency.
    562 
    563 The transition to concurrency, even for execution with a single thread and multiple stacks, occurs when coroutines also context switch to a scheduling oracle, introducing non-determinism from the coroutine perspective~\cite[\S~3]{Buhr05a}.
    564 Therefore, a minimal concurrency system is possible using coroutines (see Section \ref{coroutine}) in conjunction with a scheduler to decide where to context switch next.
    565 The resulting execution system now follows a cooperative threading-model, called \newterm{non-preemptive scheduling}.
    566 
     567 Because the scheduler is special, it can be either a stackless or a stackful coroutine. \commenttd{I dislike this sentence, it seems to imply 1-step vs 2-step but also seems to say that some kind of coroutine is required, which is not the case.}
     568 For stackless, the scheduler performs scheduling on the stack of the current coroutine and switches directly to the next coroutine, so there is one context switch.
     569 For stackful, the current coroutine switches to the scheduler, which performs scheduling, and it then switches to the next coroutine, so there are two context switches.
     570 A stackful scheduler is often used for simplicity and security, even though there is a slightly higher runtime cost. \commenttd{I'm not a fan of the fact that we don't quantify this but yet imply it is negligible.}
    571 
     572 Regardless of the approach used, a subset of concurrency-related challenges starts to appear.
    573 For the complete set of concurrency challenges to occur, the missing feature is \newterm{preemption}, where context switching occurs randomly between any two instructions, often based on a timer interrupt, called \newterm{preemptive scheduling}.
    574 While a scheduler introduces uncertainty in the order of execution, preemption introduces uncertainty where context switches occur.
    575 Interestingly, uncertainty is necessary for the runtime (operating) system to give the illusion of parallelism on a single processor and increase performance on multiple processors.
     576 The reason is that only the runtime has complete knowledge about resources and how best to utilize them.
    577 However, the introduction of unrestricted non-determinism results in the need for \newterm{mutual exclusion} and \newterm{synchronization} to restrict non-determinism for correctness;
    578 otherwise, it is impossible to write meaningful programs.
    579 Optimal performance in concurrent applications is often obtained by having as much non-determinism as correctness allows.
    580 
    581 
    582 \subsection{\protect\CFA's Thread Building Blocks}
    583 
    584 An important missing feature in C is threading\footnote{While the C11 standard defines a ``threads.h'' header, it is minimal and defined as optional.
    585 As such, library support for threading is far from widespread.
    586 At the time of writing the paper, neither \protect\lstinline|gcc| nor \protect\lstinline|clang| support ``threads.h'' in their standard libraries.}.
    587 In modern programming languages, a lack of threading is unacceptable~\cite{Sutter05, Sutter05b}, and therefore existing and new programming languages must have tools for writing efficient concurrent programs to take advantage of parallelism.
    588 As an extension of C, \CFA needs to express these concepts in a way that is as natural as possible to programmers familiar with imperative languages.
    589 Furthermore, because C is a system-level language, programmers expect to choose precisely which features they need and which cost they are willing to pay.
    590 Hence, concurrent programs should be written using high-level mechanisms, and only step down to lower-level mechanisms when performance bottlenecks are encountered.
    591 
    592 
    593 \subsection{Coroutines: A Stepping Stone}\label{coroutine}
    594 
    595 While the focus of this discussion is concurrency and parallelism, it is important to address coroutines, which are a significant building block of a concurrency system.
     596 Coroutines are generalized routines allowing execution to be temporarily suspended and later resumed.
     597 Hence, unlike a normal routine, a coroutine may not terminate when it returns to its caller, allowing it to be restarted with the values and execution location present at the point of suspension.
     598 This capability is accomplished via the coroutine's stack, where suspend/resume context switch among stacks.
     599 Because threading design-challenges are present in coroutines, their design effort is relevant; this effort can be easily exposed to programmers, giving them a useful new programming paradigm, because a coroutine handles the class of problems that need to retain state between calls, \eg plugins, device drivers, and finite-state machines.
     600 Therefore, the core \CFA coroutine-API has two fundamental features: independent call-stacks and @suspend@/@resume@ operations.
    601 
    602 For example, a problem made easier with coroutines is unbounded generators, \eg generating an infinite sequence of Fibonacci numbers, where Figure~\ref{f:C-fibonacci} shows conventional approaches for writing a Fibonacci generator in C.
    603 \begin{displaymath}
    604 \mathsf{fib}(n) = \left \{
    605 \begin{array}{ll}
    606 0                                       & n = 0         \\
    607 1                                       & n = 1         \\
    608 \mathsf{fib}(n-1) + \mathsf{fib}(n-2)   & n \ge 2       \\
    609 \end{array}
    610 \right.
    611 \end{displaymath}
    612 Figure~\ref{f:GlobalVariables} illustrates the following problems:
    613 unique unencapsulated global variables necessary to retain state between calls;
    614 only one Fibonacci generator;
     615 execution state must be retained via explicit state variables.
    616 Figure~\ref{f:ExternalState} addresses these issues:
    617 unencapsulated program global variables become encapsulated structure variables;
    618 unique global variables are replaced by multiple Fibonacci objects;
    619 explicit execution state is removed by precomputing the first two Fibonacci numbers and returning $\mathsf{fib}(n-2)$.
     304This paper discusses the design philosophy and implementation of advanced language-level control-flow and concurrent/parallel features in \CFA~\cite{Moss18,Cforall} and its runtime, which is written entirely in \CFA.
     305\CFA is a modern, polymorphic, non-object-oriented\footnote{
     306\CFA has features often associated with object-oriented programming languages, such as constructors, destructors, virtuals and simple inheritance.
     307However, functions \emph{cannot} be nested in structures, so there is no lexical binding between a structure and set of functions (member/method) implemented by an implicit \lstinline@this@ (receiver) parameter.},
     308backwards-compatible extension of the C programming language.
     309In many ways, \CFA is to C as Scala~\cite{Scala} is to Java, providing a \emph{research vehicle} for new typing and control-flow capabilities on top of a highly popular programming language allowing immediate dissemination.
     310Within the \CFA framework, new control-flow features are created from scratch because ISO \Celeven defines only a subset of the \CFA extensions, where the overlapping features are concurrency~\cite[\S~7.26]{C11}.
     311However, \Celeven concurrency is largely wrappers for a subset of the pthreads library~\cite{Butenhof97,Pthreads}, and \Celeven and pthreads concurrency is simple, based on thread fork/join in a function and mutex/condition locks, which is low-level and error-prone;
     312no high-level language concurrency features are defined.
      313Interestingly, almost a decade after publication of the \Celeven standard, neither gcc-8, clang-9 nor msvc-19 (the most recent versions) supports the \Celeven include @threads.h@, indicating little interest in the C11 concurrency approach (possibly because of the effort to add concurrency to \CC).
     314Finally, while the \Celeven standard does not state a threading model, the historical association with pthreads suggests implementations would adopt kernel-level threading (1:1)~\cite{ThreadModel}.
     315
     316In contrast, there has been a renewed interest during the past decade in user-level (M:N, green) threading in old and new programming languages.
      317As multi-processor hardware became available in the 1980/90s, both user and kernel threading were examined.
     318Kernel threading was chosen, largely because of its simplicity and fit with the simpler operating systems and hardware architectures at the time, which gave it a performance advantage~\cite{Drepper03}.
     319Libraries like pthreads were developed for C, and the Solaris operating-system switched from user (JDK 1.1~\cite{JDK1.1}) to kernel threads.
     320As a result, languages like Java, Scala, Objective-C~\cite{obj-c-book}, \CCeleven~\cite{C11}, and C\#~\cite{Csharp} adopt the 1:1 kernel-threading model, with a variety of presentation mechanisms.
     321From 2000 onwards, languages like Go~\cite{Go}, Erlang~\cite{Erlang}, Haskell~\cite{Haskell}, D~\cite{D}, and \uC~\cite{uC++,uC++book} have championed the M:N user-threading model, and many user-threading libraries have appeared~\cite{Qthreads,MPC,Marcel}, including putting green threads back into Java~\cite{Quasar}.
     322The main argument for user-level threading is that it is lighter weight than kernel threading (locking and context switching do not cross the kernel boundary), so there is less restriction on programming styles that encourage large numbers of threads performing medium work units to facilitate load balancing by the runtime~\cite{Verch12}.
     323As well, user-threading facilitates a simpler concurrency approach using thread objects that leverage sequential patterns versus events with call-backs~\cite{Adya02,vonBehren03}.
     324Finally, performant user-threading implementations (both time and space) meet or exceed direct kernel-threading implementations, while achieving the programming advantages of high concurrency levels and safety.
     325
     326A further effort over the past two decades is the development of language memory models to deal with the conflict between language features and compiler/hardware optimizations, \ie some language features are unsafe in the presence of aggressive sequential optimizations~\cite{Buhr95a,Boehm05}.
     327The consequence is that a language must provide sufficient tools to program around safety issues, as inline and library code is all sequential to the compiler.
     328One solution is low-level qualifiers and functions (\eg @volatile@ and atomics) allowing \emph{programmers} to explicitly write safe (race-free~\cite{Boehm12}) programs.
     329A safer solution is high-level language constructs so the \emph{compiler} knows the optimization boundaries, and hence, provides implicit safety.
     330This problem is best known with respect to concurrency, but applies to other complex control-flow, like exceptions\footnote{
     331\CFA exception handling will be presented in a separate paper.
      332The key feature that dovetails with this paper is nonlocal exceptions allowing exceptions to be raised across stacks, with synchronous exceptions raised among coroutines and asynchronous exceptions raised among threads, similar to that in \uC~\cite[\S~5]{uC++}.
      333} and coroutines.
      334Furthermore, language solutions allow matching constructs with the language paradigm, \ie imperative and functional languages often have different presentations of the same concept to fit their programming model.
     335
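
As a concrete illustration of the low-level solution mentioned above (a minimal C11 sketch with hypothetical names, not code from the paper), declaring shared data atomic both removes the data race and tells the compiler where sequential optimization must stop:

        #include <stdatomic.h>
        #include <pthread.h>

        atomic_bool done = 0;                   // atomic accesses are not reordered or cached past each other

        void * worker( void * arg ) {
                // ... perform work ...
                atomic_store( &done, 1 );       // publish completion to other threads
                return NULL;
        }

        int main() {
                pthread_t t;
                pthread_create( &t, NULL, worker, NULL );
                while ( ! atomic_load( &done ) ) {}     // a plain int flag here would be a data race
                pthread_join( t, NULL );
        }
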
     336Finally, it is important for a language to provide safety over performance \emph{as the default}, allowing careful reduction of safety for performance when necessary.
     337Two concurrency violations of this philosophy are \emph{spurious wakeup} (random wakeup~\cite[\S~8]{Buhr05a}) and \emph{barging}\footnote{
     338The notion of competitive succession instead of direct handoff, \ie a lock owner releases the lock and an arriving thread acquires it ahead of preexisting waiter threads.
     339} (signals-as-hints~\cite[\S~8]{Buhr05a}), where one is a consequence of the other, \ie once there is spurious wakeup, signals-as-hints follow.
      340However, spurious wakeup is \emph{not} a foundational concurrency property~\cite[\S~8]{Buhr05a}; it is a performance design choice.
      341Similarly, signals-as-hints are often a performance decision.
      342We argue removing spurious wakeup and signals-as-hints makes concurrent programming significantly safer because it removes local non-determinism and matches programmer expectations.
      343(Author experience teaching concurrency is that students are highly confused by these semantics.)
      344Clawing back performance, when local non-determinism is unimportant, should be an option, not the default.
     345
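
For contrast, a minimal pthreads sketch (hypothetical names, not code from the paper) of the defensive idiom these semantics eliminate: because a wakeup may be spurious, and a barging thread may acquire the lock and falsify the condition first, the waiter must re-check the condition in a loop.

        #include <pthread.h>

        pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
        pthread_cond_t  c = PTHREAD_COND_INITIALIZER;
        int ready = 0;

        void wait_for_ready( void ) {
                pthread_mutex_lock( &m );
                while ( ! ready )                       // "while", not "if": the signal is only a hint
                        pthread_cond_wait( &c, &m );
                pthread_mutex_unlock( &m );
        }
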
     346\begin{comment}
     347Most augmented traditional (Fortran 18~\cite{Fortran18}, Cobol 14~\cite{Cobol14}, Ada 12~\cite{Ada12}, Java 11~\cite{Java11}) and new languages (Go~\cite{Go}, Rust~\cite{Rust}, and D~\cite{D}), except \CC, diverge from C with different syntax and semantics, only interoperate indirectly with C, and are not systems languages, for those with managed memory.
     348As a result, there is a significant learning curve to move to these languages, and C legacy-code must be rewritten.
     349While \CC, like \CFA, takes an evolutionary approach to extend C, \CC's constantly growing complex and interdependent features-set (\eg objects, inheritance, templates, etc.) mean idiomatic \CC code is difficult to use from C, and C programmers must expend significant effort learning \CC.
     350Hence, rewriting and retraining costs for these languages, even \CC, are prohibitive for companies with a large C software-base.
     351\CFA with its orthogonal feature-set, its high-performance runtime, and direct access to all existing C libraries circumvents these problems.
     352\end{comment}
     353
     354\CFA embraces user-level threading, language extensions for advanced control-flow, and safety as the default.
     355We present comparative examples so the reader can judge if the \CFA control-flow extensions are better and safer than those in other concurrent, imperative programming languages, and perform experiments to show the \CFA runtime is competitive with other similar mechanisms.
     356The main contributions of this work are:
     357\begin{itemize}[topsep=3pt,itemsep=1pt]
     358\item
     359language-level generators, coroutines and user-level threading, which respect the expectations of C programmers.
     360\item
      361monitor synchronization without barging, and the ability to safely acquire multiple monitors \emph{simultaneously} (deadlock free), while seamlessly integrating these capabilities with all monitor synchronization mechanisms.
     362\item
     363providing statically type-safe interfaces that integrate with the \CFA polymorphic type-system and other language features.
     364% \item
     365% library extensions for executors, futures, and actors built on the basic mechanisms.
     366\item
     367a runtime system with no spurious wakeup.
     368\item
     369a dynamic partitioning mechanism to segregate the execution environment for specialized requirements.
     370% \item
     371% a non-blocking I/O library
     372\item
     373experimental results showing comparable performance of the new features with similar mechanisms in other programming languages.
     374\end{itemize}
     375
      376Section~\ref{s:StatefulFunction} begins the discussion of advanced control-flow by introducing sequential functions that retain data and execution state between calls, which produces the @generator@ and @coroutine@ constructs.
     377Section~\ref{s:Concurrency} begins concurrency, or how to create (fork) and destroy (join) a thread, which produces the @thread@ construct.
      378Section~\ref{s:MutualExclusionSynchronization} discusses the two mechanisms for restricting nondeterminism when controlling shared access to resources (mutual exclusion) and timing relationships among threads (synchronization).
     379Section~\ref{s:Monitor} shows how both mutual exclusion and synchronization are safely embedded in the @monitor@ and @thread@ constructs.
     380Section~\ref{s:CFARuntimeStructure} describes the large-scale mechanism to structure (cluster) threads and virtual processors (kernel threads).
     381Section~\ref{s:Performance} uses a series of microbenchmarks to compare \CFA threading with pthreads, Java OpenJDK-9, Go 1.12.6 and \uC 7.0.0.
     382
     383
     384\section{Stateful Function}
     385\label{s:StatefulFunction}
     386
     387The stateful function is an old idea~\cite{Conway63,Marlin80} that is new again~\cite{C++20Coroutine19}, where execution is temporarily suspended and later resumed, \eg plugin, device driver, finite-state machine.
     388Hence, a stateful function may not end when it returns to its caller, allowing it to be restarted with the data and execution location present at the point of suspension.
     389This capability is accomplished by retaining a data/execution \emph{closure} between invocations.
     390If the closure is fixed size, we call it a \emph{generator} (or \emph{stackless}), and its control flow is restricted, \eg suspending outside the generator is prohibited.
      391If the closure is variable size, we call it a \emph{coroutine} (or \emph{stackful}), and as the name implies, it is often implemented with a separate stack with no programming restrictions.
     392Hence, refactoring a stackless coroutine may require changing it to stackful.
     393A foundational property of all \emph{stateful functions} is that resume/suspend \emph{do not} cause incremental stack growth, \ie resume/suspend operations are remembered through the closure not the stack.
     394As well, activating a stateful function is \emph{asymmetric} or \emph{symmetric}, identified by resume/suspend (no cycles) and resume/resume (cycles).
     395A fixed closure activated by modified call/return is faster than a variable closure activated by context switching.
     396Additionally, any storage management for the closure (especially in unmanaged languages, \ie no garbage collection) must also be factored into design and performance.
     397Therefore, selecting between stackless and stackful semantics is a tradeoff between programming requirements and performance, where stackless is faster and stackful is more general.
     398Note, creation cost is amortized across usage, so activation cost is usually the dominant factor.
    620399
    621400\begin{figure}
    622401\centering
    623 \newbox\myboxA
    624402\begin{lrbox}{\myboxA}
    625403\begin{cfa}[aboveskip=0pt,belowskip=0pt]
    626 `int f1, f2, state = 1;`   // single global variables
    627 int fib() {
    628         int fn;
    629         `switch ( state )` {  // explicit execution state
    630           case 1: fn = 0;  f1 = fn;  state = 2;  break;
    631           case 2: fn = 1;  f2 = f1;  f1 = fn;  state = 3;  break;
    632           case 3: fn = f1 + f2;  f2 = f1;  f1 = fn;  break;
    633         }
     404typedef struct {
     405        int fn1, fn;
     406} Fib;
     407#define FibCtor { 1, 0 }
     408int fib( Fib * f ) {
     409
     410
     411
     412        int fn = f->fn; f->fn = f->fn1;
     413                f->fn1 = f->fn + fn;
    634414        return fn;
     415
    635416}
    636417int main() {
    637 
    638         for ( int i = 0; i < 10; i += 1 ) {
    639                 printf( "%d\n", fib() );
    640         }
     418        Fib f1 = FibCtor, f2 = FibCtor;
     419        for ( int i = 0; i < 10; i += 1 )
     420                printf( "%d %d\n",
     421                           fib( &f1 ), fib( &f2 ) );
    641422}
    642423\end{cfa}
    643424\end{lrbox}
    644425
    645 \newbox\myboxB
    646426\begin{lrbox}{\myboxB}
    647427\begin{cfa}[aboveskip=0pt,belowskip=0pt]
    648 #define FIB_INIT `{ 0, 1 }`
    649 typedef struct { int f2, f1; } Fib;
    650 int fib( Fib * f ) {
    651 
    652         int ret = f->f2;
    653         int fn = f->f1 + f->f2;
    654         f->f2 = f->f1; f->f1 = fn;
    655 
    656         return ret;
     428`generator` Fib {
     429        int fn1, fn;
     430};
     431
     432void `main(Fib & fib)` with(fib) {
     433
     434        [fn1, fn] = [1, 0];
     435        for () {
     436                `suspend;`
     437                [fn1, fn] = [fn, fn + fn1];
     438
     439        }
    657440}
    658441int main() {
    659         Fib f1 = FIB_INIT, f2 = FIB_INIT;
    660         for ( int i = 0; i < 10; i += 1 ) {
    661                 printf( "%d %d\n", fib( &f1 ), fib( &f2 ) );
     442        Fib f1, f2;
     443        for ( 10 )
     444                sout | `resume( f1 )`.fn
     445                         | `resume( f2 )`.fn;
     446}
     447\end{cfa}
     448\end{lrbox}
     449
     450\begin{lrbox}{\myboxC}
     451\begin{cfa}[aboveskip=0pt,belowskip=0pt]
     452typedef struct {
     453        int fn1, fn;  void * `next`;
     454} Fib;
     455#define FibCtor { 1, 0, NULL }
     456Fib * comain( Fib * f ) {
     457        if ( f->next ) goto *f->next;
     458        f->next = &&s1;
     459        for ( ;; ) {
     460                return f;
     461          s1:; int fn = f->fn + f->fn1;
     462                        f->fn1 = f->fn; f->fn = fn;
    662463        }
    663464}
     465int main() {
     466        Fib f1 = FibCtor, f2 = FibCtor;
     467        for ( int i = 0; i < 10; i += 1 )
     468                printf("%d %d\n",comain(&f1)->fn,
     469                                 comain(&f2)->fn);
     470}
    664471\end{cfa}
    665472\end{lrbox}
    666473
    667 \subfloat[3 States: global variables]{\label{f:GlobalVariables}\usebox\myboxA}
    668 \qquad
    669 \subfloat[1 State: external variables]{\label{f:ExternalState}\usebox\myboxB}
    670 \caption{C Fibonacci Implementations}
    671 \label{f:C-fibonacci}
     474\subfloat[C asymmetric generator]{\label{f:CFibonacci}\usebox\myboxA}
     475\hspace{3pt}
     476\vrule
     477\hspace{3pt}
     478\subfloat[\CFA asymmetric generator]{\label{f:CFAFibonacciGen}\usebox\myboxB}
     479\hspace{3pt}
     480\vrule
     481\hspace{3pt}
     482\subfloat[C generator implementation]{\label{f:CFibonacciSim}\usebox\myboxC}
     483\caption{Fibonacci (output) asymmetric generator}
     484\label{f:FibonacciAsymmetricGenerator}
    672485
    673486\bigskip
    674487
    675 \newbox\myboxA
    676488\begin{lrbox}{\myboxA}
    677489\begin{cfa}[aboveskip=0pt,belowskip=0pt]
    678 `coroutine` Fib { int fn; };
    679 void main( Fib & fib ) with( fib ) {
    680         int f1, f2;
    681         fn = 0;  f1 = fn;  `suspend()`;
    682         fn = 1;  f2 = f1;  f1 = fn;  `suspend()`;
    683         for ( ;; ) {
    684                 fn = f1 + f2;  f2 = f1;  f1 = fn;  `suspend()`;
     490`generator Fmt` {
     491        char ch;
     492        int g, b;
     493};
     494void ?{}( Fmt & fmt ) { `resume(fmt);` } // constructor
     495void ^?{}( Fmt & f ) with(f) { $\C[1.75in]{// destructor}$
     496        if ( g != 0 || b != 0 ) sout | nl; }
     497void `main( Fmt & f )` with(f) {
     498        for () { $\C{// until destructor call}$
      499                for ( g = 0; g < 5; g += 1 ) { $\C{// groups}$
      500                        for ( b = 0; b < 4; b += 1 ) { $\C{// blocks}$
      501                                `suspend;` $\C{// wait for character}$
      502                                while ( ch == '\n' ) `suspend;` // ignore newline
      503                                sout | ch;
     504                        } sout | " ";  // block spacer
     505                } sout | nl; // group newline
    685506        }
    686507}
    687 int next( Fib & fib ) with( fib ) {
    688         `resume( fib );`
    689         return fn;
    690 }
    691508int main() {
    692         Fib f1, f2;
    693         for ( int i = 1; i <= 10; i += 1 ) {
    694                 sout | next( f1 ) | next( f2 ) | endl;
     509        Fmt fmt; $\C{// fmt constructor called}$
     510        for () {
     511                sin | fmt.ch; $\C{// read into generator}$
     512          if ( eof( sin ) ) break;
     513                `resume( fmt );`
    695514        }
    696 }
     515
     516} $\C{// fmt destructor called}\CRT$
    697517\end{cfa}
    698518\end{lrbox}
    699 \newbox\myboxB
     519
    700520\begin{lrbox}{\myboxB}
    701521\begin{cfa}[aboveskip=0pt,belowskip=0pt]
    702 `coroutine` Fib { int ret; };
    703 void main( Fib & f ) with( fib ) {
    704         int fn, f1 = 1, f2 = 0;
     522typedef struct {
     523        void * next;
     524        char ch;
     525        int g, b;
     526} Fmt;
     527void comain( Fmt * f ) {
     528        if ( f->next ) goto *f->next;
     529        f->next = &&s1;
    705530        for ( ;; ) {
    706                 ret = f2;
    707 
    708                 fn = f1 + f2;  f2 = f1;  f1 = fn; `suspend();`
     531                for ( f->g = 0; f->g < 5; f->g += 1 ) {
     532                        for ( f->b = 0; f->b < 4; f->b += 1 ) {
     533                                return;
     534                          s1:;  while ( f->ch == '\n' ) return;
     535                                printf( "%c", f->ch );
     536                        } printf( " " );
     537                } printf( "\n" );
    709538        }
    710539}
    711 int next( Fib & fib ) with( fib ) {
    712         `resume( fib );`
    713         return ret;
    714 }
    715 
    716 
    717 
    718 
    719 
    720 
     540int main() {
     541        Fmt fmt = { NULL };  comain( &fmt ); // prime
     542        for ( ;; ) {
     543                scanf( "%c", &fmt.ch );
     544          if ( feof( stdin ) ) break;
     545                comain( &fmt );
     546        }
     547        if ( fmt.g != 0 || fmt.b != 0 ) printf( "\n" );
     548}
    721549\end{cfa}
    722550\end{lrbox}
    723 \subfloat[3 States, internal variables]{\label{f:Coroutine3States}\usebox\myboxA}
    724 \qquad\qquad
    725 \subfloat[1 State, internal variables]{\label{f:Coroutine1State}\usebox\myboxB}
    726 \caption{\CFA Coroutine Fibonacci Implementations}
    727 \label{f:fibonacci-cfa}
     551
     552\subfloat[\CFA asymmetric generator]{\label{f:CFAFormatGen}\usebox\myboxA}
     553\hspace{3pt}
     554\vrule
     555\hspace{3pt}
     556\subfloat[C generator simulation]{\label{f:CFormatSim}\usebox\myboxB}
     557\hspace{3pt}
     558\caption{Formatter (input) asymmetric generator}
     559\label{f:FormatterAsymmetricGenerator}
    728560\end{figure}
    729561
    730 Using a coroutine, it is possible to express the Fibonacci formula directly without any of the C problems.
    731 Figure~\ref{f:Coroutine3States} creates a @coroutine@ type:
    732 \begin{cfa}
    733 `coroutine` Fib { int fn; };
    734 \end{cfa}
    735 which provides communication, @fn@, for the \newterm{coroutine main}, @main@, which runs on the coroutine stack, and possibly multiple interface routines @next@.
    736 Like the structure in Figure~\ref{f:ExternalState}, the coroutine type allows multiple instances, where instances of this type are passed to the (overloaded) coroutine main.
    737 The coroutine main's stack holds the state for the next generation, @f1@ and @f2@, and the code has the three suspend points, representing the three states in the Fibonacci formula, to context switch back to the caller's resume.
    738 The interface routine @next@, takes a Fibonacci instance and context switches to it using @resume@;
    739 on restart, the Fibonacci field, @fn@, contains the next value in the sequence, which is returned.
    740 The first @resume@ is special because it cocalls the coroutine at its coroutine main and allocates the stack;
    741 when the coroutine main returns, its stack is deallocated.
    742 Hence, @Fib@ is an object at creation, transitions to a coroutine on its first resume, and transitions back to an object when the coroutine main finishes.
    743 Figure~\ref{f:Coroutine1State} shows the coroutine version of the C version in Figure~\ref{f:ExternalState}.
    744 Coroutine generators are called \newterm{output coroutines} because values are only returned.
    745 
    746 Figure~\ref{f:CFAFmt} shows an \newterm{input coroutine}, @Format@, for restructuring text into groups of characters of fixed-size blocks.
    747 For example, the input of the left is reformatted into the output on the right.
    748 \begin{quote}
     562Stateful functions appear as generators, coroutines, and threads, where presentations are based on function objects or pointers~\cite{Butenhof97, C++14, MS:VisualC++, BoostCoroutines15}.
     563For example, Python presents generators as a function object:
     564\begin{python}
     565def Gen():
     566        ... `yield val` ...
     567gen = Gen()
     568for i in range( 10 ):
     569        print( next( gen ) )
     570\end{python}
     571Boost presents coroutines in terms of four functor object-types:
     572\begin{cfa}
     573asymmetric_coroutine<>::pull_type
     574asymmetric_coroutine<>::push_type
     575symmetric_coroutine<>::call_type
     576symmetric_coroutine<>::yield_type
     577\end{cfa}
      578and many languages present threading using function pointers, \eg @pthreads@~\cite{Butenhof97}, \Csharp~\cite{Csharp}, Go~\cite{Go}, and Scala~\cite{Scala}; for example, in pthreads:
     579\begin{cfa}
     580void * rtn( void * arg ) { ... }
     581int i = 3, rc;
     582pthread_t t; $\C{// thread id}$
      584`rc = pthread_create( &t, NULL, rtn, (void *)i );` $\C{// create and initialize task; type-unsafe input parameter}$
     584\end{cfa}
     585% void mycor( pthread_t cid, void * arg ) {
     586%       int * value = (int *)arg;                               $\C{// type unsafe, pointer-size only}$
     587%       // thread body
     588% }
     589% int main() {
     590%       int input = 0, output;
     591%       coroutine_t cid = coroutine_create( &mycor, (void *)&input ); $\C{// type unsafe, pointer-size only}$
     592%       coroutine_resume( cid, (void *)input, (void **)&output ); $\C{// type unsafe, pointer-size only}$
     593% }
     594\CFA's preferred presentation model for generators/coroutines/threads is a hybrid of objects and functions, with an object-oriented flavour.
     595Essentially, the generator/coroutine/thread function is semantically coupled with a generator/coroutine/thread custom type.
      596The custom type solves several of these issues, while still allowing access to the underlying mechanisms it uses.
     597
     598
     599\subsection{Generator}
     600
     601Stackless generators have the potential to be very small and fast, \ie as small and fast as function call/return for both creation and execution.
     602The \CFA goal is to achieve this performance target, possibly at the cost of some semantic complexity.
     603A series of different kinds of generators and their implementation demonstrate how this goal is accomplished.
     604
     605Figure~\ref{f:FibonacciAsymmetricGenerator} shows an unbounded asymmetric generator for an infinite sequence of Fibonacci numbers written in C and \CFA, with a simple C implementation for the \CFA version.
     606This generator is an \emph{output generator}, producing a new result on each resumption.
     607To compute Fibonacci, the previous two values in the sequence are retained to generate the next value, \ie @fn1@ and @fn@, plus the execution location where control restarts when the generator is resumed, \ie top or middle.
     608An additional requirement is the ability to create an arbitrary number of generators (of any kind), \ie retaining one state in global variables is insufficient;
     609hence, state is retained in a closure between calls.
     610Figure~\ref{f:CFibonacci} shows the C approach of manually creating the closure in structure @Fib@, and multiple instances of this closure provide multiple Fibonacci generators.
     611The C version only has the middle execution state because the top execution state is declaration initialization.
     612Figure~\ref{f:CFAFibonacciGen} shows the \CFA approach, which also has a manual closure, but replaces the structure with a custom \CFA @generator@ type.
     613This generator type is then connected to a function that \emph{must be named \lstinline|main|},\footnote{
     614The name \lstinline|main| has special meaning in C, specifically the function where a program starts execution.
     615Hence, overloading this name for other starting points (generator/coroutine/thread) is a logical extension.}
      616called a \emph{generator main}, which takes as its only parameter a reference to the generator type.
      617The generator main contains @suspend@ statements that suspend execution without ending the generator, unlike @return@.
     618For the Fibonacci generator-main,\footnote{
     619The \CFA \lstinline|with| opens an aggregate scope making its fields directly accessible, like Pascal \lstinline|with|, but using parallel semantics.
     620Multiple aggregates may be opened.}
     621the top initialization state appears at the start and the middle execution state is denoted by statement @suspend@.
     622Any local variables in @main@ \emph{are not retained} between calls;
     623hence local variables are only for temporary computations \emph{between} suspends.
     624All retained state \emph{must} appear in the generator's type.
      625As well, generator code containing a @suspend@ cannot be refactored into a helper function called by the generator, because @suspend@ is implemented via @return@, so a return from the helper goes back to the current generator rather than to the resumer.
     626The generator is started by calling function @resume@ with a generator instance, which begins execution at the top of the generator main, and subsequent @resume@ calls restart the generator at its point of last suspension.
     627Resuming an ended (returned) generator is undefined.
     628Function @resume@ returns its argument generator so it can be cascaded in an expression, in this case to print the next Fibonacci value @fn@ computed in the generator instance.
      629Figure~\ref{f:CFibonacciSim} shows that the C implementation of the \CFA generator needs only one additional field, @next@, to retain the execution state.
      630The computed @goto@ at the start of the generator main, which branches to just after the previous suspend, adds very little cost to the resume call.
     631Finally, an explicit generator type provides both design and performance benefits, such as multiple type-safe interface functions taking and returning arbitrary types.\footnote{
      632The \CFA operator syntax uses \lstinline|?| to denote operands, which allows precise definitions for pre, post, and infix operators, \eg \lstinline|++?|, \lstinline|?++|, and \lstinline|?+?|; in addition, \lstinline|?\{\}| denotes a constructor, as in \lstinline|foo `f` = `\{`...`\}`|, \lstinline|^?\{\}| denotes a destructor, and \lstinline|?()| is the \CC function-call operator \lstinline|operator()|.
     633}%
     634\begin{cfa}
     635int ?()( Fib & fib ) { return `resume( fib )`.fn; } $\C[3.9in]{// function-call interface}$
     636int ?()( Fib & fib, int N ) { for ( N - 1 ) `fib()`; return `fib()`; } $\C{// use function-call interface to skip N values}$
     637double ?()( Fib & fib ) { return (int)`fib()` / 3.14159; } $\C{// different return type, cast prevents recursive call}\CRT$
     638sout | (int)f1() | (double)f1() | f2( 2 ); // alternative interface, cast selects call based on return type, step 2 values
     639\end{cfa}
      640Now, the generator can be a separately compiled opaque type, accessed only through its interface functions.
      641For contrast, Figure~\ref{f:PythonFibonacci} shows the equivalent Python Fibonacci generator, which does not use a generator type and hence has only a single interface, but an implicit closure.
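To see why the refactoring restriction exists, consider a hypothetical attempt, sketched in C against the implementation in Figure~\ref{f:CFibonacciSim}, to move the suspend into a helper function.
The helper can only express the suspend as a @return@, but that @return@ pops just the helper's frame, so control lands back in @comain@ rather than in the resumer.
\begin{cfa}
#include <stdio.h>

typedef struct { int fn1, fn; void * next; } Fib;
#define FibCtor { 1, 0, NULL }

void helper( Fib * f ) {   // hypothetical refactoring of the suspend
	(void)f;
	return;                // pops only helper's frame; this is not a suspend
}

Fib * comain( Fib * f ) {
	if ( f->next ) goto *f->next;
	f->next = &&s1;
	for ( ;; ) {
		helper( f );       // intended suspend point, but does not suspend
		printf( "back in comain, not in the resumer\n" );
		return f;          // the only real suspend is this return
	  s1:; int fn = f->fn + f->fn1;
		f->fn1 = f->fn; f->fn = fn;
	}
}

int main() {
	Fib f = FibCtor;
	for ( int i = 0; i < 3; i += 1 )
		printf( "%d\n", comain( &f )->fn );
}
\end{cfa}
A stackful coroutine removes this restriction because its suspend context switches off a retained stack instead of returning (Section~\ref{s:Coroutine}).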
     642
     643Having to manually create the generator closure by moving local-state variables into the generator type is an additional programmer burden.
     644(This restriction is removed by the coroutine in Section~\ref{s:Coroutine}.)
     645This requirement follows from the generality of variable-size local-state, \eg local state with a variable-length array requires dynamic allocation because the array size is unknown at compile time.
     646However, dynamic allocation significantly increases the cost of generator creation/destruction and is a showstopper for embedded real-time programming.
     647But more importantly, the size of the generator type is tied to the local state in the generator main, which precludes separate compilation of the generator main, \ie a generator must be inlined or local state must be dynamically allocated.
     648With respect to safety, we believe static analysis can discriminate local state from temporary variables in a generator, \ie variable usage spanning @suspend@, and generate a compile-time error.
     649Finally, our current experience is that most generator problems have simple data state, including local state, but complex execution state, so the burden of creating the generator type is small.
     650As well, C programmers are not afraid of this kind of semantic programming requirement, if it results in very small, fast generators.
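As an example of the variable-size local-state problem, the following C sketch (a hypothetical @Avg@ generator, not from the paper) retains an @n@-element window whose size is known only at runtime, so the closure cannot be a fixed-size structure and creation/destruction must dynamically allocate and free it.
\begin{cfa}
#include <stdio.h>
#include <stdlib.h>

typedef struct {
	int n, i, sum;
	int win[];                        // variable-size local state
} Avg;                                // running average over a sliding window

Avg * AvgCtor( int n ) {              // creation now includes an allocation
	Avg * a = calloc( 1, sizeof(Avg) + n * sizeof(int) );
	a->n = n;
	return a;
}

int avg( Avg * a, int v ) {           // resume: slide window, return average
	a->sum += v - a->win[a->i % a->n];
	a->win[a->i % a->n] = v;
	a->i += 1;
	return a->sum / ( a->i < a->n ? a->i : a->n );
}

int main() {
	Avg * a = AvgCtor( 4 );           // closure size depends on runtime value
	for ( int v = 1; v <= 8; v += 1 ) printf( "%d ", avg( a, v ) );
	printf( "\n" );
	free( a );                        // destruction cost
}
\end{cfa}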
     651
      652Figure~\ref{f:CFAFormatGen} shows an asymmetric \newterm{input generator}, @Fmt@, for restructuring text into groups of fixed-size blocks of characters, \ie the input on the left is reformatted into the output on the right, where newlines are ignored.
     653\begin{center}
    749654\tt
    750655\begin{tabular}{@{}l|l@{}}
    751656\multicolumn{1}{c|}{\textbf{\textrm{input}}} & \multicolumn{1}{c}{\textbf{\textrm{output}}} \\
    752 abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz
     657\begin{tabular}[t]{@{}ll@{}}
     658abcdefghijklmnopqrstuvwxyz \\
     659abcdefghijklmnopqrstuvwxyz
     660\end{tabular}
    753661&
    754662\begin{tabular}[t]{@{}lllll@{}}
     
    758666\end{tabular}
    759667\end{tabular}
    760 \end{quote}
    761 The example takes advantage of resuming a coroutine in the constructor to prime the loops so the first character sent for formatting appears inside the nested loops.
    762 The destruction provides a newline if formatted text ends with a full line.
    763 Figure~\ref{f:CFmt} shows the C equivalent formatter, where the loops of the coroutine are flatten (linearized) and rechecked on each call because execution location is not retained between calls.
     668\end{center}
     669The example takes advantage of resuming a generator in the constructor to prime the loops so the first character sent for formatting appears inside the nested loops.
      670The destructor provides a newline if the formatted text ends with a partial line.
     671Figure~\ref{f:CFormatSim} shows the C implementation of the \CFA input generator with one additional field and the computed @goto@.
     672For contrast, Figure~\ref{f:PythonFormatter} shows the equivalent Python format generator with the same properties as the Fibonacci generator.
     673
      674Figure~\ref{f:DeviceDriverGen} shows a \emph{killer} application for an asymmetric generator: a device driver, because device drivers caused 70\%-85\% of failures in Windows/Linux~\cite{Swift05}.
      675Device drivers follow the pattern of simple data state but complex execution state, \ie a finite state-machine (FSM) parsing a protocol.
     676For example, the following protocol:
     677\begin{center}
     678\ldots\, STX \ldots\, message \ldots\, ESC ETX \ldots\, message \ldots\, ETX 2-byte crc \ldots
     679\end{center}
     680is a network message beginning with the control character STX, ending with an ETX, and followed by a 2-byte cyclic-redundancy check.
     681Control characters may appear in a message if preceded by an ESC.
     682When a message byte arrives, it triggers an interrupt, and the operating system services the interrupt by calling the device driver with the byte read from a hardware register.
     683The device driver returns a status code of its current state, and when a complete message is obtained, the operating system knows the message is in the message buffer.
     684Hence, the device driver is an input/output generator.
     685
     686Note, the cost of creating and resuming the device-driver generator, @Driver@, is virtually identical to call/return, so performance in an operating-system kernel is excellent.
      687As well, the data state is small, where variables @byte@ and @msg@ are communication variables for passing in message bytes and returning the message, and variables @lnth@, @crc@, and @sum@ are local variables that must be retained between calls and are manually hoisted into the generator type.
     688% Manually, detecting and hoisting local-state variables is easy when the number is small.
     689In contrast, the execution state is large, with one @resume@ and seven @suspend@s.
     690Hence, the key benefits of the generator are correctness, safety, and maintenance because the execution states are transcribed directly into the programming language rather than using a table-driven approach.
     691Because FSMs can be complex and frequently occur in important domains, direct generator support is important in a system programming language.
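For illustration, the following simplified C sketch encodes such a protocol FSM with the computed-@goto@ technique of Figure~\ref{f:CFibonacciSim}; ESC handling, length checking, and the 16-bit CRC are elided (a 1-byte sum stands in), and all names are illustrative rather than the paper's API.
\begin{cfa}
#include <stdio.h>

enum { STX = 0x02, ETX = 0x03, MaxMsg = 64 };
typedef enum { CONT, MSG } Status;
typedef struct {
	Status status; unsigned char byte;
	unsigned char msg[MaxMsg + 1];
	unsigned int lnth, sum;
	void * next;                              // retained execution state
} Driver;

Status next( Driver * d, unsigned char b ) {  // one call per arriving byte
	d->byte = b;
	if ( d->next ) goto *d->next;
	for ( ;; ) {
		d->status = CONT; d->lnth = 0; d->sum = 0;
		while ( d->byte != STX ) {            // scan for message start
			d->next = &&s1; return d->status; s1:; }
		for ( ;; ) {                          // accumulate message bytes
			d->next = &&s2; return d->status; s2:;
			if ( d->byte == ETX ) break;
			d->msg[d->lnth++] = d->byte; d->sum += d->byte;
		}
		d->msg[d->lnth] = '\0';               // terminate string
		d->next = &&s3; return d->status; s3:;  // wait for checksum byte
		d->status = d->byte == (d->sum & 0xFF) ? MSG : CONT;
		d->next = &&s4; return d->status; s4:;  // deliver, then restart
	}
}

int main() {                                  // stand-in for the interrupt loop
	unsigned char in[] = { 'x', STX, 'h', 'i', ETX, ('h' + 'i') & 0xFF };
	Driver d = { .next = NULL };
	for ( unsigned int i = 0; i < sizeof(in); i += 1 )
		if ( next( &d, in[i] ) == MSG ) printf( "msg: %s\n", (char *)d.msg );
}
\end{cfa}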
    764692
    765693\begin{figure}
     
    767695\newbox\myboxA
    768696\begin{lrbox}{\myboxA}
     697\begin{python}[aboveskip=0pt,belowskip=0pt]
     698def Fib():
     699        fn1, fn = 0, 1
     700        while True:
     701                `yield fn1`
     702                fn1, fn = fn, fn1 + fn
     703f1 = Fib()
     704f2 = Fib()
     705for i in range( 10 ):
     706        print( next( f1 ), next( f2 ) )
     707
     708
     709
     710
     711
     712
     713\end{python}
     714\end{lrbox}
     715
     716\newbox\myboxB
     717\begin{lrbox}{\myboxB}
     718\begin{python}[aboveskip=0pt,belowskip=0pt]
     719def Fmt():
     720        try:
     721                while True:
     722                        for g in range( 5 ):
     723                                for b in range( 4 ):
     724                                        print( `(yield)`, end='' )
     725                                print( '  ', end='' )
     726                        print()
     727        except GeneratorExit:
     728                if g != 0 | b != 0:
     729                        print()
     730fmt = Fmt()
     731`next( fmt )`                    # prime, next prewritten
     732for i in range( 41 ):
      733        `fmt.send( 'a' )`      # send to yield
     734\end{python}
     735\end{lrbox}
     736\subfloat[Fibonacci]{\label{f:PythonFibonacci}\usebox\myboxA}
     737\hspace{3pt}
     738\vrule
     739\hspace{3pt}
     740\subfloat[Formatter]{\label{f:PythonFormatter}\usebox\myboxB}
     741\caption{Python generator}
     742\label{f:PythonGenerator}
     743
     744\bigskip
     745
     746\begin{tabular}{@{}l|l@{}}
    769747\begin{cfa}[aboveskip=0pt,belowskip=0pt]
    770 `coroutine` Format {
    771         char ch;   // used for communication
    772         int g, b;  // global because used in destructor
     748enum Status { CONT, MSG, ESTX,
     749                                ELNTH, ECRC };
     750`generator` Driver {
     751        Status status;
     752        unsigned char byte, * msg; // communication
     753        unsigned int lnth, sum;      // local state
     754        unsigned short int crc;
    773755};
    774 void main( Format & fmt ) with( fmt ) {
    775         for ( ;; ) {
    776                 for ( g = 0; g < 5; g += 1 ) {      // group
    777                         for ( b = 0; b < 4; b += 1 ) { // block
    778                                 `suspend();`
    779                                 sout | ch;              // separator
     756void ?{}( Driver & d, char * m ) { d.msg = m; }
     757Status next( Driver & d, char b ) with( d ) {
     758        byte = b; `resume( d );` return status;
     759}
     760void main( Driver & d ) with( d ) {
     761        enum { STX = '\002', ESC = '\033',
     762                        ETX = '\003', MaxMsg = 64 };
     763  msg: for () { // parse message
     764                status = CONT;
     765                lnth = 0; sum = 0;
     766                while ( byte != STX ) `suspend;`
     767          emsg: for () {
     768                        `suspend;` // process byte
     769\end{cfa}
     770&
     771\begin{cfa}[aboveskip=0pt,belowskip=0pt]
     772                        choose ( byte ) { // switch with implicit break
     773                          case STX:
     774                                status = ESTX; `suspend;` continue msg;
     775                          case ETX:
     776                                break emsg;
     777                          case ESC:
     778                                `suspend;`
    780779                        }
    781                         sout | "  ";               // separator
     780                        if ( lnth >= MaxMsg ) { // buffer full ?
     781                                status = ELNTH; `suspend;` continue msg; }
     782                        msg[lnth++] = byte;
     783                        sum += byte;
    782784                }
    783                 sout | endl;
     785                msg[lnth] = '\0'; // terminate string
     786                `suspend;`
     787                crc = byte << 8;
     788                `suspend;`
     789                status = (crc | byte) == sum ? MSG : ECRC;
     790                `suspend;`
    784791        }
    785792}
    786 void ?{}( Format & fmt ) { `resume( fmt );` }
    787 void ^?{}( Format & fmt ) with( fmt ) {
    788         if ( g != 0 || b != 0 ) sout | endl;
    789 }
    790 void format( Format & fmt ) {
    791         `resume( fmt );`
     793\end{cfa}
     794\end{tabular}
     795\caption{Device-driver generator for communication protocol}
     796\label{f:DeviceDriverGen}
     797\end{figure}
     798
     799Figure~\ref{f:CFAPingPongGen} shows a symmetric generator, where the generator resumes another generator, forming a resume/resume cycle.
     800(The trivial cycle is a generator resuming itself.)
     801This control flow is similar to recursion for functions but without stack growth.
     802The steps for symmetric control-flow are creating, executing, and terminating the cycle.
      803Constructing the cycle must deal with definition-before-use to close the cycle, \ie the first generator must know about the last generator, which is not yet in scope.
     804(This issue occurs for any cyclic data structure.)
     805% The example creates all the generators and then assigns the partners that form the cycle.
     806% Alternatively, the constructor can assign the partners as they are declared, except the first, and the first-generator partner is set after the last generator declaration to close the cycle.
     807Once the cycle is formed, the program main resumes one of the generators, and the generators can then traverse an arbitrary cycle using @resume@ to activate partner generator(s).
     808Terminating the cycle is accomplished by @suspend@ or @return@, both of which go back to the stack frame that started the cycle (program main in the example).
     809The starting stack-frame is below the last active generator because the resume/resume cycle does not grow the stack.
      810Also, since local variables are not retained in the generator function, it contains no objects with destructors that must be called, so the cost is the same as a function return.
     811Destructor cost occurs when the generator instance is deallocated, which is easily controlled by the programmer.
     812
     813Figure~\ref{f:CPingPongSim} shows the implementation of the symmetric generator, where the complexity is the @resume@, which needs an extension to the calling convention to perform a forward rather than backward jump.
      814This jump starts at the top of the next generator main and re-executes the normal calling convention to make space on the stack for its local variables.
     815However, before the jump, the caller must reset its stack (and any registers) equivalent to a @return@, but subsequently jump forward.
     816This semantics is basically a tail-call optimization, which compilers already perform.
     817The example shows the assembly code to undo the generator's entry code before the direct jump.
     818This assembly code depends on what entry code is generated, specifically if there are local variables and the level of optimization.
     819To provide this new calling convention requires a mechanism built into the compiler, which is beyond the scope of \CFA at this time.
     820Nevertheless, it is possible to hand generate any symmetric generators for proof of concept and performance testing.
     821A compiler could also eliminate other artifacts in the generator simulation to further increase performance, \eg LLVM has various coroutine support~\cite{CoroutineTS}, and \CFA can leverage this support should it fork @clang@.
     822
     823\begin{figure}
     824\centering
     825\begin{lrbox}{\myboxA}
     826\begin{cfa}[aboveskip=0pt,belowskip=0pt]
     827`generator PingPong` {
     828        const char * name;
     829        int N;
     830        int i;                          // local state
     831        PingPong & partner; // rebindable reference
     832};
     833
     834void `main( PingPong & pp )` with(pp) {
     835        for ( ; i < N; i += 1 ) {
     836                sout | name | i;
     837                `resume( partner );`
     838        }
    792839}
    793840int main() {
    794         Format fmt;
    795         eof: for ( ;; ) {
    796                 sin | fmt.ch;
    797           if ( eof( sin ) ) break eof;
    798                 format( fmt );
     841        enum { N = 5 };
     842        PingPong ping = {"ping",N,0}, pong = {"pong",N,0};
     843        &ping.partner = &pong;  &pong.partner = &ping;
     844        `resume( ping );`
     845}
     846\end{cfa}
     847\end{lrbox}
     848
     849\begin{lrbox}{\myboxB}
     850\begin{cfa}[escapechar={},aboveskip=0pt,belowskip=0pt]
     851typedef struct PingPong {
     852        const char * name;
     853        int N, i;
     854        struct PingPong * partner;
     855        void * next;
     856} PingPong;
     857#define PPCtor(name, N) {name,N,0,NULL,NULL}
     858void comain( PingPong * pp ) {
     859        if ( pp->next ) goto *pp->next;
     860        pp->next = &&cycle;
     861        for ( ; pp->i < pp->N; pp->i += 1 ) {
     862                printf( "%s %d\n", pp->name, pp->i );
      863                asm( "mov  %0,%%rdi" : : "m" (pp->partner) );
     864                asm( "mov  %rdi,%rax" );
     865                asm( "popq %rbx" );
     866                asm( "jmp  comain" );
     867          cycle: ;
     868        }
     869}
     870\end{cfa}
     871\end{lrbox}
     872
     873\subfloat[\CFA symmetric generator]{\label{f:CFAPingPongGen}\usebox\myboxA}
     874\hspace{3pt}
     875\vrule
     876\hspace{3pt}
     877\subfloat[C generator simulation]{\label{f:CPingPongSim}\usebox\myboxB}
     878\hspace{3pt}
     879\caption{Ping-Pong symmetric generator}
     880\label{f:PingPongSymmetricGenerator}
     881\end{figure}
     882
     883Finally, part of this generator work was inspired by the recent \CCtwenty generator proposal~\cite{C++20Coroutine19} (which they call coroutines).
     884Our work provides the same high-performance asymmetric generators as \CCtwenty, and extends their work with symmetric generators.
     885An additional \CCtwenty generator feature allows @suspend@ and @resume@ to be followed by a restricted compound statement that is executed after the current generator has reset its stack but before calling the next generator, specified with \CFA syntax:
     886\begin{cfa}
     887... suspend`{ ... }`;
     888... resume( C )`{ ... }` ...
     889\end{cfa}
     890Since the current generator's stack is released before calling the compound statement, the compound statement can only reference variables in the generator's type.
     891This feature is useful when a generator is used in a concurrent context to ensure it is stopped before releasing a lock in the compound statement, which might immediately allow another thread to resume the generator.
     892Hence, this mechanism provides a general and safe handoff of the generator among competing threads.
     893
     894
     895\subsection{Coroutine}
     896\label{s:Coroutine}
     897
     898Stackful coroutines extend generator semantics, \ie there is an implicit closure and @suspend@ may appear in a helper function called from the coroutine main.
     899A coroutine is specified by replacing @generator@ with @coroutine@ for the type.
      900Coroutine generality results in higher cost for creation (dynamic stack allocation), execution (context switching among stacks), and termination (possible stack unwinding and dynamic stack deallocation).
     901A series of different kinds of coroutines and their implementations demonstrate how coroutines extend generators.
     902
     903First, the previous generator examples are converted to their coroutine counterparts, allowing local-state variables to be moved from the generator type into the coroutine main.
     904\begin{description}
     905\item[Fibonacci]
     906Move the declaration of @fn1@ to the start of coroutine main.
     907\begin{cfa}[xleftmargin=0pt]
     908void main( Fib & fib ) with(fib) {
     909        `int fn1;`
     910\end{cfa}
     911\item[Formatter]
     912Move the declaration of @g@ and @b@ to the for loops in the coroutine main.
     913\begin{cfa}[xleftmargin=0pt]
     914for ( `g`; 5 ) {
     915        for ( `b`; 4 ) {
     916\end{cfa}
     917\item[Device Driver]
     918Move the declaration of @lnth@ and @sum@ to their points of initialization.
     919\begin{cfa}[xleftmargin=0pt]
     920        status = CONT;
     921        `unsigned int lnth = 0, sum = 0;`
     922        ...
     923        `unsigned short int crc = byte << 8;`
     924\end{cfa}
     925\item[PingPong]
     926Move the declaration of @i@ to the for loop in the coroutine main.
     927\begin{cfa}[xleftmargin=0pt]
     928void main( PingPong & pp ) with(pp) {
     929        for ( `i`; N ) {
     930\end{cfa}
     931\end{description}
     932It is also possible to refactor code containing local-state and @suspend@ statements into a helper function, like the computation of the CRC for the device driver.
     933\begin{cfa}
     934unsigned int Crc() {
     935        `suspend;`
     936        unsigned short int crc = byte << 8;
     937        `suspend;`
     938        status = (crc | byte) == sum ? MSG : ECRC;
     939        return crc;
     940}
     941\end{cfa}
     942A call to this function is placed at the end of the driver's coroutine-main.
     943For complex finite-state machines, refactoring is part of normal program abstraction, especially when code is used in multiple places.
     944Again, this complexity is usually associated with execution state rather than data state.
     945
     946\begin{comment}
     947Figure~\ref{f:Coroutine3States} creates a @coroutine@ type, @`coroutine` Fib { int fn; }@, which provides communication, @fn@, for the \newterm{coroutine main}, @main@, which runs on the coroutine stack, and possibly multiple interface functions, \eg @next@.
     948Like the structure in Figure~\ref{f:ExternalState}, the coroutine type allows multiple instances, where instances of this type are passed to the (overloaded) coroutine main.
     949The coroutine main's stack holds the state for the next generation, @f1@ and @f2@, and the code represents the three states in the Fibonacci formula via the three suspend points, to context switch back to the caller's @resume@.
     950The interface function @next@, takes a Fibonacci instance and context switches to it using @resume@;
     951on restart, the Fibonacci field, @fn@, contains the next value in the sequence, which is returned.
     952The first @resume@ is special because it allocates the coroutine stack and cocalls its coroutine main on that stack;
     953when the coroutine main returns, its stack is deallocated.
     954Hence, @Fib@ is an object at creation, transitions to a coroutine on its first resume, and transitions back to an object when the coroutine main finishes.
     955Figure~\ref{f:Coroutine1State} shows the coroutine version of the C version in Figure~\ref{f:ExternalState}.
     956Coroutine generators are called \newterm{output coroutines} because values are only returned.
     957
     958\begin{figure}
     959\centering
     960\newbox\myboxA
     961% \begin{lrbox}{\myboxA}
     962% \begin{cfa}[aboveskip=0pt,belowskip=0pt]
     963% `int fn1, fn2, state = 1;`   // single global variables
     964% int fib() {
     965%       int fn;
     966%       `switch ( state )` {  // explicit execution state
     967%         case 1: fn = 0;  fn1 = fn;  state = 2;  break;
     968%         case 2: fn = 1;  fn2 = fn1;  fn1 = fn;  state = 3;  break;
     969%         case 3: fn = fn1 + fn2;  fn2 = fn1;  fn1 = fn;  break;
     970%       }
     971%       return fn;
     972% }
     973% int main() {
     974%
     975%       for ( int i = 0; i < 10; i += 1 ) {
     976%               printf( "%d\n", fib() );
     977%       }
     978% }
     979% \end{cfa}
     980% \end{lrbox}
     981\begin{lrbox}{\myboxA}
     982\begin{cfa}[aboveskip=0pt,belowskip=0pt]
     983#define FibCtor { 0, 1 }
     984typedef struct { int fn1, fn; } Fib;
     985int fib( Fib * f ) {
     986
     987        int ret = f->fn1;
     988        f->fn1 = f->fn;
     989        f->fn = ret + f->fn;
     990        return ret;
     991}
     992
     993
     994
     995int main() {
     996        Fib f1 = FibCtor, f2 = FibCtor;
     997        for ( int i = 0; i < 10; i += 1 ) {
     998                printf( "%d %d\n",
     999                                fib( &f1 ), fib( &f2 ) );
    7991000        }
    8001001}
     
    8051006\begin{lrbox}{\myboxB}
    8061007\begin{cfa}[aboveskip=0pt,belowskip=0pt]
    807 struct Format {
    808         char ch;
    809         int g, b;
    810 };
    811 void format( struct Format * fmt ) {
    812         if ( fmt->ch != -1 ) {      // not EOF ?
    813                 printf( "%c", fmt->ch );
    814                 fmt->b += 1;
    815                 if ( fmt->b == 4 ) {  // block
    816                         printf( "  " );      // separator
    817                         fmt->b = 0;
    818                         fmt->g += 1;
    819                 }
    820                 if ( fmt->g == 5 ) {  // group
    821                         printf( "\n" );     // separator
    822                         fmt->g = 0;
    823                 }
    824         } else {
    825                 if ( fmt->g != 0 || fmt->b != 0 ) printf( "\n" );
     1008`coroutine` Fib { int fn1; };
     1009void main( Fib & fib ) with( fib ) {
     1010        int fn;
     1011        [fn1, fn] = [0, 1];
     1012        for () {
     1013                `suspend;`
     1014                [fn1, fn] = [fn, fn1 + fn];
    8261015        }
    8271016}
     1017int ?()( Fib & fib ) with( fib ) {
     1018        return `resume( fib )`.fn1;
     1019}
    8281020int main() {
    829         struct Format fmt = { 0, 0, 0 };
    830         for ( ;; ) {
    831                 scanf( "%c", &fmt.ch );
    832           if ( feof( stdin ) ) break;
    833                 format( &fmt );
    834         }
    835         fmt.ch = -1;
    836         format( &fmt );
    837 }
     1021        Fib f1, f2;
     1022        for ( 10 ) {
     1023                sout | f1() | f2();
     1024}
     1025
     1026
    8381027\end{cfa}
    8391028\end{lrbox}
    840 \subfloat[\CFA Coroutine]{\label{f:CFAFmt}\usebox\myboxA}
     1029
     1030\newbox\myboxC
     1031\begin{lrbox}{\myboxC}
     1032\begin{python}[aboveskip=0pt,belowskip=0pt]
     1033
     1034def Fib():
     1035
     1036        fn1, fn = 0, 1
     1037        while True:
     1038                `yield fn1`
     1039                fn1, fn = fn, fn1 + fn
     1040
     1041
     1042// next prewritten
     1043
     1044
     1045f1 = Fib()
     1046f2 = Fib()
     1047for i in range( 10 ):
     1048        print( next( f1 ), next( f2 ) )
     1049
     1050
     1051
     1052\end{python}
     1053\end{lrbox}
     1054
     1055\subfloat[C]{\label{f:GlobalVariables}\usebox\myboxA}
     1056\hspace{3pt}
     1057\vrule
     1058\hspace{3pt}
     1059\subfloat[\CFA]{\label{f:ExternalState}\usebox\myboxB}
     1060\hspace{3pt}
     1061\vrule
     1062\hspace{3pt}
     1063\subfloat[Python]{\label{f:ExternalState}\usebox\myboxC}
     1064\caption{Fibonacci generator}
     1065\label{f:C-fibonacci}
     1066\end{figure}
     1067
     1068\bigskip
     1069
     1070\newbox\myboxA
     1071\begin{lrbox}{\myboxA}
     1072\begin{cfa}[aboveskip=0pt,belowskip=0pt]
     1073`coroutine` Fib { int fn; };
     1074void main( Fib & fib ) with( fib ) {
     1075        fn = 0;  int fn1 = fn; `suspend`;
     1076        fn = 1;  int fn2 = fn1;  fn1 = fn; `suspend`;
     1077        for () {
     1078                fn = fn1 + fn2; fn2 = fn1; fn1 = fn; `suspend`; }
     1079}
     1080int next( Fib & fib ) with( fib ) { `resume( fib );` return fn; }
     1081int main() {
     1082        Fib f1, f2;
     1083        for ( 10 )
     1084                sout | next( f1 ) | next( f2 );
     1085}
     1086\end{cfa}
     1087\end{lrbox}
     1088\newbox\myboxB
     1089\begin{lrbox}{\myboxB}
     1090\begin{python}[aboveskip=0pt,belowskip=0pt]
     1091
     1092def Fibonacci():
     1093        fn = 0; fn1 = fn; `yield fn`  # suspend
     1094        fn = 1; fn2 = fn1; fn1 = fn; `yield fn`
     1095        while True:
     1096                fn = fn1 + fn2; fn2 = fn1; fn1 = fn; `yield fn`
     1097
     1098
     1099f1 = Fibonacci()
     1100f2 = Fibonacci()
     1101for i in range( 10 ):
     1102        print( `next( f1 )`, `next( f2 )` ) # resume
     1103
     1104\end{python}
     1105\end{lrbox}
     1106\subfloat[\CFA]{\label{f:Coroutine3States}\usebox\myboxA}
    8411107\qquad
    842 \subfloat[C Linearized]{\label{f:CFmt}\usebox\myboxB}
    843 \caption{Formatting text into lines of 5 blocks of 4 characters.}
    844 \label{f:fmt-line}
     1108\subfloat[Python]{\label{f:Coroutine1State}\usebox\myboxB}
     1109\caption{Fibonacci input coroutine, 3 states, internal variables}
     1110\label{f:cfa-fibonacci}
    8451111\end{figure}
    846 
    847 The previous examples are \newterm{asymmetric (semi) coroutine}s because one coroutine always calls a resuming routine for another coroutine, and the resumed coroutine always suspends back to its last resumer, similar to call/return for normal routines
    848 However, there is no stack growth because @resume@/@suspend@ context switch to existing stack-frames rather than create new ones.
    849 \newterm{Symmetric (full) coroutine}s have a coroutine call a resuming routine for another coroutine, which eventually forms a resuming-call cycle.
    850 (The trivial cycle is a coroutine resuming itself.)
    851 This control flow is similar to recursion for normal routines, but again there is no stack growth from the context switch.
     1112\end{comment}
    8521113
    8531114\begin{figure}
     
    8571118\begin{cfa}
    8581119`coroutine` Prod {
    859         Cons & c;
     1120        Cons & c;                       // communication
    8601121        int N, money, receipt;
    8611122};
    8621123void main( Prod & prod ) with( prod ) {
    8631124        // 1st resume starts here
    864         for ( int i = 0; i < N; i += 1 ) {
     1125        for ( i; N ) {
    8651126                int p1 = random( 100 ), p2 = random( 100 );
    866                 sout | p1 | " " | p2 | endl;
     1127                sout | p1 | " " | p2;
    8671128                int status = delivery( c, p1, p2 );
    868                 sout | " $" | money | endl | status | endl;
     1129                sout | " $" | money | nl | status;
    8691130                receipt += 1;
    8701131        }
    8711132        stop( c );
    872         sout | "prod stops" | endl;
     1133        sout | "prod stops";
    8731134}
    8741135int payment( Prod & prod, int money ) {
     
    8911152\begin{cfa}
    8921153`coroutine` Cons {
    893         Prod & p;
     1154        Prod & p;                       // communication
    8941155        int p1, p2, status;
    895         _Bool done;
     1156        bool done;
    8961157};
    8971158void ?{}( Cons & cons, Prod & p ) {
    898         &cons.p = &p;
     1159        &cons.p = &p; // reassignable reference
    8991160        cons.[status, done ] = [0, false];
    9001161}
    901 void ^?{}( Cons & cons ) {}
    9021162void main( Cons & cons ) with( cons ) {
    9031163        // 1st resume starts here
    9041164        int money = 1, receipt;
    9051165        for ( ; ! done; ) {
    906                 sout | p1 | " " | p2 | endl | " $" | money | endl;
     1166                sout | p1 | " " | p2 | nl | " $" | money;
    9071167                status += 1;
    9081168                receipt = payment( p, money );
    909                 sout | " #" | receipt | endl;
     1169                sout | " #" | receipt;
    9101170                money += 1;
    9111171        }
    912         sout | "cons stops" | endl;
     1172        sout | "cons stops";
    9131173}
    9141174int delivery( Cons & cons, int p1, int p2 ) {
     
    9211181        `resume( cons );`
    9221182}
     1183
    9231184\end{cfa}
    9241185\end{tabular}
    925 \caption{Producer / consumer: resume-resume cycle, bi-directional communication}
     1186\caption{Producer / consumer: resume-resume cycle, bidirectional communication}
    9261187\label{f:ProdCons}
    9271188\end{figure}
    9281189
    929 Figure~\ref{f:ProdCons} shows a producer/consumer symmetric-coroutine performing bi-directional communication.
    930 Since the solution involves a full-coroutining cycle, the program main creates one coroutine in isolation, passes this coroutine to its partner, and closes the cycle at the call to @start@.
    931 The @start@ routine communicates both the number of elements to be produced and the consumer into the producer's coroutine structure.
    932 Then the @resume@ to @prod@ creates @prod@'s stack with a frame for @prod@'s coroutine main at the top, and context switches to it.
    933 @prod@'s coroutine main starts, creates local variables that are retained between coroutine activations, and executes $N$ iterations, each generating two random values, calling the consumer to deliver the values, and printing the status returned from the consumer.
     1190Figure~\ref{f:ProdCons} shows the ping-pong example in Figure~\ref{f:CFAPingPongGen} extended into a producer/consumer symmetric-coroutine performing bidirectional communication.
      1191This example is illustrative because both the producer and consumer have two interface functions with @resume@s, so execution suspends inside these interface (helper) functions.
     1192The program main creates the producer coroutine, passes it to the consumer coroutine in its initialization, and closes the cycle at the call to @start@ along with the number of items to be produced.
     1193The first @resume@ of @prod@ creates @prod@'s stack with a frame for @prod@'s coroutine main at the top, and context switches to it.
     1194@prod@'s coroutine main starts, creates local-state variables that are retained between coroutine activations, and executes $N$ iterations, each generating two random values, calling the consumer to deliver the values, and printing the status returned from the consumer.
    9341195
    9351196The producer call to @delivery@ transfers values into the consumer's communication variables, resumes the consumer, and returns the consumer status.
    936 For the first resume, @cons@'s stack is initialized, creating local variables retained between subsequent activations of the coroutine.
    937 The consumer iterates until the @done@ flag is set, prints, increments status, and calls back to the producer via @payment@, and on return from @payment@, prints the receipt from the producer and increments @money@ (inflation).
    938 The call from the consumer to the @payment@ introduces the cycle between producer and consumer.
     1197On the first resume, @cons@'s stack is created and initialized, holding local-state variables retained between subsequent activations of the coroutine.
     1198The consumer iterates until the @done@ flag is set, prints the values delivered by the producer, increments status, and calls back to the producer via @payment@, and on return from @payment@, prints the receipt from the producer and increments @money@ (inflation).
     1199The call from the consumer to @payment@ introduces the cycle between producer and consumer.
    9391200When @payment@ is called, the consumer copies values into the producer's communication variable and a resume is executed.
    940 The context switch restarts the producer at the point where it was last context switched, so it continues in @delivery@ after the resume.
    941 
     1201The context switch restarts the producer at the point where it last context switched, so it continues in @delivery@ after the resume.
    9421202@delivery@ returns the status value in @prod@'s coroutine main, where the status is printed.
    9431203The loop then repeats calling @delivery@, where each call resumes the consumer coroutine.
     
    9451205The consumer increments and returns the receipt to the call in @cons@'s coroutine main.
    9461206The loop then repeats calling @payment@, where each call resumes the producer coroutine.
    947 
    948 After iterating $N$ times, the producer calls @stop@.
    949 The @done@ flag is set to stop the consumer's execution and a resume is executed.
    950 The context switch restarts @cons@ in @payment@ and it returns with the last receipt.
    951 The consumer terminates its loops because @done@ is true, its @main@ terminates, so @cons@ transitions from a coroutine back to an object, and @prod@ reactivates after the resume in @stop@.
    952 @stop@ returns and @prod@'s coroutine main terminates.
    953 The program main restarts after the resume in @start@.
    954 @start@ returns and the program main terminates.
    955 
    956 
    957 \subsection{Coroutine Implementation}
    958 
    959 A significant implementation challenge for coroutines (and threads, see section \ref{threads}) is adding extra fields and executing code after/before the coroutine constructor/destructor and coroutine main to create/initialize/de-initialize/destroy extra fields and the stack.
    960 There are several solutions to this problem and the chosen option forced the \CFA coroutine design.
    961 
    962 Object-oriented inheritance provides extra fields and code in a restricted context, but it requires programmers to explicitly perform the inheritance:
    963 \begin{cfa}
    964 struct mycoroutine $\textbf{\textsf{inherits}}$ baseCoroutine { ... }
    965 \end{cfa}
    966 and the programming language (and possibly its tool set, \eg debugger) may need to understand @baseCoroutine@ because of the stack.
    967 Furthermore, the execution of constructs/destructors is in the wrong order for certain operations, \eg for threads;
    968 \eg, if the thread is implicitly started, it must start \emph{after} all constructors, because the thread relies on a completely initialized object, but the inherited constructor runs \emph{before} the derived.
    969 
    970 An alternatively is composition:
    971 \begin{cfa}
    972 struct mycoroutine {
    973         ... // declarations
     1207Figure~\ref{f:ProdConsRuntimeStacks} shows the runtime stacks of the program main, and the coroutine mains for @prod@ and @cons@ during the cycling.
     1208
     1209\begin{figure}
     1210\begin{center}
     1211\input{FullProdConsStack.pstex_t}
     1212\end{center}
     1213\vspace*{-10pt}
     1214\caption{Producer / consumer runtime stacks}
     1215\label{f:ProdConsRuntimeStacks}
     1216
     1217\medskip
     1218
     1219\begin{center}
     1220\input{FullCoroutinePhases.pstex_t}
     1221\end{center}
     1222\vspace*{-10pt}
     1223\caption{Ping / Pong coroutine steps}
     1224\label{f:PingPongFullCoroutineSteps}
     1225\end{figure}
     1226
      1227Terminating a coroutine cycle is more complex than a generator cycle, because it requires context switching to the program main's \emph{stack} to shut down the program, whereas generators started by the program main run on its stack.
      1228Furthermore, each deallocated coroutine must guarantee all destructors are run for objects allocated in the coroutine type \emph{and} allocated on the coroutine's stack at the point of suspension, which can be arbitrarily deep.
      1229When a coroutine's main ends, its stack is already unwound, so any stack-allocated objects with destructors have been finalized.
     1230The na\"{i}ve semantics for coroutine-cycle termination is to context switch to the last resumer, like executing a @suspend@/@return@ in a generator.
     1231However, for coroutines, the last resumer is \emph{not} implicitly below the current stack frame, as for generators, because each coroutine's stack is independent.
     1232Unfortunately, it is impossible to determine statically if a coroutine is in a cycle and unrealistic to check dynamically (graph-cycle problem).
     1233Hence, a compromise solution is necessary that works for asymmetric (acyclic) and symmetric (cyclic) coroutines.
     1234
     1235Our solution is to context switch back to the first resumer (starter) once the coroutine ends.
     1236This semantics works well for the most common asymmetric and symmetric coroutine usage patterns.
     1237For asymmetric coroutines, it is common for the first resumer (starter) coroutine to be the only resumer.
     1238All previous generators converted to coroutines have this property.
     1239For symmetric coroutines, it is common for the cycle creator to persist for the lifetime of the cycle.
     1240Hence, the starter coroutine is remembered on the first resume and ending the coroutine resumes the starter.
      1241Figure~\ref{f:ProdConsRuntimeStacks} shows this semantics by the dashed lines from the end of the coroutine mains: @prod@ starts @cons@ so @cons@ resumes @prod@ at the end, and the program main starts @prod@ so @prod@ resumes the program main at the end.
     1242For other scenarios, it is always possible to devise a solution with additional programming effort, such as forcing the cycle forward (backward) to a safe point before starting termination.
     1243
     1244The producer/consumer example does not illustrate the full power of the starter semantics because @cons@ always ends first.
     1245Assume generator @PingPong@ is converted to a coroutine.
     1246Figure~\ref{f:PingPongFullCoroutineSteps} shows the creation, starter, and cyclic execution steps of the coroutine version.
     1247The program main creates (declares) coroutine instances @ping@ and @pong@.
     1248Next, program main resumes @ping@, making it @ping@'s starter, and @ping@'s main resumes @pong@'s main, making it @pong@'s starter.
     1249Execution forms a cycle when @pong@ resumes @ping@, and cycles $N$ times.
     1250By adjusting $N$ for either @ping@/@pong@, it is possible to have either one finish first, instead of @pong@ always ending first.
      1251If @pong@ ends first, it resumes its starter @ping@ in its coroutine main; then @ping@ ends and resumes its starter, the program main, in function @start@.
      1252If @ping@ ends first, it resumes its starter, the program main, in function @start@.
     1253Regardless of the cycle complexity, the starter stack always leads back to the program main, but the stack can be entered at an arbitrary point.
     1254Once back at the program main, coroutines @ping@ and @pong@ are deallocated.
     1255For generators, deallocation runs the destructors for all objects in the generator type.
      1256For coroutines, deallocation deals with objects in the coroutine type and, for an unterminated coroutine, must also run the destructors for any objects pending on the coroutine's stack.
      1257Hence, if a coroutine's destructor detects the coroutine is not ended, it implicitly raises a cancellation exception (uncatchable exception) at the coroutine and resumes it, so the cancellation exception propagates to the root of the coroutine's stack, destroying all local variables on the stack.
      1258Thus, the \CFA semantics for generators and coroutines ensure both can be safely deallocated at any time, regardless of their current state, like any other aggregate object.
     1259Explicitly raising normal exceptions at another coroutine can replace flag variables, like @stop@, \eg @prod@ raises a @stop@ exception at @cons@ after it finishes generating values and resumes @cons@, which catches the @stop@ exception to terminate its loop.
     1260
     1261Finally, there is an interesting effect for @suspend@ with symmetric coroutines.
     1262A coroutine must retain its last resumer to suspend back because the resumer is on a different stack.
     1263These reverse pointers allow @suspend@ to cycle \emph{backwards}, which may be useful in certain cases.
     1264However, there is an anomaly if a coroutine resumes itself, because it overwrites its last resumer with itself, losing the ability to resume the last external resumer.
     1265To prevent losing this information, a self-resume does not overwrite the last resumer.
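For example, in this hypothetical sketch (with @suspend@ taking the coroutine explicitly, matching the @is_coroutine@ trait below), the self-resume does not clobber the external resumer:
\begin{cfa}
coroutine C {};
void main( C & c ) {
	resume( c ); // self-resume: last resumer is not overwritten
	suspend( c ); // still context switches to the last external resumer
}
\end{cfa}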
     1266
     1267
     1268\subsection{Generator / Coroutine Implementation}
     1269
      1270A significant implementation challenge for generators/coroutines (and threads in Section~\ref{s:threads}) is adding extra fields to the custom types and inserting code into related functions, \eg code after/before the coroutine constructor/destructor and @main@ to create/initialize/de-initialize/destroy extra fields, such as the stack.
      1271There are several solutions to this problem, which follow from the object-oriented flavour of adopting custom types.
     1272
     1273For object-oriented languages, inheritance is used to provide extra fields and code via explicit inheritance:
     1274\begin{cfa}[morekeywords={class,inherits}]
     1275class myCoroutine inherits baseCoroutine { ... }
     1276\end{cfa}
     1277% The problem is that the programming language and its tool chain, \eg debugger, @valgrind@, need to understand @baseCoroutine@ because it infers special property, so type @baseCoroutine@ becomes a de facto keyword and all types inheriting from it are implicitly custom types.
      1278The problem is that some special properties are not handled by existing language semantics, \eg the execution order of constructors/destructors is wrong for implicitly starting threads: the thread must start \emph{after} all constructors, because it relies on a completely initialized object, but the inherited constructor runs \emph{before} the derived one.
      1279Alternatives, such as explicitly starting threads as in Java, are repetitive, and forgetting to call @start@ is a common source of errors.
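A sketch of the ordering problem (pseudo-code, assuming the base constructor implicitly starts the thread):
\begin{cfa}[morekeywords={class,inherits}]
class myThread inherits baseThread {
	int x;
	myThread() { x = 42; } // base constructor has already run and started the thread,
}; // so the thread may execute and read x before x is assigned
\end{cfa}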
     1280An alternative is composition:
     1281\begin{cfa}
     1282struct myCoroutine {
     1283        ... // declaration/communication variables
    9741284        baseCoroutine dummy; // composition, last declaration
    9751285}
    9761286\end{cfa}
    977 which also requires an explicit declaration that must be the last one to ensure correct initialization order.
     1287which also requires an explicit declaration that must be last to ensure correct initialization order.
    9781288However, there is nothing preventing wrong placement or multiple declarations.
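For example (a sketch), misplacing the composed field silently breaks the initialization order:
\begin{cfa}
struct myCoroutine {
	baseCoroutine dummy; // wrong: must be the last declaration
	... // declaration/communication variables
};
\end{cfa}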
    9791289
    980 For coroutines as for threads, many implementations are based on routine pointers or routine objects~\cite{Butenhof97, C++14, MS:VisualC++, BoostCoroutines15}.
    981 For example, Boost implements coroutines in terms of four functor object-types:
    982 \begin{cfa}
    983 asymmetric_coroutine<>::pull_type
    984 asymmetric_coroutine<>::push_type
    985 symmetric_coroutine<>::call_type
    986 symmetric_coroutine<>::yield_type
    987 \end{cfa}
    988 Similarly, the canonical threading paradigm is often based on routine pointers, \eg @pthread@~\cite{pthreads}, \Csharp~\cite{Csharp}, Go~\cite{Go}, and Scala~\cite{Scala}.
    989 However, the generic thread-handle (identifier) is limited (few operations), unless it is wrapped in a custom type.
    990 \begin{cfa}
    991 void mycor( coroutine_t cid, void * arg ) {
    992         int * value = (int *)arg;                               $\C{// type unsafe, pointer-size only}$
    993         // Coroutine body
    994 }
    995 int main() {
    996         int input = 0, output;
    997         coroutine_t cid = coroutine_create( &mycor, (void *)&input ); $\C{// type unsafe, pointer-size only}$
    998         coroutine_resume( cid, (void *)input, (void **)&output ); $\C{// type unsafe, pointer-size only}$
    999 }
    1000 \end{cfa}
     1001 Since the custom type is simple to write in \CFA and solves several issues, adding support for routine/lambda-based coroutines would add very little.
    1002 
    1003 Note, the type @coroutine_t@ must be an abstract handle to the coroutine, because the coroutine descriptor and its stack are non-copyable.
    1004 Copying the coroutine descriptor results in copies being out of date with the current state of the stack.
     1005 Correspondingly, copying the stack results in copies being out of date with the coroutine descriptor, and pointers in the stack being out of date to data on the stack.
    1006 (There is no mechanism in C to find all stack-specific pointers and update them as part of a copy.)
    1007 
    1008 The selected approach is to use language support by introducing a new kind of aggregate (structure):
    1009 \begin{cfa}
    1010 coroutine Fibonacci {
    1011         int fn; // communication variables
    1012 };
    1013 \end{cfa}
    1014 The @coroutine@ keyword means the compiler (and tool set) can find and inject code where needed.
     1015 The downside of this approach is that it makes coroutines a special case in the language.
     1016 Users wanting to extend coroutines or build their own for various reasons can only do so in ways offered by the language.
     1017 Furthermore, implementing coroutines without language support also displays the power of a programming language.
    1018 While this is ultimately the option used for idiomatic \CFA code, coroutines and threads can still be constructed without using the language support.
    1019 The reserved keyword eases use for the common cases.
    1020 
    1021 Part of the mechanism to generalize coroutines is using a \CFA trait, which defines a coroutine as anything satisfying the trait @is_coroutine@, and this trait is used to restrict coroutine-manipulation routines:
     1290\CFA custom types make any special properties explicit to the language and its tool chain, \eg the language code-generator knows where to inject code
     1291% and when it is unsafe to perform certain optimizations,
     1292and IDEs using simple parsing can find and manipulate types with special properties.
     1293The downside of this approach is that it makes custom types a special case in the language.
     1294Users wanting to extend custom types or build their own can only do so in ways offered by the language.
      1295Furthermore, implementing custom types without language support demonstrates the power of a programming language.
      1296\CFA blends the two approaches, providing custom types for idiomatic \CFA code, while extending and building new custom types is still possible, similar to Java concurrency with its builtin and library approaches.
     1297
     1298Part of the mechanism to generalize custom types is the \CFA trait~\cite[\S~2.3]{Moss18}, \eg the definition for custom-type @coroutine@ is anything satisfying the trait @is_coroutine@, and this trait both enforces and restricts the coroutine-interface functions.
    10221299\begin{cfa}
    10231300trait is_coroutine( `dtype` T ) {
     
    10251302        coroutine_desc * get_coroutine( T & );
    10261303};
    1027 forall( `dtype` T | is_coroutine(T) ) void suspend( T & );
    1028 forall( `dtype` T | is_coroutine(T) ) void resume( T & );
    1029 \end{cfa}
    1030 The @dtype@ property of the trait ensures the coroutine descriptor is non-copyable, so all coroutines must be passed by reference (pointer).
     1031 The routine definitions ensure there is a statically-typed @main@ routine that is the starting point (first stack frame) of a coroutine, and a mechanism to get (read) the currently executing coroutine handle.
    1032 The @main@ routine has no return value or additional parameters because the coroutine type allows an arbitrary number of interface routines with corresponding arbitrary typed input/output values versus fixed ones.
    1033 The generic routines @suspend@ and @resume@ can be redefined, but any object passed to them is a coroutine since it must satisfy the @is_coroutine@ trait to compile.
    1034 The advantage of this approach is that users can easily create different types of coroutines, for example, changing the memory layout of a coroutine is trivial when implementing the @get_coroutine@ routine, and possibly redefining @suspend@ and @resume@.
    1035 The \CFA keyword @coroutine@ implicitly implements the getter and forward declarations required for implementing the coroutine main:
     1304forall( `dtype` T | is_coroutine(T) ) void $suspend$( T & ), resume( T & );
     1305\end{cfa}
     1306Note, copying generators/coroutines/threads is not meaningful.
     1307For example, both the resumer and suspender descriptors can have bidirectional pointers;
      1308copying these coroutines does not update the internal pointers, so the behaviour of both copies would be difficult to understand.
     1309Furthermore, two coroutines cannot logically execute on the same stack.
      1310A deep coroutine copy, which copies the stack, is also meaningless in an unmanaged language (no garbage collection), like C, because the stack may contain pointers to objects within it that require updating for the copy.
     1311The \CFA @dtype@ property provides no \emph{implicit} copying operations and the @is_coroutine@ trait provides no \emph{explicit} copying operations, so all coroutines must be passed by reference (pointer).
     1312The function definitions ensure there is a statically typed @main@ function that is the starting point (first stack frame) of a coroutine, and a mechanism to get (read) the coroutine descriptor from its handle.
     1313The @main@ function has no return value or additional parameters because the coroutine type allows an arbitrary number of interface functions with corresponding arbitrary typed input/output values versus fixed ones.
     1314The advantage of this approach is that users can easily create different types of coroutines, \eg changing the memory layout of a coroutine is trivial when implementing the @get_coroutine@ function, and possibly redefining \textsf{suspend} and @resume@.
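For example, in this sketch (assuming a coroutine type @Fibonacci@), the missing copy operations surface as compile-time errors:
\begin{cfa}
coroutine Fibonacci { int fn; }; // communication variable
Fibonacci f1;
Fibonacci f2 = f1; // rejected: dtype provides no implicit copy
void g( Fibonacci f ); // rejected: pass-by-value requires a copy
void h( Fibonacci & f ); // accepted: pass by reference
\end{cfa}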
     1315
     1316The \CFA custom-type @coroutine@ implicitly implements the getter and forward declarations for the coroutine main.
    10361317\begin{cquote}
    10371318\begin{tabular}{@{}ccc@{}}
     
    10691350\end{tabular}
    10701351\end{cquote}
    1071 The combination of these two approaches allows an easy and concise specification to coroutining (and concurrency) for normal users, while more advanced users have tighter control on memory layout and initialization.
    1072 
    1073 
    1074 \subsection{Thread Interface}
    1075 \label{threads}
    1076 
    1077 Both user and kernel threads are supported, where user threads provide concurrency and kernel threads provide parallelism.
    1078 Like coroutines and for the same design reasons, the selected approach for user threads is to use language support by introducing a new kind of aggregate (structure) and a \CFA trait:
     1352The combination of custom types and fundamental @trait@ description of these types allows a concise specification for programmers and tools, while more advanced programmers can have tighter control over memory layout and initialization.
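For example, in this sketch (field placement and names are hypothetical), an advanced programmer satisfies @is_coroutine@ directly to control memory layout:
\begin{cfa}
struct MyCo {
	int fn; // communication variables
	coroutine_desc desc; // descriptor placed explicitly by the programmer
};
void main( MyCo & ); // coroutine main, defined elsewhere
coroutine_desc * get_coroutine( MyCo & c ) { return &c.desc; }
\end{cfa}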
     1353
     1354Figure~\ref{f:CoroutineMemoryLayout} shows different memory-layout options for a coroutine (where a task is similar).
      1355The coroutine handle is the @coroutine@ instance containing programmer-specified global/communication variables shared across the interface functions.
     1356The coroutine descriptor contains all implicit declarations needed by the runtime, \eg @suspend@/@resume@, and can be part of the coroutine handle or separate.
     1357The coroutine stack can appear in a number of locations and be fixed or variable sized.
     1358Hence, the coroutine's stack could be a VLS\footnote{
     1359We are examining variable-sized structures (VLS), where fields can be variable-sized structures or arrays.
     1360Once allocated, a VLS is fixed sized.}
     1361on the allocating stack, provided the allocating stack is large enough.
      1362For a VLS stack, allocation/deallocation is an inexpensive adjustment of the stack pointer, modulo any stack constructor costs (\eg initial frame setup).
     1363For heap stack allocation, allocation/deallocation is an expensive heap allocation (where the heap can be a shared resource), modulo any stack constructor costs.
     1364With heap stack allocation, it is also possible to use a split (segmented) stack calling convention, available with gcc and clang, so the stack is variable sized.
     1365Currently, \CFA supports stack/heap allocated descriptors but only fixed-sized heap allocated stacks.
      1366In \CFA debug-mode, the fixed-sized stack is terminated with a non-writable guard page, which catches most stack overflows.
     1367Experience teaching concurrency with \uC~\cite{CS343} shows fixed-sized stacks are rarely an issue for students.
     1368Split-stack allocation is under development but requires recompilation of legacy code, which may be impossible.
     1369
     1370\begin{figure}
     1371\centering
     1372\input{corlayout.pstex_t}
     1373\caption{Coroutine memory layout}
     1374\label{f:CoroutineMemoryLayout}
     1375\end{figure}
     1376
     1377
     1378\section{Concurrency}
     1379\label{s:Concurrency}
     1380
     1381Concurrency is nondeterministic scheduling of independent sequential execution paths (threads), where each thread has its own stack.
     1382A single thread with multiple call stacks, \newterm{coroutining}~\cite{Conway63,Marlin80}, does \emph{not} imply concurrency~\cite[\S~2]{Buhr05a}.
     1383In coroutining, coroutines self-schedule the thread across stacks so execution is deterministic.
     1384(It is \emph{impossible} to generate a concurrency error when coroutining.)
     1385However, coroutines are a stepping stone towards concurrency.
     1386
      1387The transition to concurrency, even for a single thread with multiple stacks, occurs when coroutines context switch to a \newterm{scheduling coroutine}, introducing non-determinism from the coroutine perspective~\cite[\S~3]{Buhr05a}.
     1388Therefore, a minimal concurrency system requires coroutines \emph{in conjunction with a nondeterministic scheduler}.
     1389The resulting execution system now follows a cooperative threading model~\cite{Adya02,libdill}, called \newterm{non-preemptive scheduling}.
      1390Adding \newterm{preemption} introduces non-cooperative scheduling, where context switching occurs randomly between any two instructions, often based on a timer interrupt, called \newterm{preemptive scheduling}.
      1391While a scheduler introduces uncertain execution among explicit context switches, preemption adds uncertainty through implicit context switches.
      1392Uncertainty gives the illusion of parallelism on a single processor and provides a mechanism to access and increase performance on multiple processors.
      1393The reason is that the scheduler/runtime has complete knowledge about resources and how best to utilize them.
     1394However, the introduction of unrestricted nondeterminism results in the need for \newterm{mutual exclusion} and \newterm{synchronization}, which restrict nondeterminism for correctness;
     1395otherwise, it is impossible to write meaningful concurrent programs.
     1396Optimal concurrent performance is often obtained by having as much nondeterminism as mutual exclusion and synchronization correctness allow.
     1397
      1398A scheduler can be either stackless or stackful.
     1399For stackless, the scheduler performs scheduling on the stack of the current coroutine and switches directly to the next coroutine, so there is one context switch.
     1400For stackful, the current coroutine switches to the scheduler, which performs scheduling, and it then switches to the next coroutine, so there are two context switches.
     1401The \CFA runtime uses a stackful scheduler for uniformity and security.
     1402
     1403
     1404\subsection{Thread}
     1405\label{s:threads}
     1406
     1407Threading needs the ability to start a thread and wait for its completion.
     1408A common API for this ability is @fork@ and @join@.
     1409\begin{cquote}
     1410\begin{tabular}{@{}lll@{}}
     1411\multicolumn{1}{c}{\textbf{Java}} & \multicolumn{1}{c}{\textbf{\Celeven}} & \multicolumn{1}{c}{\textbf{pthreads}} \\
     1412\begin{cfa}
     1413class MyTask extends Thread {...}
      1414MyTask t = new MyTask(...);
     1415`t.start();` // start
     1416// concurrency
     1417`t.join();` // wait
     1418\end{cfa}
     1419&
     1420\begin{cfa}
     1421class MyTask { ... } // functor
     1422MyTask mytask;
     1423`thread t( mytask, ... );` // start
     1424// concurrency
     1425`t.join();` // wait
     1426\end{cfa}
     1427&
     1428\begin{cfa}
     1429void * rtn( void * arg ) {...}
     1430pthread_t t;  int i = 3;
      1431`pthread_create( &t, NULL, rtn, (void *)i );` // start
     1432// concurrency
     1433`pthread_join( t, NULL );` // wait
     1434\end{cfa}
     1435\end{tabular}
     1436\end{cquote}
     1437\CFA has a simpler approach using a custom @thread@ type and leveraging declaration semantics (allocation/deallocation), where threads implicitly @fork@ after construction and @join@ before destruction.
     1438\begin{cfa}
     1439thread MyTask {};
     1440void main( MyTask & this ) { ... }
     1441int main() {
     1442        MyTask team`[10]`; $\C[2.5in]{// allocate stack-based threads, implicit start after construction}$
     1443        // concurrency
     1444} $\C{// deallocate stack-based threads, implicit joins before destruction}$
     1445\end{cfa}
      1446This semantics ensures a thread is started and stopped exactly once, eliminating some programming errors, and scales to multiple threads for basic (termination) synchronization.
     1447For block allocation to arbitrary depth, including recursion, threads are created/destroyed in a lattice structure (tree with top and bottom).
     1448Arbitrary topologies are possible using dynamic allocation, allowing threads to outlive their declaration scope, identical to normal dynamic allocation.
     1449\begin{cfa}
     1450MyTask * factory( int N ) { ... return `anew( N )`; } $\C{// allocate heap-based threads, implicit start after construction}$
     1451int main() {
     1452        MyTask * team = factory( 10 );
     1453        // concurrency
     1454        `delete( team );` $\C{// deallocate heap-based threads, implicit joins before destruction}\CRT$
     1455}
     1456\end{cfa}
     1457
     1458Figure~\ref{s:ConcurrentMatrixSummation} shows concurrently adding the rows of a matrix and then totalling the subtotals sequentially, after all the row threads have terminated.
     1459The program uses heap-based threads because each thread needs different constructor values.
     1460(Python provides a simple iteration mechanism to initialize array elements to different values allowing stack allocation.)
     1461The allocation/deallocation pattern appears unusual because allocated objects are immediately deallocated without any intervening code.
     1462However, for threads, the deletion provides implicit synchronization, which is the intervening code.
     1463% While the subtotals are added in linear order rather than completion order, which slightly inhibits concurrency, the computation is restricted by the critical-path thread (\ie the thread that takes the longest), and so any inhibited concurrency is very small as totalling the subtotals is trivial.
     1464
     1465\begin{figure}
     1466\begin{cfa}
      1467`thread` Adder { int * row, cols, & subtotal; }; $\C{// communication variables}$
     1468void ?{}( Adder & adder, int row[], int cols, int & subtotal ) {
     1469        adder.[ row, cols, &subtotal ] = [ row, cols, &subtotal ];
     1470}
     1471void main( Adder & adder ) with( adder ) {
     1472        subtotal = 0;
     1473        for ( c; cols ) { subtotal += row[c]; }
     1474}
     1475int main() {
     1476        const int rows = 10, cols = 1000;
     1477        int matrix[rows][cols], subtotals[rows], total = 0;
     1478        // read matrix
     1479        Adder * adders[rows];
      1480        for ( r; rows ) { $\C{// start threads to sum rows}$
     1481                adders[r] = `new( matrix[r], cols, &subtotals[r] );`
     1482        }
     1483        for ( r; rows ) { $\C{// wait for threads to finish}$
     1484                `delete( adders[r] );` $\C{// termination join}$
     1485                total += subtotals[r]; $\C{// total subtotal}$
     1486        }
     1487        sout | total;
     1488}
     1489\end{cfa}
     1490\caption{Concurrent matrix summation}
     1491\label{s:ConcurrentMatrixSummation}
     1492\end{figure}
     1493
     1494
     1495\subsection{Thread Implementation}
     1496
      1497Threads in \CFA are user level, run by runtime kernel threads (see Section~\ref{s:CFARuntimeStructure}), where user threads provide concurrency and kernel threads provide parallelism.
     1498Like coroutines, and for the same design reasons, \CFA provides a custom @thread@ type and a @trait@ to enforce and restrict the task-interface functions.
    10791499\begin{cquote}
    10801500\begin{tabular}{@{}c@{\hspace{3\parindentlnth}}c@{}}
    10811501\begin{cfa}
    10821502thread myThread {
    1083         // communication variables
     1503        ... // declaration/communication variables
    10841504};
    10851505
     
    10891509\begin{cfa}
    10901510trait is_thread( `dtype` T ) {
    1091       void main( T & );
    1092       thread_desc * get_thread( T & );
    1093       void ^?{}( T & `mutex` );
     1511        void main( T & );
     1512        thread_desc * get_thread( T & );
     1513        void ^?{}( T & `mutex` );
    10941514};
    10951515\end{cfa}
    10961516\end{tabular}
    10971517\end{cquote}
    1098 (The qualifier @mutex@ for the destructor parameter is discussed in Section~\ref{s:Monitors}.)
    1099 Like a coroutine, the statically-typed @main@ routine is the starting point (first stack frame) of a user thread.
    1100 The difference is that a coroutine borrows a thread from its caller, so the first thread resuming a coroutine creates an instance of @main@;
     1101 whereas, a user thread receives its own thread from the runtime system, which starts in @main@ at some point after the thread constructor is run.\footnote{
    1102 The \lstinline@main@ routine is already a special routine in C (where the program begins), so it is a natural extension of the semantics to use overloading to declare mains for different coroutines/threads (the normal main being the main of the initial thread).}
    1103 No return value or additional parameters are necessary for this routine because the task type allows an arbitrary number of interface routines with corresponding arbitrary typed input/output values.
    1104 
    1105 \begin{comment} % put in appendix with coroutine version ???
    1106 As such the @main@ routine of a thread can be defined as
    1107 \begin{cfa}
    1108 thread foo {};
    1109 
    1110 void main(foo & this) {
    1111         sout | "Hello World!" | endl;
    1112 }
    1113 \end{cfa}
    1114 
    1115 In this example, threads of type @foo@ start execution in the @void main(foo &)@ routine, which prints @"Hello World!".@ While this paper encourages this approach to enforce strongly typed programming, users may prefer to use the routine-based thread semantics for the sake of simplicity.
    1116 With the static semantics it is trivial to write a thread type that takes a routine pointer as a parameter and executes it on its stack asynchronously.
    1117 \begin{cfa}
    1118 typedef void (*voidRtn)(int);
    1119 
    1120 thread RtnRunner {
    1121         voidRtn func;
    1122         int arg;
    1123 };
    1124 
    1125 void ?{}(RtnRunner & this, voidRtn inRtn, int arg) {
    1126         this.func = inRtn;
    1127         this.arg  = arg;
    1128 }
    1129 
    1130 void main(RtnRunner & this) {
    1131         // thread starts here and runs the routine
    1132         this.func( this.arg );
    1133 }
    1134 
    1135 void hello(/*unused*/ int) {
    1136         sout | "Hello World!" | endl;
    1137 }
    1138 
    1139 int main() {
    1140         RtnRunner f = {hello, 42};
     1141         return 0;
    1142 }
    1143 \end{cfa}
    1144 A consequence of the strongly typed approach to main is that memory layout of parameters and return values to/from a thread are now explicitly specified in the \textbf{api}.
    1145 \end{comment}
    1146 
    1147 For user threads to be useful, it must be possible to start and stop the underlying thread, and wait for it to complete execution.
    1148 While using an API such as @fork@ and @join@ is relatively common, such an interface is awkward and unnecessary.
    1149 A simple approach is to use allocation/deallocation principles, and have threads implicitly @fork@ after construction and @join@ before destruction.
    1150 \begin{cfa}
    1151 thread World {};
    1152 void main( World & this ) {
    1153         sout | "World!" | endl;
    1154 }
    1155 int main() {
    1156         World w`[10]`;                                                  $\C{// implicit forks after creation}$
    1157         sout | "Hello " | endl;                                 $\C{// "Hello " and 10 "World!" printed concurrently}$
    1158 }                                                                                       $\C{// implicit joins before destruction}$
    1159 \end{cfa}
    1160 This semantics ensures a thread is started and stopped exactly once, eliminating some programming error, and scales to multiple threads for basic (termination) synchronization.
     1161 This tree-structured (lattice) create/delete arising from C block-structure is generalized using dynamic allocation, so threads can outlive the scope in which they are created, much like dynamically allocating memory lets objects outlive the scope in which they are created.
    1162 \begin{cfa}
    1163 int main() {
    1164         MyThread * heapLived;
    1165         {
    1166                 MyThread blockLived;                            $\C{// fork block-based thread}$
    1167                 heapLived = `new`( MyThread );          $\C{// fork heap-based thread}$
    1168                 ...
    1169         }                                                                               $\C{// join block-based thread}$
    1170         ...
    1171         `delete`( heapLived );                                  $\C{// join heap-based thread}$
    1172 }
    1173 \end{cfa}
    1174 The heap-based approach allows arbitrary thread-creation topologies, with respect to fork/join-style concurrency.
    1175 
     1176 Figure~\ref{s:ConcurrentMatrixSummation} shows concurrently adding the rows of a matrix and then totalling the subtotals sequentially, after all the row threads have terminated.
    1177 The program uses heap-based threads because each thread needs different constructor values.
    1178 (Python provides a simple iteration mechanism to initialize array elements to different values allowing stack allocation.)
    1179 The allocation/deallocation pattern appears unusual because allocated objects are immediately deleted without any intervening code.
    1180 However, for threads, the deletion provides implicit synchronization, which is the intervening code.
     1181 While the subtotals are added in linear order rather than completion order, which slightly inhibits concurrency, the computation is restricted by the critical-path thread (\ie the thread that takes the longest), and so any inhibited concurrency is very small as totalling the subtotals is trivial.
    1182 
    1183 \begin{figure}
    1184 \begin{cfa}
    1185 thread Adder {
    1186     int * row, cols, & subtotal;                        $\C{// communication}$
    1187 };
    1188 void ?{}( Adder & adder, int row[], int cols, int & subtotal ) {
    1189     adder.[ row, cols, &subtotal ] = [ row, cols, &subtotal ];
    1190 }
    1191 void main( Adder & adder ) with( adder ) {
    1192     subtotal = 0;
    1193     for ( int c = 0; c < cols; c += 1 ) {
    1194                 subtotal += row[c];
    1195     }
    1196 }
    1197 int main() {
    1198     const int rows = 10, cols = 1000;
    1199     int matrix[rows][cols], subtotals[rows], total = 0;
    1200     // read matrix
    1201     Adder * adders[rows];
    1202     for ( int r = 0; r < rows; r += 1 ) {       $\C{// start threads to sum rows}$
    1203                 adders[r] = new( matrix[r], cols, &subtotals[r] );
    1204     }
    1205     for ( int r = 0; r < rows; r += 1 ) {       $\C{// wait for threads to finish}$
    1206                 delete( adders[r] );                            $\C{// termination join}$
    1207                 total += subtotals[r];                          $\C{// total subtotal}$
    1208     }
    1209     sout | total | endl;
    1210 }
    1211 \end{cfa}
    1212 \caption{Concurrent Matrix Summation}
    1213 \label{s:ConcurrentMatrixSummation}
    1214 \end{figure}
     1518Like coroutines, the @dtype@ property prevents \emph{implicit} copy operations and the @is_thread@ trait provides no \emph{explicit} copy operations, so threads must be passed by reference (pointer).
     1519Similarly, the function definitions ensure there is a statically typed @main@ function that is the thread starting point (first stack frame), a mechanism to get (read) the thread descriptor from its handle, and a special destructor to prevent deallocation while the thread is executing.
     1520(The qualifier @mutex@ for the destructor parameter is discussed in Section~\ref{s:Monitor}.)
     1521The difference between the coroutine and thread is that a coroutine borrows a thread from its caller, so the first thread resuming a coroutine creates the coroutine's stack and starts running the coroutine main on the stack;
      1522whereas, a thread is scheduled for execution in @main@ immediately after its constructor is run.
     1523No return value or additional parameters are necessary for this function because the @thread@ type allows an arbitrary number of interface functions with corresponding arbitrary typed input/output values.
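For example, in this sketch, @main@ is overloaded per thread type and interface functions carry the typed input/output values:
\begin{cfa}
thread Greeter { const char * msg; }; // communication variable
void ?{}( Greeter & g, const char * msg ) { g.msg = msg; }
void main( Greeter & g ) { sout | g.msg; } // thread starting point
int main() {
	Greeter g = { "hello" }; // implicit fork after construction
} // implicit join before destruction
\end{cfa}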
    12151524
    12161525
    12171526\section{Mutual Exclusion / Synchronization}
    1218 
    1219 Uncontrolled non-deterministic execution is meaningless.
    1220 To reestablish meaningful execution requires mechanisms to reintroduce determinism (\ie restrict non-determinism), called mutual exclusion and synchronization, where mutual exclusion is an access-control mechanism on data shared by threads, and synchronization is a timing relationship among threads~\cite[\S~4]{Buhr05a}.
    1221 Since many deterministic challenges appear with the use of mutable shared state, some languages/libraries disallow it, \eg Erlang~\cite{Erlang}, Haskell~\cite{Haskell}, Akka~\cite{Akka} (Scala).
    1222 In these paradigms, interaction among concurrent objects is performed by stateless message-passing~\cite{Thoth,Harmony,V-Kernel} or other paradigms closely relate to networking concepts (\eg channels~\cite{CSP,Go}).
    1223 However, in call/return-based languages, these approaches force a clear distinction (\ie introduce a new programming paradigm) between regular and concurrent computation (\ie routine call versus message passing).
    1224 Hence, a programmer must learn and manipulate two sets of design patterns.
     1527\label{s:MutualExclusionSynchronization}
     1528
      1529Unrestricted nondeterminism is meaningless because, without synchronization, there is no way to know when a result is complete.
     1530To produce meaningful execution requires clawing back some determinism using mutual exclusion and synchronization, where mutual exclusion provides access control for threads using shared data, and synchronization is a timing relationship among threads~\cite[\S~4]{Buhr05a}.
      1531Some concurrent systems eliminate mutable shared state by switching to stateless communication like message passing~\cite{Thoth,Harmony,V-Kernel,MPI} (Erlang, MPI), channels~\cite{CSP} (CSP, Go), actors~\cite{Akka} (Akka, Scala), or functional techniques (Haskell).
     1532However, these approaches introduce a new communication mechanism for concurrency different from the standard communication using function call/return.
     1533Hence, a programmer must learn and manipulate two sets of design/programming patterns.
    12251534While this distinction can be hidden away in library code, effective use of the library still has to take both paradigms into account.
    1226 In contrast, approaches based on statefull models more closely resemble the standard call/return programming-model, resulting in a single programming paradigm.
    1227 
    1228 At the lowest level, concurrent control is implemented by atomic operations, upon which different kinds of locks mechanism are constructed, \eg semaphores~\cite{Dijkstra68b}, barriers, and path expressions~\cite{Campbell74}.
     1535In contrast, approaches based on stateful models more closely resemble the standard call/return programming model, resulting in a single programming paradigm.
     1536
     1537At the lowest level, concurrent control is implemented by atomic operations, upon which different kinds of locking mechanisms are constructed, \eg semaphores~\cite{Dijkstra68b}, barriers, and path expressions~\cite{Campbell74}.
    12291538However, for productivity it is always desirable to use the highest-level construct that provides the necessary efficiency~\cite{Hochstein05}.
    12301539A newer approach for restricting non-determinism is transactional memory~\cite{Herlihy93}.
    1231 While this approach is pursued in hardware~\cite{Nakaike15} and system languages, like \CC~\cite{Cpp-Transactions}, the performance and feature set is still too restrictive to be the main concurrency paradigm for system languages, which is why it was rejected as the core paradigm for concurrency in \CFA.
     1540While this approach is pursued in hardware~\cite{Nakaike15} and system languages, like \CC~\cite{Cpp-Transactions}, the performance and feature set is still too restrictive to be the main concurrency paradigm for system languages, which is why it is rejected as the core paradigm for concurrency in \CFA.
    12321541
    12331542One of the most natural, elegant, and efficient mechanisms for mutual exclusion and synchronization for shared-memory systems is the \emph{monitor}.
    1234 First proposed by Brinch Hansen~\cite{Hansen73} and later described and extended by C.A.R.~Hoare~\cite{Hoare74}, many concurrent programming-languages provide monitors as an explicit language construct: \eg Concurrent Pascal~\cite{ConcurrentPascal}, Mesa~\cite{Mesa}, Modula~\cite{Modula-2}, Turing~\cite{Turing:old}, Modula-3~\cite{Modula-3}, NeWS~\cite{NeWS}, Emerald~\cite{Emerald}, \uC~\cite{Buhr92a} and Java~\cite{Java}.
     1543First proposed by Brinch Hansen~\cite{Hansen73} and later described and extended by C.A.R.~Hoare~\cite{Hoare74}, many concurrent programming languages provide monitors as an explicit language construct: \eg Concurrent Pascal~\cite{ConcurrentPascal}, Mesa~\cite{Mesa}, Modula~\cite{Modula-2}, Turing~\cite{Turing:old}, Modula-3~\cite{Modula-3}, NeWS~\cite{NeWS}, Emerald~\cite{Emerald}, \uC~\cite{Buhr92a} and Java~\cite{Java}.
    12351544In addition, operating-system kernels and device drivers have a monitor-like structure, although they often use lower-level primitives such as mutex locks or semaphores to simulate monitors.
    1236 For these reasons, \CFA selected monitors as the core high-level concurrency-construct, upon which higher-level approaches can be easily constructed.
     1545For these reasons, \CFA selected monitors as the core high-level concurrency construct, upon which higher-level approaches can be easily constructed.
    12371546
    12381547
    12391548\subsection{Mutual Exclusion}
    12401549
    1241 A group of instructions manipulating a specific instance of shared data that must be performed atomically is called an (individual) \newterm{critical-section}~\cite{Dijkstra65}.
    1242 The generalization is called a \newterm{group critical-section}~\cite{Joung00}, where multiple tasks with the same session may use the resource simultaneously, but different sessions may not use the resource simultaneously.
    1243 The readers/writer problem~\cite{Courtois71} is an instance of a group critical-section, where readers have the same session and all writers have a unique session.
    1244 \newterm{Mutual exclusion} enforces that the correct kind and number of threads are using a critical section.
     1550A group of instructions manipulating a specific instance of shared data that must be performed atomically is called a \newterm{critical section}~\cite{Dijkstra65}, which is enforced by \newterm{simple mutual-exclusion}.
      1551The generalization is called a \newterm{group critical-section}~\cite{Joung00}, where multiple tasks with the same session use the resource simultaneously and different sessions are segregated, which is enforced by \newterm{complex mutual-exclusion} ensuring the correct kind and number of threads are using the group critical-section.
     1552The readers/writer problem~\cite{Courtois71} is an instance of a group critical-section, where readers share a session but writers have a unique session.
    12451553
    12461554However, many solutions exist for mutual exclusion, which vary in terms of performance, flexibility and ease of use.
    12471555Methods range from low-level locks, which are fast and flexible but require significant attention for correctness, to higher-level concurrency techniques, which sacrifice some performance to improve ease of use.
    1248 Ease of use comes by either guaranteeing some problems cannot occur (\eg deadlock free), or by offering a more explicit coupling between shared data and critical section.
    1249 For example, the \CC @std::atomic<T>@ offers an easy way to express mutual-exclusion on a restricted set of operations (\eg reading/writing) for numerical types.
     1556Ease of use comes by either guaranteeing some problems cannot occur, \eg deadlock free, or by offering a more explicit coupling between shared data and critical section.
     1557For example, the \CC @std::atomic<T>@ offers an easy way to express mutual-exclusion on a restricted set of operations, \eg reading/writing, for numerical types.
    12501558However, a significant challenge with locks is composability because it takes careful organization for multiple locks to be used while preventing deadlock.
    12511559Easing composability is another feature higher-level mutual-exclusion mechanisms can offer.
     
    12561564Synchronization enforces relative ordering of execution, and synchronization tools provide numerous mechanisms to establish these timing relationships.
    12571565Low-level synchronization primitives offer good performance and flexibility at the cost of ease of use;
    1258 higher-level mechanisms often simplify usage by adding better coupling between synchronization and data (\eg message passing), or offering a simpler solution to otherwise involved challenges, \eg barrier lock.
    1259 Often synchronization is used to order access to a critical section, \eg ensuring a reader thread is the next kind of thread to enter a critical section.
    1260 If a writer thread is scheduled for next access, but another reader thread acquires the critical section first, that reader has \newterm{barged}.
      1566higher-level mechanisms often simplify usage by adding better coupling between synchronization and data, \eg receive-specific versus receive-any thread in message passing, or by offering specialized solutions, \eg a barrier lock.
     1567Often synchronization is used to order access to a critical section, \eg ensuring a waiting writer thread enters the critical section before a calling reader thread.
     1568If the calling reader is scheduled before the waiting writer, the reader has barged.
    12611569Barging can result in staleness/freshness problems, where a reader barges ahead of a writer and reads temporally stale data, or a writer barges ahead of another writer overwriting data with a fresh value preventing the previous value from ever being read (lost computation).
    1262 Preventing or detecting barging is an involved challenge with low-level locks, which can be made much easier by higher-level constructs.
    1263 This challenge is often split into two different approaches: barging avoidance and barging prevention.
    1264 Algorithms that allow a barger, but divert it until later using current synchronization state (flags), are avoiding the barger;
    1265 algorithms that preclude a barger from entering during synchronization in the critical section prevent barging completely.
    1266 Techniques like baton-pass locks~\cite{Andrews89} between threads instead of unconditionally releasing locks is an example of barging prevention.
    1267 
    1268 
    1269 \section{Monitors}
    1270 \label{s:Monitors}
    1271 
    1272 A \textbf{monitor} is a set of routines that ensure mutual exclusion when accessing shared state.
    1273 More precisely, a monitor is a programming technique that binds mutual exclusion to routine scope, as opposed to locks, where mutual-exclusion is defined by acquire/release calls, independent of lexical context (analogous to block and heap storage allocation).
    1274 The strong association with the call/return paradigm eases programmability, readability and maintainability, at a slight cost in flexibility and efficiency.
    1275 
    1276 Note, like coroutines/threads, both locks and monitors require an abstract handle to reference them, because at their core, both mechanisms are manipulating non-copyable shared state.
    1277 Copying a lock is insecure because it is possible to copy an open lock and then use the open copy when the original lock is closed to simultaneously access the shared data.
    1278 Copying a monitor is secure because both the lock and shared data are copies, but copying the shared data is meaningless because it no longer represents a unique entity.
    1279 As for coroutines/tasks, a non-copyable (@dtype@) trait is used to capture this requirement, so all locks/monitors must be passed by reference (pointer).
     1570Preventing or detecting barging is an involved challenge with low-level locks, which is made easier through higher-level constructs.
     1571This challenge is often split into two different approaches: barging avoidance and prevention.
      1572Algorithms that unconditionally release a lock for competing threads to acquire use barging avoidance during synchronization to force a barging thread to wait;
     1573algorithms that conditionally hold locks during synchronization, \eg baton-passing~\cite{Andrews89}, prevent barging completely.
     1574
     1575
     1576\section{Monitor}
     1577\label{s:Monitor}
     1578
     1579A \textbf{monitor} is a set of functions that ensure mutual exclusion when accessing shared state.
     1580More precisely, a monitor is a programming technique that implicitly binds mutual exclusion to static function scope, as opposed to locks, where mutual-exclusion is defined by acquire/release calls, independent of lexical context (analogous to block and heap storage allocation).
     1581Restricting acquire/release points eases programming, comprehension, and maintenance, at a slight cost in flexibility and efficiency.
     1582\CFA uses a custom @monitor@ type and leverages declaration semantics (deallocation) to protect active or waiting threads in a monitor.
     1583
     1584The following is a \CFA monitor implementation of an atomic counter.
     1585\begin{cfa}[morekeywords=nomutex]
     1586`monitor` Aint { int cnt; }; $\C[4.25in]{// atomic integer counter}$
     1587int ++?( Aint & `mutex`$\(_{opt}\)$ this ) with( this ) { return ++cnt; } $\C{// increment}$
     1588int ?=?( Aint & `mutex`$\(_{opt}\)$ lhs, int rhs ) with( lhs ) { cnt = rhs; } $\C{// conversions with int}\CRT$
     1589int ?=?( int & lhs, Aint & `mutex`$\(_{opt}\)$ rhs ) with( rhs ) { lhs = cnt; }
     1590\end{cfa}
     1591% The @Aint@ constructor, @?{}@, uses the \lstinline[morekeywords=nomutex]@nomutex@ qualifier indicating mutual exclusion is unnecessary during construction because an object is inaccessible (private) until after it is initialized.
     1592% (While a constructor may publish its address into a global variable, doing so generates a race-condition.)
     1593The prefix increment operation, @++?@, is normally @mutex@, indicating mutual exclusion is necessary during function execution, to protect the incrementing from race conditions, unless there is an atomic increment instruction for the implementation type.
     1594The assignment operators provide bidirectional conversion between an atomic and normal integer without accessing field @cnt@;
      1595these operations need @mutex@ only if reading/writing the implementation type is not atomic.
     1596The atomic counter is used without any explicit mutual-exclusion and provides thread-safe semantics, which is similar to the \CC template @std::atomic@.
     1597\begin{cfa}
     1598int i = 0, j = 0, k = 5;
     1599Aint x = { 0 }, y = { 0 }, z = { 5 }; $\C{// no mutex required}$
     1600++x; ++y; ++z; $\C{// safe increment by multiple threads}$
     1601x = 2; y = i; z = k; $\C{// conversions}$
     1602i = x; j = y; k = z;
     1603\end{cfa}
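As a usage sketch (thread type @Worker@ is hypothetical), multiple threads safely share one counter:
\begin{cfa}
thread Worker { Aint * cnt; };
void ?{}( Worker & w, Aint * cnt ) { w.cnt = cnt; }
void main( Worker & w ) { for ( i; 1000 ) ++(*w.cnt); } // safe concurrent increments
int main() {
	Aint cnt = { 0 };
	Worker * w[4];
	for ( i; 4 ) { w[i] = new( &cnt ); } // implicit forks
	for ( i; 4 ) { delete( w[i] ); } // implicit joins
	int total;  total = cnt; // conversion from Aint
	sout | total; // 4000
}
\end{cfa}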
     1604
     1605\CFA monitors have \newterm{multi-acquire} semantics so the thread in the monitor may acquire it multiple times without deadlock, allowing recursion and calling other interface functions.
     1606\begin{cfa}
     1607monitor M { ... } m;
     1608void foo( M & mutex m ) { ... } $\C{// acquire mutual exclusion}$
     1609void bar( M & mutex m ) { $\C{// acquire mutual exclusion}$
     1610        ... `bar( m );` ... `foo( m );` ... $\C{// reacquire mutual exclusion}$
     1611}
     1612\end{cfa}
      1613\CFA monitors also ensure the monitor lock is released regardless of how an acquiring function ends (normally or exceptionally), and returning a shared variable is safe via copying before the lock is released.
      1614Similar safety is offered by \emph{explicit} mechanisms like \CC RAII;
      1615monitor \emph{implicit} safety eliminates this class of programmer usage errors.
     1616Furthermore, RAII mechanisms cannot handle complex synchronization within a monitor, where the monitor lock may not be released on function exit because it is passed to an unblocking thread;
     1617RAII is purely a mutual-exclusion mechanism (see Section~\ref{s:Scheduling}).
     1618
     1619
     1620\subsection{Monitor Implementation}
     1621
     1622For the same design reasons, \CFA provides a custom @monitor@ type and a @trait@ to enforce and restrict the monitor-interface functions.
     1623\begin{cquote}
     1624\begin{tabular}{@{}c@{\hspace{3\parindentlnth}}c@{}}
     1625\begin{cfa}
     1626monitor M {
     1627        ... // shared data
     1628};
     1629
     1630\end{cfa}
     1631&
    12801632\begin{cfa}
    12811633trait is_monitor( `dtype` T ) {
     
    12841636};
    12851637\end{cfa}
     1638\end{tabular}
     1639\end{cquote}
     1640The @dtype@ property prevents \emph{implicit} copy operations and the @is_monitor@ trait provides no \emph{explicit} copy operations, so monitors must be passed by reference (pointer).
     1641% Copying a lock is insecure because it is possible to copy an open lock and then use the open copy when the original lock is closed to simultaneously access the shared data.
     1642% Copying a monitor is secure because both the lock and shared data are copies, but copying the shared data is meaningless because it no longer represents a unique entity.
      1643Similarly, the function definitions ensure there is a mechanism to get (read) the monitor descriptor from its handle, and a special destructor to prevent deallocation while a thread is using the shared data.
     1644The custom monitor type also inserts any locks needed to implement the mutual exclusion semantics.
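For example, in this sketch (@monitor_desc@ and the interface names are assumptions extrapolated from the coroutine and thread traits), a type satisfies @is_monitor@ directly:
\begin{cfa}
struct MyMon {
	int data; // shared data
	monitor_desc m; // descriptor placed by the programmer
};
monitor_desc * get_monitor( MyMon & m ) { return &m.m; }
void ^?{}( MyMon & mutex m ) {} // destructor waits until the monitor is unused
\end{cfa}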
    12861645
    12871646
     
    12891648\label{s:MutexAcquisition}
    12901649
    1291 While correctness implicitly implies a monitor's mutual exclusion is acquired and released, there are implementation options about when and where the locking/unlocking occurs.
     1650While the monitor lock provides mutual exclusion for shared data, there are implementation options for when and where the locking/unlocking occurs.
    12921651(Much of this discussion also applies to basic locks.)
    1293 For example, a monitor may need to be passed through multiple helper routines before it becomes necessary to acquire the monitor mutual-exclusion.
    1294 \begin{cfa}[morekeywords=nomutex]
    1295 monitor Aint { int cnt; };                                      $\C{// atomic integer counter}$
    1296 void ?{}( Aint & `nomutex` this ) with( this ) { cnt = 0; } $\C{// constructor}$
    1297 int ?=?( Aint & `mutex`$\(_{opt}\)$ lhs, int rhs ) with( lhs ) { cnt = rhs; } $\C{// conversions}$
    1298 void ?{}( int & this, Aint & `mutex`$\(_{opt}\)$ v ) { this = v.cnt; }
    1299 int ?=?( int & lhs, Aint & `mutex`$\(_{opt}\)$ rhs ) with( rhs ) { lhs = cnt; }
    1300 int ++?( Aint & `mutex`$\(_{opt}\)$ this ) with( this ) { return ++cnt; } $\C{// increment}$
    1301 \end{cfa}
    1302 The @Aint@ constructor, @?{}@, uses the \lstinline[morekeywords=nomutex]@nomutex@ qualifier indicating mutual exclusion is unnecessary during construction because an object is inaccessible (private) until after it is initialized.
    1303 (While a constructor may publish its address into a global variable, doing so generates a race-condition.)
    1304 The conversion operators for initializing and assigning with a normal integer only need @mutex@, if reading/writing the implementation type is not atomic.
     1305 Finally, the prefix increment operator, @++?@, is normally @mutex@ to protect the incrementing from race conditions, unless there is an atomic increment instruction for the implementation type.
    1306 
    1307 The atomic counter is used without any explicit mutual-exclusion and provides thread-safe semantics, which is similar to the \CC template @std::atomic@.
    1308 \begin{cfa}
    1309 Aint x, y, z;
    1310 ++x; ++y; ++z;                                                          $\C{// safe increment by multiple threads}$
    1311 x = 2; y = 2; z = 2;                                            $\C{// conversions}$
    1312 int i = x, j = y, k = z;
    1313 i = x; j = y; k = z;
    1314 \end{cfa}
    1315 
    1316 For maximum usability, monitors have \newterm{multi-acquire} semantics allowing a thread to acquire it multiple times without deadlock.
    1317 For example, atomically printing the contents of a binary tree:
    1318 \begin{cfa}
    1319 monitor Tree {
    1320         Tree * left, right;
    1321         // value
    1322 };
    1323 void print( Tree & mutex tree ) {                       $\C{// prefix traversal}$
    1324         // write value
    1325         print( tree->left );                                    $\C{// multiply acquire monitor lock on each recursion}$
    1326         print( tree->right );
    1327 }
    1328 \end{cfa}
    1329 
     1330 Mandatory monitor qualifiers have the benefit of being self-documenting, but requiring both @mutex@ and \lstinline[morekeywords=nomutex]@nomutex@ for all monitor parameters is redundant.
     1331 Instead, one of the qualifier semantics can be the default, and the other required.
    1332 For example, assume the safe @mutex@ option for a monitor parameter because assuming \lstinline[morekeywords=nomutex]@nomutex@ may cause subtle errors.
    1333 On the other hand, assuming \lstinline[morekeywords=nomutex]@nomutex@ is the \emph{normal} parameter behaviour, stating explicitly ``this parameter is not special''.
     1652For example, a monitor may be passed through multiple helper functions before it is necessary to acquire the monitor's mutual exclusion.
     1653
     1654The benefit of mandatory monitor qualifiers is self-documentation, but requiring both @mutex@ and \lstinline[morekeywords=nomutex]@nomutex@ for all monitor parameters is redundant.
     1655Instead, the semantics has one qualifier as the default and the other required.
     1656For example, make the safe @mutex@ qualifier the default because assuming \lstinline[morekeywords=nomutex]@nomutex@ may cause subtle errors.
     1657Alternatively, make the unsafe \lstinline[morekeywords=nomutex]@nomutex@ qualifier the default because it is the \emph{normal} parameter semantics while @mutex@ parameters are rare.
    13341658Providing a default qualifier implies knowing whether a parameter is a monitor.
    1335 Since \CFA relies heavily on traits as an abstraction mechanism, the distinction between a type that is a monitor and a type that looks like a monitor can become blurred.
     1659Since \CFA relies heavily on traits as an abstraction mechanism, types can coincidentally match the monitor trait but not be a monitor, similar to inheritance where a shape and playing card can both be drawable.
    13361660For this reason, \CFA requires programmers to identify the kind of parameter with the @mutex@ keyword and uses no keyword to mean \lstinline[morekeywords=nomutex]@nomutex@.
    13371661
    13381662The next semantic decision is establishing which parameter \emph{types} may be qualified with @mutex@.
    1339 Given:
     1663The following has monitor parameter types that are composed of multiple objects.
    13401664\begin{cfa}
    13411665monitor M { ... }
    1342 int f1( M & mutex m );
    1343 int f2( M * mutex m );
    1344 int f3( M * mutex m[] );
    1345 int f4( stack( M * ) & mutex m );
    1346 \end{cfa}
    1347 the issue is that some of these parameter types are composed of multiple objects.
    1348 For @f1@, there is only a single parameter object.
    1349 Adding indirection in @f2@ still identifies a single object.
    1350 However, the matrix in @f3@ introduces multiple objects.
    1351 While shown shortly, multiple acquisition is possible;
    1352 however array lengths are often unknown in C.
    1353 This issue is exacerbated in @f4@, where the data structure must be safely traversed to acquire all of its elements.
    1354 
    1355 To make the issue tractable, \CFA only acquires one monitor per parameter with at most one level of indirection.
     1356 However, the C type-system has an ambiguity with respect to arrays.
    1357 Is the argument for @f2@ a single object or an array of objects?
    1358 If it is an array, only the first element of the array is acquired, which seems unsafe;
    1359 hence, @mutex@ is disallowed for array parameters.
    1360 \begin{cfa}
    1361 int f1( M & mutex m );                                          $\C{// allowed: recommended case}$
    1362 int f2( M * mutex m );                                          $\C{// disallowed: could be an array}$
    1363 int f3( M mutex m[$\,$] );                                      $\C{// disallowed: array length unknown}$
    1364 int f4( M ** mutex m );                                         $\C{// disallowed: could be an array}$
    1365 int f5( M * mutex m[$\,$] );                            $\C{// disallowed: array length unknown}$
    1366 \end{cfa}
    1367 % Note, not all array routines have distinct types: @f2@ and @f3@ have the same type, as do @f4@ and @f5@.
    1368 % However, even if the code generation could tell the difference, the extra information is still not sufficient to extend meaningfully the monitor call semantic.
    1369 
    1370 For object-oriented monitors, calling a mutex member \emph{implicitly} acquires mutual exclusion of the receiver object, @`rec`.foo(...)@.
    1371 \CFA has no receiver, and hence, must use an explicit mechanism to specify which object has mutual exclusion acquired.
    1372 A positive consequence of this design decision is the ability to support multi-monitor routines.
    1373 \begin{cfa}
    1374 int f( M & mutex x, M & mutex y );              $\C{// multiple monitor parameter of any type}$
    1375 M m1, m2;
    1376 f( m1, m2 );
    1377 \end{cfa}
    1378 (While object-oriented monitors can be extended with a mutex qualifier for multiple-monitor members, no prior example of this feature could be found.)
    1379 In practice, writing multi-locking routines that do not deadlock is tricky.
    1380 Having language support for such a feature is therefore a significant asset for \CFA.
    1381 
    1382 The capability to acquire multiple locks before entering a critical section is called \newterm{bulk acquire}.
    1383 In the previous example, \CFA guarantees the order of acquisition is consistent across calls to different routines using the same monitors as arguments.
    1384 This consistent ordering means acquiring multiple monitors is safe from deadlock.
    1385 However, users can force the acquiring order.
    1386 For example, notice the use of @mutex@/\lstinline[morekeywords=nomutex]@nomutex@ and how this affects the acquiring order:
    1387 \begin{cfa}
    1388 void foo( M & mutex m1, M & mutex m2 );         $\C{// acquire m1 and m2}$
     1666int f1( M & mutex m ); $\C{// single parameter object}$
     1667int f2( M * mutex m ); $\C{// single or multiple parameter object}$
     1668int f3( M * mutex m[$\,$] ); $\C{// multiple parameter object}$
      1669int f4( stack( M * ) & mutex m ); $\C{// multiple object data structure}$
     1670\end{cfa}
      1671Function @f1@ has a single parameter object, while @f2@'s indirection could reference a single object or an array of objects, whose static size is often unknown in C.
      1672Function @f3@ has a multiple object matrix, and @f4@ a multiple object data structure.
      1673As shown shortly, multiple object acquisition is possible, but the number of objects must be statically known.
      1674Therefore, \CFA only acquires one monitor per parameter with at most one level of indirection, excluding pointers, as it is impossible to determine the size statically.
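Hence, of the above parameter forms, only the reference case is accepted; a short sketch of how the rule applies to each form:
\begin{cfa}
int f1( M & mutex m );			$\C{// allowed: single object by reference}$
int f2( M * mutex m );			$\C{// disallowed: pointer could reference an array}$
int f3( M mutex m[$\,$] );		$\C{// disallowed: array length unknown}$
\end{cfa}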
     1675
     1676For object-oriented monitors, \eg Java, calling a mutex member \emph{implicitly} acquires mutual exclusion of the receiver object, @`rec`.foo(...)@.
     1677\CFA has no receiver, and hence, the explicit @mutex@ qualifier is used to specify which objects acquire mutual exclusion.
     1678A positive consequence of this design decision is the ability to support multi-monitor functions,\footnote{
     1679While object-oriented monitors can be extended with a mutex qualifier for multiple-monitor members, no prior example of this feature could be found.}
     1680called \newterm{bulk acquire}.
     1681\CFA guarantees acquisition order is consistent across calls to @mutex@ functions using the same monitors as arguments, so acquiring multiple monitors is safe from deadlock.
     1682Figure~\ref{f:BankTransfer} shows a trivial solution to the bank transfer problem~\cite{BankTransfer}, where two resources must be locked simultaneously, using \CFA monitors with implicit locking and \CC with explicit locking.
     1683A \CFA programmer only has to manage when to acquire mutual exclusion;
     1684a \CC programmer must select the correct lock and acquisition mechanism from a panoply of locking options.
     1685Making good choices for common cases in \CFA simplifies the programming experience and enhances safety.
     1686
     1687\begin{figure}
     1688\centering
     1689\begin{lrbox}{\myboxA}
     1690\begin{cfa}[aboveskip=0pt,belowskip=0pt]
     1691monitor BankAccount {
     1692
     1693        int balance;
     1694} b1 = { 0 }, b2 = { 0 };
     1695void deposit( BankAccount & `mutex` b,
     1696                        int deposit ) with(b) {
     1697        balance += deposit;
     1698}
     1699void transfer( BankAccount & `mutex` my,
     1700        BankAccount & `mutex` your, int me2you ) {
     1701
     1702        deposit( my, -me2you ); // debit
     1703        deposit( your, me2you ); // credit
     1704}
     1705`thread` Person { BankAccount & b1, & b2; };
     1706void main( Person & person ) with(person) {
     1707        for ( 10_000_000 ) {
     1708                if ( random() % 3 ) deposit( b1, 3 );
     1709                if ( random() % 3 ) transfer( b1, b2, 7 );
     1710        }
     1711}
     1712int main() {
     1713        `Person p1 = { b1, b2 }, p2 = { b2, b1 };`
     1714
     1715} // wait for threads to complete
     1716\end{cfa}
     1717\end{lrbox}
     1718
     1719\begin{lrbox}{\myboxB}
     1720\begin{cfa}[aboveskip=0pt,belowskip=0pt]
     1721struct BankAccount {
     1722        `recursive_mutex m;`
     1723        int balance = 0;
     1724} b1, b2;
     1725void deposit( BankAccount & b, int deposit ) {
     1726        `scoped_lock lock( b.m );`
     1727        b.balance += deposit;
     1728}
     1729void transfer( BankAccount & my,
     1730                        BankAccount & your, int me2you ) {
     1731        `scoped_lock lock( my.m, your.m );`
     1732        deposit( my, -me2you ); // debit
     1733        deposit( your, me2you ); // credit
     1734}
     1735
     1736void person( BankAccount & b1, BankAccount & b2 ) {
     1737        for ( int i = 0; i < 10$'$000$'$000; i += 1 ) {
     1738                if ( random() % 3 ) deposit( b1, 3 );
     1739                if ( random() % 3 ) transfer( b1, b2, 7 );
     1740        }
     1741}
     1742int main() {
     1743        `thread p1(person, ref(b1), ref(b2)), p2(person, ref(b2), ref(b1));`
     1744        `p1.join(); p2.join();`
     1745}
     1746\end{cfa}
     1747\end{lrbox}
     1748
     1749\subfloat[\CFA]{\label{f:CFABank}\usebox\myboxA}
     1750\hspace{3pt}
     1751\vrule
     1752\hspace{3pt}
     1753\subfloat[\CC]{\label{f:C++Bank}\usebox\myboxB}
     1754\hspace{3pt}
     1755\caption{Bank transfer problem}
     1756\label{f:BankTransfer}
     1757\end{figure}
     1758
      1759Users can still force the acquisition order by using @mutex@/\lstinline[morekeywords=nomutex]@nomutex@.
     1760\begin{cfa}
     1761void foo( M & mutex m1, M & mutex m2 ); $\C{// acquire m1 and m2}$
    13891762void bar( M & mutex m1, M & /* nomutex */ m2 ) { $\C{// acquire m1}$
    1390         ... foo( m1, m2 ); ...                                  $\C{// acquire m2}$
     1763        ... foo( m1, m2 ); ... $\C{// acquire m2}$
    13911764}
    13921765void baz( M & /* nomutex */ m1, M & mutex m2 ) { $\C{// acquire m2}$
    1393         ... foo( m1, m2 ); ...                                  $\C{// acquire m1}$
    1394 }
    1395 \end{cfa}
    1396 The multi-acquire semantics allows @bar@ or @baz@ to acquire a monitor lock and reacquire it in @foo@.
    1397 In the calls to @bar@ and @baz@, the monitors are acquired in opposite order.
    1398 
    1399 However, such use leads to lock acquisition-order problems resulting in deadlock~\cite{Lister77}, where detection requires dynamic tracking of monitor calls, and recovery requires implementing rollback semantics~\cite{Dice10}.
    1400 In \CFA, safety is guaranteed by using bulk acquire of all monitors for shared objects, whereas other monitor systems provide no aid.
    1401 While \CFA provides only a partial solution, the \CFA partial solution handles many useful cases.
    1402 \begin{cfa}
    1403 monitor Bank { ... };
    1404 void deposit( Bank & `mutex` b, int deposit );
    1405 void transfer( Bank & `mutex` mybank, Bank & `mutex` yourbank, int me2you) {
    1406         deposit( mybank, `-`me2you );                   $\C{// debit}$
    1407         deposit( yourbank, me2you );                    $\C{// credit}$
    1408 }
    1409 \end{cfa}
    1410 This example shows a trivial solution to the bank-account transfer problem~\cite{BankTransfer}.
    1411 Without multi- and bulk acquire, the solution to this problem requires careful engineering.
    1412 
    1413 
    1414 \subsection{\protect\lstinline|mutex| statement} \label{mutex-stmt}
    1415 
    1416 The monitor call-semantics associate all locking semantics to routines.
    1417 Like Java, \CFA offers an alternative @mutex@ statement to reduce refactoring and naming.
     1766        ... foo( m1, m2 ); ... $\C{// acquire m1}$
     1767}
     1768\end{cfa}
     1769The bulk-acquire semantics allow @bar@ or @baz@ to acquire a monitor lock and reacquire it in @foo@.
      1770The calls to @bar@ and @baz@ acquire the monitors in opposite order, possibly resulting in deadlock.
     1771However, this case is the simplest instance of the \emph{nested-monitor problem}~\cite{Lister77}, where monitors are acquired in sequence versus bulk.
     1772Detecting the nested-monitor problem requires dynamic tracking of monitor calls, and dealing with it requires rollback semantics~\cite{Dice10}.
     1773\CFA does not deal with this fundamental problem.
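For example, two threads making the opposing calls concurrently can deadlock; a sketch, assuming hypothetical thread types @T1@/@T2@:
\begin{cfa}
M m1, m2;
thread T1 {};
void main( T1 & t ) { bar( m1, m2 ); }	$\C{// holds m1, needs m2 in foo}$
thread T2 {};
void main( T2 & t ) { baz( m1, m2 ); }	$\C{// holds m2, needs m1 in foo}$
\end{cfa}
If @T1@ holds @m1@ while @T2@ holds @m2@, neither nested call to @foo@ can proceed.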
     1774
     1775Finally, like Java, \CFA offers an alternative @mutex@ statement to reduce refactoring and naming.
    14181776\begin{cquote}
    1419 \begin{tabular}{@{}c|@{\hspace{\parindentlnth}}c@{}}
    1420 routine call & @mutex@ statement \\
    1421 \begin{cfa}
    1422 monitor M {};
     1777\renewcommand{\arraystretch}{0.0}
     1778\begin{tabular}{@{}l@{\hspace{3\parindentlnth}}l@{}}
     1779\multicolumn{1}{c}{\textbf{\lstinline@mutex@ call}} & \multicolumn{1}{c}{\lstinline@mutex@ \textbf{statement}} \\
     1780\begin{cfa}[aboveskip=0pt,belowskip=0pt]
     1781monitor M { ... };
    14231782void foo( M & mutex m1, M & mutex m2 ) {
    14241783        // critical section
     
    14291788\end{cfa}
    14301789&
    1431 \begin{cfa}
     1790\begin{cfa}[aboveskip=0pt,belowskip=0pt]
    14321791
    14331792void bar( M & m1, M & m2 ) {
     
    14421801
    14431802
    1444 \section{Internal Scheduling}
    1445 \label{s:InternalScheduling}
    1446 
    1447 While monitor mutual-exclusion provides safe access to shared data, the monitor data may indicate that a thread accessing it cannot proceed, \eg a bounded buffer, Figure~\ref{f:GenericBoundedBuffer}, may be full/empty so producer/consumer threads must block.
     1803\subsection{Scheduling}
     1804\label{s:Scheduling}
     1805
     1806% There are many aspects of scheduling in a concurrency system, all related to resource utilization by waiting threads, \ie which thread gets the resource next.
     1807% Different forms of scheduling include access to processors by threads (see Section~\ref{s:RuntimeStructureCluster}), another is access to a shared resource by a lock or monitor.
     1808This section discusses monitor scheduling for waiting threads eligible for entry, \ie which thread gets the shared resource next. (See Section~\ref{s:RuntimeStructureCluster} for scheduling threads on virtual processors.)
      1809While monitor mutual-exclusion provides safe access to shared data, the monitor data may indicate that a thread accessing it cannot proceed, \eg a bounded buffer may be full/empty so producer/consumer threads must block.
    14481810Leaving the monitor and trying again (busy waiting) is impractical for high-level programming.
    1449 Monitors eliminate busy waiting by providing internal synchronization to schedule threads needing access to the shared data, where the synchronization is blocking (threads are parked) versus spinning.
    1450 The synchronization is generally achieved with internal~\cite{Hoare74} or external~\cite[\S~2.9.2]{uC++} scheduling, where \newterm{scheduling} is defined as indicating which thread acquires the critical section next.
     1811Monitors eliminate busy waiting by providing synchronization to schedule threads needing access to the shared data, where threads block versus spinning.
     1812Synchronization is generally achieved with internal~\cite{Hoare74} or external~\cite[\S~2.9.2]{uC++} scheduling.
    14511813\newterm{Internal scheduling} is characterized by each thread entering the monitor and making an individual decision about proceeding or blocking, while \newterm{external scheduling} is characterized by an entering thread making a decision about proceeding for itself and on behalf of other threads attempting entry.
    1452 
    1453 Figure~\ref{f:BBInt} shows a \CFA bounded-buffer with internal scheduling, where producers/consumers enter the monitor, see the buffer is full/empty, and block on an appropriate condition lock, @full@/@empty@.
    1454 The @wait@ routine atomically blocks the calling thread and implicitly releases the monitor lock(s) for all monitors in the routine's parameter list.
    1455 The appropriate condition lock is signalled to unblock an opposite kind of thread after an element is inserted/removed from the buffer.
    1456 Signalling is unconditional, because signalling an empty condition lock does nothing.
    1457 Signalling semantics cannot have the signaller and signalled thread in the monitor simultaneously, which means:
    1458 \begin{enumerate}
    1459 \item
    1460 The signalling thread returns immediately, and the signalled thread continues.
    1461 \item
    1462 The signalling thread continues and the signalled thread is marked for urgent unblocking at the next scheduling point (exit/wait).
    1463 \item
    1464 The signalling thread blocks but is marked for urgent unblocking at the next scheduling point and the signalled thread continues.
    1465 \end{enumerate}
    1466 The first approach is too restrictive, as it precludes solving a reasonable class of problems (\eg dating service).
    1467 \CFA supports the next two semantics as both are useful.
    1468 Finally, while it is common to store a @condition@ as a field of the monitor, in \CFA, a @condition@ variable can be created/stored independently.
    1469 Furthermore, a condition variable is tied to a \emph{group} of monitors on first use (called \newterm{branding}), which means that using internal scheduling with distinct sets of monitors requires one condition variable per set of monitors.
     1814Finally, \CFA monitors do not allow calling threads to barge ahead of signalled threads, which simplifies synchronization among threads in the monitor and increases correctness.
     1815If barging is allowed, synchronization between a signaller and signallee is difficult, often requiring additional flags and multiple unblock/block cycles.
      1816In fact, signals-as-hints are the complete opposite of the semantics proposed by Hoare in the seminal paper on monitors~\cite[p.~550]{Hoare74}.
     1817% \begin{cquote}
     1818% However, we decree that a signal operation be followed immediately by resumption of a waiting program, without possibility of an intervening procedure call from yet a third program.
     1819% It is only in this way that a waiting program has an absolute guarantee that it can acquire the resource just released by the signalling program without any danger that a third program will interpose a monitor entry and seize the resource instead.~\cite[p.~550]{Hoare74}
     1820% \end{cquote}
     1821Furthermore, \CFA concurrency has no spurious wakeup~\cite[\S~9]{Buhr05a}, which eliminates an implicit form of self barging.
     1822Hence, a \CFA @wait@ statement is not enclosed in a @while@ loop retesting a blocking predicate, which can cause thread starvation due to barging.
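For example, under barging or spurious-wakeup semantics, the blocking predicate must be retested in a loop, whereas \CFA needs only a single test; a sketch, assuming a monitor with counter @count@ and condition @full@:
\begin{cfa}
while ( count == 10 ) wait( full );		$\C{// barging semantics: predicate may be false on wakeup}$
if ( count == 10 ) wait( full );		$\C{// CFA: predicate holds on wakeup}$
\end{cfa}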
     1823
     1824Figure~\ref{f:MonitorScheduling} shows general internal/external scheduling (for the bounded-buffer example in Figure~\ref{f:InternalExternalScheduling}).
      1825External calling threads block on the calling queue if the monitor is occupied; otherwise they enter in FIFO order.
      1826Internal threads block on condition queues via @wait@ and reenter from the condition in FIFO order.
      1827Alternatively, internal threads block on urgent via @signal_block@ or @waitfor@, and reenter implicitly when the monitor becomes empty, \ie, the thread in the monitor exits or waits.
     1828
     1829There are three signalling mechanisms to unblock waiting threads to enter the monitor.
     1830Note, signalling cannot have the signaller and signalled thread in the monitor simultaneously because of the mutual exclusion, so either the signaller or signallee can proceed.
     1831For internal scheduling, threads are unblocked from condition queues using @signal@, where the signallee is moved to urgent and the signaller continues (solid line).
     1832Multiple signals move multiple signallees to urgent until the condition is empty.
     1833When the signaller exits or waits, a thread blocked on urgent is processed before calling threads to prevent barging.
     1834(Java conceptually moves the signalled thread to the calling queue, and hence, allows barging.)
     1835The alternative unblock is in the opposite order using @signal_block@, where the signaller is moved to urgent and the signallee continues (dashed line), and is implicitly unblocked from urgent when the signallee exits or waits.
     1836
     1837For external scheduling, the condition queues are not used;
     1838instead threads are unblocked directly from the calling queue using @waitfor@ based on function names requesting mutual exclusion.
     1839(The linear search through the calling queue to locate a particular call can be reduced to $O(1)$.)
      1840The @waitfor@ has the same semantics as @signal_block@, where the accepted (signalled) thread executes before the acceptor, which waits on urgent.
      1841Executing multiple @waitfor@s from within accepted functions causes the accepting threads to accumulate on urgent.
     1842External scheduling requires urgent to be a stack, because the signaller expects to execute immediately after the specified monitor call has exited or waited.
     1843Internal scheduling behaves the same for an urgent stack or queue, except for multiple signalling, where the threads unblock from urgent in reverse order from signalling.
     1844If the restart order is important, multiple signalling by a signal thread can be transformed into daisy-chain signalling among threads, where each thread signals the next thread.
     1845We tried both a stack for @waitfor@ and queue for signalling, but that resulted in complex semantics about which thread enters next.
     1846Hence, \CFA uses a single urgent stack to correctly handle @waitfor@ and adequately support both forms of signalling.
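A sketch of the daisy-chain transformation, assuming a monitor @M@ with condition @c@:
\begin{cfa}
void release_all( M & mutex m ) with( m ) {
	signal( c );				$\C{// wake only the first waiter}$
}
void waiter( M & mutex m ) with( m ) {
	wait( c );
	signal( c );				$\C{// pass the signal to the next waiter}$
}
\end{cfa}
Each woken thread signals the next before continuing, so threads restart in their original (FIFO) waiting order regardless of the urgent stack.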
     1847
     1848\begin{figure}
     1849\centering
     1850% \subfloat[Scheduling Statements] {
     1851% \label{fig:SchedulingStatements}
     1852% {\resizebox{0.45\textwidth}{!}{\input{CondSigWait.pstex_t}}}
     1853\input{CondSigWait.pstex_t}
     1854% }% subfloat
     1855% \quad
     1856% \subfloat[Bulk acquire monitor] {
     1857% \label{fig:BulkMonitor}
     1858% {\resizebox{0.45\textwidth}{!}{\input{ext_monitor.pstex_t}}}
     1859% }% subfloat
     1860\caption{Monitor Scheduling}
     1861\label{f:MonitorScheduling}
     1862\end{figure}
     1863
     1864Figure~\ref{f:BBInt} shows a \CFA generic bounded-buffer with internal scheduling, where producers/consumers enter the monitor, detect the buffer is full/empty, and block on an appropriate condition variable, @full@/@empty@.
     1865The @wait@ function atomically blocks the calling thread and implicitly releases the monitor lock(s) for all monitors in the function's parameter list.
     1866The appropriate condition variable is signalled to unblock an opposite kind of thread after an element is inserted/removed from the buffer.
     1867Signalling is unconditional, because signalling an empty condition variable does nothing.
      1868It is common to declare condition variables as monitor fields to prevent shared access, hence no locking is required, as the conditions are protected by the monitor lock.
     1869In \CFA, a condition variable can be created/stored independently.
     1870% To still prevent expensive locking on access, a condition variable is tied to a \emph{group} of monitors on first use, called \newterm{branding}, resulting in a low-cost boolean test to detect sharing from other monitors.
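A sketch of an independent declaration, assuming the condition is implicitly tied (branded) to the monitors of its first use:
\begin{cfa}
condition c;					$\C{// declared outside any monitor}$
void foo( M & mutex m ) {
	... wait( c ); ...			$\C{// c tied to m on first use}$
}
\end{cfa}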
     1871
     1872% Signalling semantics cannot have the signaller and signalled thread in the monitor simultaneously, which means:
     1873% \begin{enumerate}
     1874% \item
     1875% The signalling thread returns immediately and the signalled thread continues.
     1876% \item
     1877% The signalling thread continues and the signalled thread is marked for urgent unblocking at the next scheduling point (exit/wait).
     1878% \item
     1879% The signalling thread blocks but is marked for urgent unblocking at the next scheduling point and the signalled thread continues.
     1880% \end{enumerate}
     1881% The first approach is too restrictive, as it precludes solving a reasonable class of problems, \eg dating service (see Figure~\ref{f:DatingService}).
     1882% \CFA supports the next two semantics as both are useful.
    14701883
    14711884\begin{figure}
     
    14811894        };
    14821895        void ?{}( Buffer(T) & buffer ) with(buffer) {
    1483                 [front, back, count] = 0;
     1896                front = back = count = 0;
    14841897        }
    1485 
    14861898        void insert( Buffer(T) & mutex buffer, T elem )
    14871899                                with(buffer) {
     
    15001912\end{lrbox}
    15011913
     1914% \newbox\myboxB
     1915% \begin{lrbox}{\myboxB}
     1916% \begin{cfa}[aboveskip=0pt,belowskip=0pt]
     1917% forall( otype T ) { // distribute forall
     1918%       monitor Buffer {
     1919%
     1920%               int front, back, count;
     1921%               T elements[10];
     1922%       };
     1923%       void ?{}( Buffer(T) & buffer ) with(buffer) {
     1924%               [front, back, count] = 0;
     1925%       }
     1926%       T remove( Buffer(T) & mutex buffer ); // forward
     1927%       void insert( Buffer(T) & mutex buffer, T elem )
     1928%                               with(buffer) {
     1929%               if ( count == 10 ) `waitfor( remove, buffer )`;
     1930%               // insert elem into buffer
     1931%
     1932%       }
     1933%       T remove( Buffer(T) & mutex buffer ) with(buffer) {
     1934%               if ( count == 0 ) `waitfor( insert, buffer )`;
     1935%               // remove elem from buffer
     1936%
     1937%               return elem;
     1938%       }
     1939% }
     1940% \end{cfa}
     1941% \end{lrbox}
     1942
    15021943\newbox\myboxB
    15031944\begin{lrbox}{\myboxB}
    15041945\begin{cfa}[aboveskip=0pt,belowskip=0pt]
    1505 forall( otype T ) { // distribute forall
    1506         monitor Buffer {
    1507 
    1508                 int front, back, count;
    1509                 T elements[10];
    1510         };
    1511         void ?{}( Buffer(T) & buffer ) with(buffer) {
    1512                 [front, back, count] = 0;
    1513         }
    1514         T remove( Buffer(T) & mutex buffer ); // forward
    1515         void insert( Buffer(T) & mutex buffer, T elem )
    1516                                 with(buffer) {
    1517                 if ( count == 10 ) `waitfor( remove, buffer )`;
    1518                 // insert elem into buffer
    1519 
    1520         }
    1521         T remove( Buffer(T) & mutex buffer ) with(buffer) {
    1522                 if ( count == 0 ) `waitfor( insert, buffer )`;
    1523                 // remove elem from buffer
    1524 
    1525                 return elem;
    1526         }
    1527 }
     1946monitor ReadersWriter {
     1947        int rcnt, wcnt; // readers/writer using resource
     1948};
     1949void ?{}( ReadersWriter & rw ) with(rw) {
     1950        rcnt = wcnt = 0;
     1951}
     1952void EndRead( ReadersWriter & mutex rw ) with(rw) {
     1953        rcnt -= 1;
     1954}
     1955void EndWrite( ReadersWriter & mutex rw ) with(rw) {
     1956        wcnt = 0;
     1957}
     1958void StartRead( ReadersWriter & mutex rw ) with(rw) {
     1959        if ( wcnt > 0 ) `waitfor( EndWrite, rw );`
     1960        rcnt += 1;
     1961}
     1962void StartWrite( ReadersWriter & mutex rw ) with(rw) {
     1963        if ( wcnt > 0 ) `waitfor( EndWrite, rw );`
     1964        else while ( rcnt > 0 ) `waitfor( EndRead, rw );`
     1965        wcnt = 1;
     1966}
     1967
    15281968\end{cfa}
    15291969\end{lrbox}
    15301970
    1531 \subfloat[Internal Scheduling]{\label{f:BBInt}\usebox\myboxA}
    1532 %\qquad
    1533 \subfloat[External Scheduling]{\label{f:BBExt}\usebox\myboxB}
    1534 \caption{Generic Bounded-Buffer}
    1535 \label{f:GenericBoundedBuffer}
     1971\subfloat[Generic bounded buffer, internal scheduling]{\label{f:BBInt}\usebox\myboxA}
     1972\hspace{3pt}
     1973\vrule
     1974\hspace{3pt}
     1975\subfloat[Readers / writer lock, external scheduling]{\label{f:RWExt}\usebox\myboxB}
     1976
     1977\caption{Internal / external scheduling}
     1978\label{f:InternalExternalScheduling}
    15361979\end{figure}
    15371980
    1538 Figure~\ref{f:BBExt} shows a \CFA bounded-buffer with external scheduling, where producers/consumers detecting a full/empty buffer block and prevent more producers/consumers from entering the monitor until the buffer has a free/empty slot.
    1539 External scheduling is controlled by the @waitfor@ statement, which atomically blocks the calling thread, releases the monitor lock, and restricts the routine calls that can next acquire mutual exclusion.
     1981Figure~\ref{f:BBInt} can be transformed into external scheduling by removing the condition variables and signals/waits, and adding the following lines at the locations of the current @wait@s in @insert@/@remove@, respectively.
     1982\begin{cfa}[aboveskip=2pt,belowskip=1pt]
     1983if ( count == 10 ) `waitfor( remove, buffer )`;       |      if ( count == 0 ) `waitfor( insert, buffer )`;
     1984\end{cfa}
      1985Here, the producers/consumers detect a full/\-empty buffer and prevent more producers/consumers from entering the monitor until there is a free/empty slot in the buffer.
     1986External scheduling is controlled by the @waitfor@ statement, which atomically blocks the calling thread, releases the monitor lock, and restricts the function calls that can next acquire mutual exclusion.
    15401987If the buffer is full, only calls to @remove@ can acquire the buffer, and if the buffer is empty, only calls to @insert@ can acquire the buffer.
    1541 Threads making calls to routines that are currently excluded block outside (externally) of the monitor on a calling queue, versus blocking on condition queues inside the monitor.
    1542 
    1543 Both internal and external scheduling extend to multiple monitors in a natural way.
    1544 \begin{cfa}
    1545 monitor M { `condition e`; ... };
    1546 void foo( M & mutex m1, M & mutex m2 ) {
    1547         ... wait( `e` ); ...                                    $\C{// wait( e, m1, m2 )}$
    1548         ... wait( `e, m1` ); ...
    1549         ... wait( `e, m2` ); ...
    1550 }
    1551 
    1552 void rtn$\(_1\)$( M & mutex m1, M & mutex m2 );
    1553 void rtn$\(_2\)$( M & mutex m1 );
    1554 void bar( M & mutex m1, M & mutex m2 ) {
    1555         ... waitfor( `rtn` ); ...                               $\C{// waitfor( rtn\(_1\), m1, m2 )}$
    1556         ... waitfor( `rtn, m1` ); ...                   $\C{// waitfor( rtn\(_2\), m1 )}$
    1557 }
    1558 \end{cfa}
    1559 For @wait( e )@, the default semantics is to atomically block the waiting thread and release all acquired mutex types in the parameter list, \ie @wait( e, m1, m2 )@.
    1560 To override the implicit multi-monitor wait, specific mutex parameter(s) can be specified, \eg @wait( e, m1 )@.
    1561 Wait statically verifies the released monitors are the acquired mutex-parameters so unconditional release is safe.
    1562 Similarly, for @waitfor( rtn, ... )@, the default semantics is to atomically block the acceptor and release all acquired mutex types in the parameter list, \ie @waitfor( rtn, m1, m2 )@.
    1563 To override the implicit multi-monitor wait, specific mutex parameter(s) can be specified, \eg @waitfor( rtn, m1 )@.
    1564 Waitfor statically verifies the released monitors are the same as the acquired mutex-parameters of the given routine or routine pointer.
    1565 To statically verify the released monitors match with the accepted routine's mutex parameters, the routine (pointer) prototype must be accessible.
    1566 
    1567 Given the ability to release a subset of acquired monitors can result in a \newterm{nested monitor}~\cite{Lister77} deadlock.
    1568 \begin{cfa}
    1569 void foo( M & mutex m1, M & mutex m2 ) {
    1570         ... wait( `e, m1` ); ...                                $\C{// release m1, keeping m2 acquired )}$
    1571 void baz( M & mutex m1, M & mutex m2 ) {        $\C{// must acquire m1 and m2 )}$
    1572         ... signal( `e` ); ...
    1573 \end{cfa}
    1574 The @wait@ only releases @m1@ so the signalling thread cannot acquire both @m1@ and @m2@ to  enter @baz@ to get to the @signal@.
    1575 While deadlock issues can occur with multiple/nesting acquisition, this issue results from the fact that locks, and by extension monitors, are not perfectly composable.
    1576 
    1577 Finally, an important aspect of monitor implementation is barging, \ie can calling threads barge ahead of signalled threads?
    1578 If barging is allowed, synchronization between a singller and signallee is difficult, often requiring multiple unblock/block cycles (looping around a wait rechecking if a condition is met).
    1579 \begin{quote}
    1580 However, we decree that a signal operation be followed immediately by resumption of a waiting program, without possibility of an intervening procedure call from yet a third program.
    1581 It is only in this way that a waiting program has an absolute guarantee that it can acquire the resource just released by the signalling program without any danger that a third program will interpose a monitor entry and seize the resource instead.~\cite[p.~550]{Hoare74}
    1582 \end{quote}
    1583 \CFA scheduling \emph{precludes} barging, which simplifies synchronization among threads in the monitor and increases correctness.
    1584 For example, there are no loops in either bounded buffer solution in Figure~\ref{f:GenericBoundedBuffer}.
    1585 Supporting barging prevention as well as extending internal scheduling to multiple monitors is the main source of complexity in the design and implementation of \CFA concurrency.
    1586 
    1587 
    1588 \subsection{Barging Prevention}
    1589 
    1590 Figure~\ref{f:BargingPrevention} shows \CFA code where bulk acquire adds complexity to the internal-signalling semantics.
    1591 The complexity begins at the end of the inner @mutex@ statement, where the semantics of internal scheduling need to be extended for multiple monitors.
    1592 The problem is that bulk acquire is used in the inner @mutex@ statement where one of the monitors is already acquired.
    1593 When the signalling thread reaches the end of the inner @mutex@ statement, it should transfer ownership of @m1@ and @m2@ to the waiting thread to prevent barging into the outer @mutex@ statement by another thread.
    1594 However, both the signalling and signalled threads still need monitor @m1@.
    1595 
    1596 \begin{figure}
    1597 \newbox\myboxA
    1598 \begin{lrbox}{\myboxA}
    1599 \begin{cfa}[aboveskip=0pt,belowskip=0pt]
    1600 monitor M m1, m2;
    1601 condition c;
    1602 mutex( m1 ) {
    1603         ...
    1604         mutex( m1, m2 ) {
    1605                 ... `wait( c )`; // block and release m1, m2
    1606                 // m1, m2 acquired
    1607         } // $\LstCommentStyle{\color{red}release m2}$
    1608         // m1 acquired
    1609 } // release m1
    1610 \end{cfa}
    1611 \end{lrbox}
    1612 
    1613 \newbox\myboxB
    1614 \begin{lrbox}{\myboxB}
    1615 \begin{cfa}[aboveskip=0pt,belowskip=0pt]
    1616 
    1617 
    1618 mutex( m1 ) {
    1619         ...
    1620         mutex( m1, m2 ) {
    1621                 ... `signal( c )`; ...
    1622                 // m1, m2 acquired
    1623         } // $\LstCommentStyle{\color{red}release m2}$
    1624         // m1 acquired
    1625 } // release m1
    1626 \end{cfa}
    1627 \end{lrbox}
    1628 
    1629 \newbox\myboxC
    1630 \begin{lrbox}{\myboxC}
    1631 \begin{cfa}[aboveskip=0pt,belowskip=0pt]
    1632 
    1633 
    1634 mutex( m1 ) {
    1635         ... `wait( c )`; ...
    1636         // m1 acquired
    1637 } // $\LstCommentStyle{\color{red}release m1}$
    1638 
    1639 
    1640 
    1641 
    1642 \end{cfa}
    1643 \end{lrbox}
    1644 
    1645 \begin{cquote}
    1646 \subfloat[Waiting Thread]{\label{f:WaitingThread}\usebox\myboxA}
    1647 \hspace{2\parindentlnth}
    1648 \subfloat[Signalling Thread]{\label{f:SignallingThread}\usebox\myboxB}
    1649 \hspace{2\parindentlnth}
    1650 \subfloat[Other Waiting Thread]{\label{f:SignallingThread}\usebox\myboxC}
    1651 \end{cquote}
    1652 \caption{Barging Prevention}
    1653 \label{f:BargingPrevention}
    1654 \end{figure}
    1655 
    1656 The obvious solution to the problem of multi-monitor scheduling is to keep ownership of all locks until the last lock is ready to be transferred.
    1657 It can be argued that that moment is when the last lock is no longer needed, because this semantics fits most closely to the behaviour of single-monitor scheduling.
    1658 This solution has the main benefit of transferring ownership of groups of monitors, which simplifies the semantics from multiple objects to a single group of objects, effectively making the existing single-monitor semantic viable by simply changing monitors to monitor groups.
    1659 This solution releases the monitors once every monitor in a group can be released.
    1660 However, since some monitors are never released (\eg the monitor of a thread), this interpretation means a group might never be released.
    1661 A more interesting interpretation is to transfer the group until all its monitors are released, which means the group is not passed further and a thread can retain its locks.
    1662 
    1663 However, listing \ref{f:int-secret} shows this solution can become much more complicated depending on what is executed while secretly holding B at line \ref{line:secret}, while avoiding the need to transfer ownership of a subset of the condition monitors.
    1664 Figure~\ref{f:dependency} shows a slightly different example where a third thread is waiting on monitor @A@, using a different condition variable.
    1665 Because the third thread is signalled when secretly holding @B@, the goal  becomes unreachable.
    1666 Depending on the order of signals (listing \ref{f:dependency} line \ref{line:signal-ab} and \ref{line:signal-a}) two cases can happen:
    1667 
    1668 \begin{comment}
    1669 \paragraph{Case 1: thread $\alpha$ goes first.} In this case, the problem is that monitor @A@ needs to be passed to thread $\beta$ when thread $\alpha$ is done with it.
    1670 \paragraph{Case 2: thread $\beta$ goes first.} In this case, the problem is that monitor @B@ needs to be retained and passed to thread $\alpha$ along with monitor @A@, which can be done directly or possibly using thread $\beta$ as an intermediate.
    1671 \\
    1672 
    1673 Note that ordering is not determined by a race condition but by whether signalled threads are enqueued in FIFO or FILO order.
    1674 However, regardless of the answer, users can move line \ref{line:signal-a} before line \ref{line:signal-ab} and get the reverse effect for listing \ref{f:dependency}.
    1675 
    1676 In both cases, the threads need to be able to distinguish, on a per monitor basis, which ones need to be released and which ones need to be transferred, which means knowing when to release a group becomes complex and inefficient (see next section) and therefore effectively precludes this approach.
    1677 
    1678 
    1679 \subsubsection{Dependency graphs}
    1680 
    1681 \begin{figure}
    1682 \begin{multicols}{3}
    1683 Thread $\alpha$
    1684 \begin{cfa}[numbers=left, firstnumber=1]
    1685 acquire A
    1686         acquire A & B
    1687                 wait A & B
    1688         release A & B
    1689 release A
    1690 \end{cfa}
    1691 \columnbreak
    1692 Thread $\gamma$
    1693 \begin{cfa}[numbers=left, firstnumber=6, escapechar=|]
    1694 acquire A
    1695         acquire A & B
    1696                 |\label{line:signal-ab}|signal A & B
    1697         |\label{line:release-ab}|release A & B
    1698         |\label{line:signal-a}|signal A
    1699 |\label{line:release-a}|release A
    1700 \end{cfa}
    1701 \columnbreak
    1702 Thread $\beta$
    1703 \begin{cfa}[numbers=left, firstnumber=12, escapechar=|]
    1704 acquire A
    1705         wait A
    1706 |\label{line:release-aa}|release A
    1707 \end{cfa}
    1708 \end{multicols}
    1709 \begin{cfa}[caption={Pseudo-code for the three thread example.},label={f:dependency}]
    1710 \end{cfa}
    1711 \begin{center}
    1712 \input{dependency}
    1713 \end{center}
    1714 \caption{Dependency graph of the statements in listing \ref{f:dependency}}
    1715 \label{fig:dependency}
    1716 \end{figure}
    1717 
    1718 In listing \ref{f:int-bulk-cfa}, there is a solution that satisfies both barging prevention and mutual exclusion.
    1719 If ownership of both monitors is transferred to the waiter when the signaller releases @A & B@, and the waiter then transfers ownership of @A@ back to the signaller when it releases it, then the problem is solved (@B@ is no longer in use at this point).
    1720 Dynamically finding the correct order is therefore the second possible solution.
    1721 The problem is effectively resolving a dependency graph of ownership requirements.
    1722 Here even the simplest of code snippets requires two transfers and has a super-linear complexity.
    1723 This complexity can be seen in listing \ref{f:explosion}, which is just a direct extension to three monitors, requires at least three ownership transfer and has multiple solutions.
    1724 Furthermore, the presence of multiple solutions for ownership transfer can cause deadlock problems if a specific solution is not consistently picked, in the same way that multiple lock-acquisition orders can cause deadlocks.
    1725 \begin{figure}
    1726 \begin{multicols}{2}
    1727 \begin{cfa}
    1728 acquire A
    1729         acquire B
    1730                 acquire C
    1731                         wait A & B & C
    1732                 release C
    1733         release B
    1734 release A
    1735 \end{cfa}
    1736 
    1737 \columnbreak
    1738 
    1739 \begin{cfa}
    1740 acquire A
    1741         acquire B
    1742                 acquire C
    1743                         signal A & B & C
    1744                 release C
    1745         release B
    1746 release A
    1747 \end{cfa}
    1748 \end{multicols}
    1749 \begin{cfa}[caption={Extension to three monitors of listing \ref{f:int-bulk-cfa}},label={f:explosion}]
    1750 \end{cfa}
    1751 \end{figure}
    1752 
    1753 Given the three threads example in listing \ref{f:dependency}, figure \ref{fig:dependency} shows the corresponding dependency graph that results, where every node is a statement of one of the three threads, and the arrows the dependency of that statement (\eg $\alpha1$ must happen before $\alpha2$).
    1754 The extra challenge is that this dependency graph is effectively post-mortem, but the runtime system needs to be able to build and solve these graphs as the dependencies unfold.
    1755 Resolving dependency graphs being a complex and expensive endeavour, this solution is not the preferred one.
    1756 
    1757 \subsubsection{Partial Signalling} \label{partial-sig}
    1758 \end{comment}
    1759 
    1760 Finally, the solution that is chosen for \CFA is to use partial signalling.
    1761 Again using listing \ref{f:int-bulk-cfa}, the partial signalling solution transfers ownership of monitor @B@ at line \ref{line:signal1} to the waiter but does not wake the waiting thread since it is still using monitor @A@.
    1762 Only when it reaches line \ref{line:lastRelease} does it actually wake up the waiting thread.
    1763 This solution has the benefit that complexity is encapsulated into only two actions: passing monitors to the next owner when they should be released and conditionally waking threads if all conditions are met.
    1764 This solution has a much simpler implementation than a dependency-graph solving algorithm, which is why it was chosen.
    1765 Furthermore, after being fully implemented, this solution does not appear to have any significant downsides.
    1766 
    1767 Using partial signalling, listing \ref{f:dependency} can be solved easily:
    1768 \begin{itemize}
    1769         \item When thread $\gamma$ reaches line \ref{line:release-ab} it transfers monitor @B@ to thread $\alpha$ and continues to hold monitor @A@.
    1770         \item When thread $\gamma$ reaches line \ref{line:release-a}  it transfers monitor @A@ to thread $\beta$  and wakes it up.
    1771         \item When thread $\beta$  reaches line \ref{line:release-aa} it transfers monitor @A@ to thread $\alpha$ and wakes it up.
    1772 \end{itemize}
    1773 
    1774 
    1775 \subsection{Signalling: Now or Later}
     1988Threads calling excluded functions block outside of (external to) the monitor on the calling queue, versus blocking on condition queues inside of (internal to) the monitor.
     1989Figure~\ref{f:RWExt} shows a readers/writer lock written using external scheduling, where a waiting reader detects a writer using the resource and restricts further calls until the writer exits by calling @EndWrite@.
     1990The writer does a similar action for each reader or writer using the resource.
      1991Note, no new calls to @StartRead@/@StartWrite@ may occur when waiting for the call to @EndRead@/@EndWrite@.
      1992External scheduling allows waiting for events from other threads while restricting unrelated events that would otherwise have to wait on conditions in the monitor.
      1993The mechanism can be expressed in terms of control flow, \eg Ada @accept@ or \uC @_Accept@, or in terms of data, \eg Go @select@ on channels.
     1994While both mechanisms have strengths and weaknesses, this project uses the control-flow mechanism to be consistent with other language features.
     1995% Two challenges specific to \CFA for external scheduling are loose object-definitions (see Section~\ref{s:LooseObjectDefinitions}) and multiple-monitor functions (see Section~\ref{s:Multi-MonitorScheduling}).
     1996
     1997Figure~\ref{f:DatingService} shows a dating service demonstrating non-blocking and blocking signalling.
     1998The dating service matches girl and boy threads with matching compatibility codes so they can exchange phone numbers.
     1999A thread blocks until an appropriate partner arrives.
     2000The complexity is exchanging phone numbers in the monitor because of the mutual-exclusion property.
     2001For signal scheduling, the @exchange@ condition is necessary to block the thread finding the match, while the matcher unblocks to take the opposite number, post its phone number, and unblock the partner.
      2002For signal-block scheduling, the implicit urgent-queue replaces the explicit @exchange@-condition and @signal_block@ puts the finding thread on the urgent condition and unblocks the matcher.
     2003The dating service is an example of a monitor that cannot be written using external scheduling because it requires knowledge of calling parameters to make scheduling decisions, and parameters of waiting threads are unavailable;
     2004as well, an arriving thread may not find a partner and must wait, which requires a condition variable, and condition variables imply internal scheduling.
     2005Furthermore, barging corrupts the dating service during an exchange because a barger may also match and change the phone numbers, invalidating the previous exchange phone number.
     2006Putting loops around the @wait@s does not correct the problem;
     2007the simple solution must be restructured to account for barging.
    17762008
    17772009\begin{figure}
     
    17842016        int GirlPhNo, BoyPhNo;
    17852017        condition Girls[CCodes], Boys[CCodes];
    1786         condition exchange;
     2018        `condition exchange;`
    17872019};
    17882020int girl( DS & mutex ds, int phNo, int ccode ) {
     
    17902022                wait( Girls[ccode] );
    17912023                GirlPhNo = phNo;
    1792                 exchange.signal();
     2024                `signal( exchange );`
    17932025        } else {
    17942026                GirlPhNo = phNo;
    1795                 signal( Boys[ccode] );
    1796                 exchange.wait();
    1797         } // if
     2027                `signal( Boys[ccode] );`
     2028                `wait( exchange );`
     2029        }
    17982030        return BoyPhNo;
    17992031}
     
    18202052        } else {
    18212053                GirlPhNo = phNo; // make phone number available
    1822                 signal_block( Boys[ccode] ); // restart boy
     2054                `signal_block( Boys[ccode] );` // restart boy
    18232055
    18242056        } // if
     
    18342066\qquad
    18352067\subfloat[\lstinline@signal_block@]{\label{f:DatingSignalBlock}\usebox\myboxB}
    1836 \caption{Dating service. }
    1837 \label{f:Dating service}
     2068\caption{Dating service}
     2069\label{f:DatingService}
    18382070\end{figure}
    18392071
    1840 An important note is that, until now, signalling a monitor was a delayed operation.
    1841 The ownership of the monitor is transferred only when the monitor would have otherwise been released, not at the point of the @signal@ statement.
    1842 However, in some cases, it may be more convenient for users to immediately transfer ownership to the thread that is waiting for cooperation, which is achieved using the @signal_block@ routine.
    1843 
    1844 The example in table \ref{tbl:datingservice} highlights the difference in behaviour.
    1845 As mentioned, @signal@ only transfers ownership once the current critical section exits; this behaviour requires additional synchronization when a two-way handshake is needed.
    1846 To avoid this explicit synchronization, the @condition@ type offers the @signal_block@ routine, which handles the two-way handshake as shown in the example.
    1847 This feature removes the need for a second condition variables and simplifies programming.
    1848 Like every other monitor semantic, @signal_block@ uses barging prevention, which means mutual-exclusion is baton-passed both on the front end and the back end of the call to @signal_block@, meaning no other thread can acquire the monitor either before or after the call.
    1849 
    1850 % ======================================================================
    1851 % ======================================================================
    1852 \section{External scheduling} \label{extsched}
    1853 % ======================================================================
    1854 % ======================================================================
    1855 An alternative to internal scheduling is external scheduling (see Table~\ref{tbl:sched}).
    1856 
    1857 \begin{comment}
    1858 \begin{table}
    1859 \begin{tabular}{|c|c|c|}
    1860 Internal Scheduling & External Scheduling & Go\\
    1861 \hline
    1862 \begin{uC++}[tabsize=3]
    1863 _Monitor Semaphore {
    1864         condition c;
    1865         bool inUse;
    1866 public:
    1867         void P() {
    1868                 if(inUse)
    1869                         wait(c);
    1870                 inUse = true;
    1871         }
    1872         void V() {
    1873                 inUse = false;
    1874                 signal(c);
    1875         }
    1876 }
    1877 \end{uC++}&\begin{uC++}[tabsize=3]
    1878 _Monitor Semaphore {
    1879 
    1880         bool inUse;
    1881 public:
    1882         void P() {
    1883                 if(inUse)
    1884                         _Accept(V);
    1885                 inUse = true;
    1886         }
    1887         void V() {
    1888                 inUse = false;
    1889 
    1890         }
    1891 }
    1892 \end{uC++}&\begin{Go}[tabsize=3]
    1893 type MySem struct {
    1894         inUse bool
    1895         c     chan bool
    1896 }
    1897 
    1898 // acquire
    1899 func (s MySem) P() {
    1900         if s.inUse {
    1901                 select {
    1902                 case <-s.c:
    1903                 }
    1904         }
    1905         s.inUse = true
    1906 }
    1907 
    1908 // release
    1909 func (s MySem) V() {
    1910         s.inUse = false
    1911 
    1912         // This actually deadlocks
    1913         // when single thread
    1914         s.c <- false
    1915 }
    1916 \end{Go}
      2072In summary, for internal scheduling, non-blocking signalling (as in the producer/consumer example) is used when the signaller is providing the cooperation for a waiting thread;
      2073the signaller enters the monitor and changes state, detects a waiting thread that can use the state, performs a non-blocking signal on the condition queue for the waiting thread, and exits the monitor to run concurrently.
      2074The waiter unblocks next from the urgent queue, uses/takes the state, and exits the monitor.
      2075Blocking signal is the reverse, where the waiter is providing the cooperation for the signalling thread;
      2076the signaller enters the monitor, detects a waiting thread providing the necessary state, performs a blocking signal to place itself on the urgent queue, and unblocks the waiter.
     2077The waiter changes state and exits the monitor, and the signaller unblocks next from the urgent queue to use/take the state.
     2078
     2079Both internal and external scheduling extend to multiple monitors in a natural way.
     2080\begin{cquote}
     2081\begin{tabular}{@{}l@{\hspace{3\parindentlnth}}l@{}}
     2082\begin{cfa}
     2083monitor M { `condition e`; ... };
     2084void foo( M & mutex m1, M & mutex m2 ) {
     2085        ... wait( `e` ); ...   // wait( e, m1, m2 )
     2086        ... wait( `e, m1` ); ...
     2087        ... wait( `e, m2` ); ...
     2088}
     2089\end{cfa}
     2090&
     2091\begin{cfa}
     2092void rtn$\(_1\)$( M & mutex m1, M & mutex m2 );
     2093void rtn$\(_2\)$( M & mutex m1 );
     2094void bar( M & mutex m1, M & mutex m2 ) {
     2095        ... waitfor( `rtn` ); ...       // $\LstCommentStyle{waitfor( rtn\(_1\), m1, m2 )}$
     2096        ... waitfor( `rtn, m1` ); ... // $\LstCommentStyle{waitfor( rtn\(_2\), m1 )}$
     2097}
     2098\end{cfa}
    19172099\end{tabular}
    1918 \caption{Different forms of scheduling.}
    1919 \label{tbl:sched}
    1920 \end{table}
    1921 \end{comment}
    1922 
    1923 This method is more constrained and explicit, which helps users reduce the non-deterministic nature of concurrency.
    1924 Indeed, as the following examples demonstrate, external scheduling allows users to wait for events from other threads without the concern of unrelated events occurring.
    1925 External scheduling can generally be done either in terms of control flow (\eg Ada with @accept@, \uC with @_Accept@) or in terms of data (\eg Go with channels).
    1926 Of course, both of these paradigms have their own strengths and weaknesses, but for this project, control-flow semantics was chosen to stay consistent with the rest of the language's semantics.
    1927 Two challenges specific to \CFA arise when trying to add external scheduling with loose object definitions and multiple-monitor routines.
    1928 The previous example shows a simple use of @_Accept@ versus @wait@/@signal@ and its advantages.
    1929 Note that while other languages often use @accept@/@select@ as the core external scheduling keyword, \CFA uses @waitfor@ to prevent name collisions with existing socket \textbf{api}s.
    1930 
    1931 For the @P@ member above using internal scheduling, the call to @wait@ only guarantees that @V@ is the last routine to access the monitor, allowing a third routine, say @isInUse()@, to acquire mutual exclusion several times while routine @P@ is waiting.
    1932 On the other hand, external scheduling guarantees that while routine @P@ is waiting, no other routine than @V@ can acquire the monitor.
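A \CFA rendition of this semaphore, as a sketch mirroring the \uC version in the commented table:
\begin{cfa}
monitor Semaphore { bool inUse; };
void V( Semaphore & mutex s ) { s.inUse = false; }
void P( Semaphore & mutex s ) {
	if ( s.inUse ) waitfor( V, s );	$\C{// only V may enter while P waits}$
	s.inUse = true;
}
\end{cfa}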
    1933 
    1934 % ======================================================================
    1935 % ======================================================================
    1936 \subsection{Loose Object Definitions}
    1937 % ======================================================================
    1938 % ======================================================================
    1939 In \uC, a monitor class declaration includes an exhaustive list of monitor operations.
    1940 Since \CFA is not object oriented, monitors become both more difficult to implement and less clear for a user:
    1941 
    1942 \begin{cfa}
    1943 monitor A {};
    1944 
    1945 void f(A & mutex a);
    1946 void g(A & mutex a) {
    1947         waitfor(f); // Obvious which f() to wait for
    1948 }
    1949 
    1950 void f(A & mutex a, int); // new, different f added in scope
    1951 void h(A & mutex a) {
    1952         waitfor(f); // Less obvious which f() to wait for
    1953 }
    1954 \end{cfa}
    1955 
    1956 Furthermore, external scheduling is an example where implementation constraints become visible from the interface.
    1957 Here is pseudo-code for the entering phase of a monitor:
    1958 \begin{center}
    1959 \begin{tabular}{l}
    1960 \begin{cfa}
    1961         if monitor is free
    1962                 enter
    1963         elif already own the monitor
    1964                 continue
    1965         elif monitor accepts me
    1966                 enter
    1967         else
    1968                 block
    1969 \end{cfa}
    1970 \end{tabular}
    1971 \end{center}
    1972 For the first two conditions, it is easy to implement a check that can evaluate the condition in a few instructions.
    1973 However, a fast check for @monitor accepts me@ is much harder to implement depending on the constraints put on the monitors.
    1974 Indeed, monitors are often expressed as an entry queue and some acceptor queue as in Figure~\ref{fig:ClassicalMonitor}.
     2100\end{cquote}
      2101For @wait( e )@, the default semantics is to atomically block the waiting thread and release all acquired mutex parameters, \ie @wait( e, m1, m2 )@.
      2102To override the implicit multi-monitor wait, specific mutex parameter(s) can be specified, \eg @wait( e, m1 )@.
      2103Wait cannot statically verify that the released monitors are the acquired mutex parameters without disallowing separately compiled helper functions that call @wait@.
     2104While \CC supports bulk locking, @wait@ only accepts a single lock for a condition variable, so bulk locking with condition variables is asymmetric.
     2105Finally, a signaller,
     2106\begin{cfa}
     2107void baz( M & mutex m1, M & mutex m2 ) {
     2108        ... signal( e ); ...
     2109}
     2110\end{cfa}
     2111must have acquired at least the same locks as the waiting thread signalled from a condition queue to allow the locks to be passed, and hence, prevent barging.
     2112
     2113Similarly, for @waitfor( rtn )@, the default semantics is to atomically block the acceptor and release all acquired mutex parameters, \ie @waitfor( rtn, m1, m2 )@.
     2114To override the implicit multi-monitor wait, specific mutex parameter(s) can be specified, \eg @waitfor( rtn, m1 )@.
      2115@waitfor@ does statically verify that the monitor types passed are the same as the acquired mutex parameters of the given function or function pointer; hence, the function (pointer) prototype must be accessible.
     2116% When an overloaded function appears in an @waitfor@ statement, calls to any function with that name are accepted.
     2117% The rationale is that members with the same name should perform a similar function, and therefore, all should be eligible to accept a call.
     2118Overloaded functions can be disambiguated using a cast
     2119\begin{cfa}
     2120void rtn( M & mutex m );
     2121`int` rtn( M & mutex m );
     2122waitfor( (`int` (*)( M & mutex ))rtn, m );
     2123\end{cfa}
     2124
     2125The ability to release a subset of acquired monitors can result in a \newterm{nested monitor}~\cite{Lister77} deadlock.
     2126\begin{cfa}
      2127void foo( M & mutex m1, M & mutex m2 ) {
      2128        ... wait( `e, m1` ); ...                                $\C{// release m1, keeping m2 acquired}$
}
      2129void bar( M & mutex m1, M & mutex m2 ) {        $\C{// must acquire m1 and m2}$
      2130        ... signal( `e` ); ...
}
      2131\end{cfa}
      2132The @wait@ only releases @m1@, keeping @m2@ acquired, so the signalling thread cannot acquire both @m1@ and @m2@ to enter @bar@ and @signal@ the condition.
     2133While deadlock can occur with multiple/nesting acquisition, this is a consequence of locks, and by extension monitors, not being perfectly composable.
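The deadlock is avoided if the waiter releases its entire monitor set, \eg via the default semantics (a sketch using the names from the example above):
\begin{cfa}
void foo( M & mutex m1, M & mutex m2 ) {
	wait( e );	$\C{// default: release both m1 and m2}$
}	$\C{// signaller can now acquire m1 and m2 to enter bar}$
\end{cfa}
The cost is that the waiter also gives up @m2@, so any invariant spanning both monitors must be re-established when it resumes.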
     2134
     2135
     2136
     2137\subsection{\texorpdfstring{Extended \protect\lstinline@waitfor@}{Extended waitfor}}
     2138
     2139Figure~\ref{f:ExtendedWaitfor} shows the extended form of the @waitfor@ statement to conditionally accept one of a group of mutex functions, with an optional statement to be performed \emph{after} the mutex function finishes.
     2140For a @waitfor@ clause to be executed, its @when@ must be true and an outstanding call to its corresponding member(s) must exist.
     2141The \emph{conditional-expression} of a @when@ may call a function, but the function must not block or context switch.
     2142If there are multiple acceptable mutex calls, selection occurs top-to-bottom (prioritized) among the @waitfor@ clauses, whereas some programming languages with similar mechanisms accept nondeterministically for this case, \eg Go \lstinline[morekeywords=select]@select@.
     2143If some accept guards are true and there are no outstanding calls to these members, the acceptor is blocked until a call to one of these members is made.
     2144If there is a @timeout@ clause, it provides an upper bound on waiting.
     2145If all the accept guards are false, the statement does nothing, unless there is a terminating @else@ clause with a true guard, which is executed instead.
     2146Hence, the terminating @else@ clause allows a conditional attempt to accept a call without blocking.
      2147If both @timeout@ and @else@ clauses are present, the @else@ must be conditional, or the @timeout@ is never triggered.
      2148There is also a traditional future wait-queue construct (not shown), \eg Microsoft's @WaitForMultipleObjects@, to wait for a specified number of future elements in the queue.
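For example, the following sketch (reusing the bounded-buffer names appearing later, with @patient@ as a hypothetical flag) combines guarded accepts, a timeout, and a conditional @else@:
\begin{cfa}
when ( count != 20 ) waitfor( insert, buffer ) { ... }	$\C{// accept insert if not full}$
or when ( count != 0 ) waitfor( remove, buffer ) { ... }	$\C{// accept remove if not empty}$
or when ( patient ) timeout( 10`s ) { ... }	$\C{// upper bound on waiting}$
or when ( ! patient ) else { ... }	$\C{// conditional, so the timeout can trigger}$
\end{cfa}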
    19752149
    19762150\begin{figure}
    19772151\centering
    1978 \subfloat[Classical Monitor] {
    1979 \label{fig:ClassicalMonitor}
    1980 {\resizebox{0.45\textwidth}{!}{\input{monitor}}}
    1981 }% subfloat
    1982 \qquad
    1983 \subfloat[bulk acquire Monitor] {
    1984 \label{fig:BulkMonitor}
    1985 {\resizebox{0.45\textwidth}{!}{\input{ext_monitor}}}
    1986 }% subfloat
    1987 \caption{External Scheduling Monitor}
     2152\begin{cfa}
     2153`when` ( $\emph{conditional-expression}$ )      $\C{// optional guard}$
     2154        waitfor( $\emph{mutex-member-name}$ ) $\emph{statement}$ $\C{// action after call}$
     2155`or` `when` ( $\emph{conditional-expression}$ ) $\C{// any number of functions}$
     2156        waitfor( $\emph{mutex-member-name}$ ) $\emph{statement}$
     2157`or`    ...
     2158`when` ( $\emph{conditional-expression}$ ) $\C{// optional guard}$
     2159        `timeout` $\emph{statement}$ $\C{// optional terminating timeout clause}$
     2160`when` ( $\emph{conditional-expression}$ ) $\C{// optional guard}$
     2161        `else`  $\emph{statement}$ $\C{// optional terminating clause}$
     2162\end{cfa}
     2163\caption{Extended \protect\lstinline@waitfor@}
     2164\label{f:ExtendedWaitfor}
    19882165\end{figure}
    19892166
    1990 There are other alternatives to these pictures, but in the case of the left picture, implementing a fast accept check is relatively easy.
    1991 Restricted to a fixed number of mutex members, N, the accept check reduces to updating a bitmask when the acceptor queue changes, a check that executes in a single instruction even with a fairly large number (\eg 128) of mutex members.
    1992 This approach requires a unique dense ordering of routines with an upper-bound and that ordering must be consistent across translation units.
     1993 For OO languages these constraints are common, since objects can only add member routines consistently across translation units via inheritance.
     1994 However, in \CFA users can extend objects with mutex routines that are only visible in certain translation units.
     1995 This means that establishing a program-wide dense ordering among mutex routines can only be done in the program linking phase, and still could have issues when using dynamically shared objects.
    1996 
    1997 The alternative is to alter the implementation as in Figure~\ref{fig:BulkMonitor}.
    1998 Here, the mutex routine called is associated with a thread on the entry queue while a list of acceptable routines is kept separate.
    1999 Generating a mask dynamically means that the storage for the mask information can vary between calls to @waitfor@, allowing for more flexibility and extensions.
    2000 Storing an array of accepted routine pointers replaces the single instruction bitmask comparison with dereferencing a pointer followed by a linear search.
    2001 Furthermore, supporting nested external scheduling (\eg listing \ref{f:nest-ext}) may now require additional searches for the @waitfor@ statement to check if a routine is already queued.
     2167Note, a group of conditional @waitfor@ clauses is \emph{not} the same as a group of @if@ statements, \eg:
     2168\begin{cfa}
     2169if ( C1 ) waitfor( mem1 );                       when ( C1 ) waitfor( mem1 );
     2170else if ( C2 ) waitfor( mem2 );         or when ( C2 ) waitfor( mem2 );
     2171\end{cfa}
     2172The left example only accepts @mem1@ if @C1@ is true or only @mem2@ if @C2@ is true.
     2173The right example accepts either @mem1@ or @mem2@ if @C1@ and @C2@ are true.
     2174
      2175An interesting use of @waitfor@ is accepting the @mutex@ destructor to know when an object is deallocated, \eg assume the bounded buffer is restructured from a monitor to a thread with the following @main@.
     2176\begin{cfa}
     2177void main( Buffer(T) & buffer ) with(buffer) {
     2178        for () {
     2179                `waitfor( ^?{}, buffer )` break;
     2180                or when ( count != 20 ) waitfor( insert, buffer ) { ... }
     2181                or when ( count != 0 ) waitfor( remove, buffer ) { ... }
     2182        }
     2183        // clean up
     2184}
     2185\end{cfa}
     2186When the program main deallocates the buffer, it first calls the buffer's destructor, which is accepted, the destructor runs, and the buffer is deallocated.
     2187However, the buffer thread cannot continue after the destructor call because the object is gone;
     2188hence, clean up in @main@ cannot occur, which means destructors for local objects are not run.
     2189To make this useful capability work, the semantics for accepting the destructor is the same as @signal@, \ie the destructor call is placed on urgent and the acceptor continues execution, which ends the loop, cleans up, and the thread terminates.
     2190Then, the destructor caller unblocks from urgent to deallocate the object.
     2191Accepting the destructor is the idiomatic way in \CFA to terminate a thread performing direct communication.
     2192
     2193
     2194\subsection{Bulk Barging Prevention}
     2195
     2196Figure~\ref{f:BulkBargingPrevention} shows \CFA code where bulk acquire adds complexity to the internal-signalling semantics.
     2197The complexity begins at the end of the inner @mutex@ statement, where the semantics of internal scheduling need to be extended for multiple monitors.
     2198The problem is that bulk acquire is used in the inner @mutex@ statement where one of the monitors is already acquired.
     2199When the signalling thread reaches the end of the inner @mutex@ statement, it should transfer ownership of @m1@ and @m2@ to the waiting threads to prevent barging into the outer @mutex@ statement by another thread.
      2200However, the signalling thread S and the waiting threads W1 and W2 each need some subset of monitors @m1@ and @m2@.
     2201\begin{cquote}
     2202condition c: (order 1) W2(@m2@), W1(@m1@,@m2@)\ \ \ or\ \ \ (order 2) W1(@m1@,@m2@), W2(@m2@) \\
     2203S: acq. @m1@ $\rightarrow$ acq. @m1,m2@ $\rightarrow$ @signal(c)@ $\rightarrow$ rel. @m2@ $\rightarrow$ pass @m2@ unblock W2 (order 2) $\rightarrow$ rel. @m1@ $\rightarrow$ pass @m1,m2@ unblock W1 \\
     2204\hspace*{2.75in}$\rightarrow$ rel. @m1@ $\rightarrow$ pass @m1,m2@ unblock W1 (order 1)
     2205\end{cquote}
    20022206
    20032207\begin{figure}
    2004 \begin{cfa}[caption={Example of nested external scheduling},label={f:nest-ext}]
    2005 monitor M {};
    2006 void foo( M & mutex a ) {}
    2007 void bar( M & mutex b ) {
    2008         // Nested in the waitfor(bar, c) call
    2009         waitfor(foo, b);
    2010 }
    2011 void baz( M & mutex c ) {
    2012         waitfor(bar, c);
    2013 }
    2014 
    2015 \end{cfa}
     2208\newbox\myboxA
     2209\begin{lrbox}{\myboxA}
     2210\begin{cfa}[aboveskip=0pt,belowskip=0pt]
     2211monitor M m1, m2;
     2212condition c;
     2213mutex( m1 ) { // $\LstCommentStyle{\color{red}outer}$
     2214        ...
     2215        mutex( m1, m2 ) { // $\LstCommentStyle{\color{red}inner}$
     2216                ... `signal( c )`; ...
     2217                // m1, m2 still acquired
     2218        } // $\LstCommentStyle{\color{red}release m2}$
     2219        // m1 acquired
     2220} // release m1
     2221\end{cfa}
     2222\end{lrbox}
     2223
     2224\newbox\myboxB
     2225\begin{lrbox}{\myboxB}
     2226\begin{cfa}[aboveskip=0pt,belowskip=0pt]
     2227
     2228
     2229mutex( m1 ) {
     2230        ...
     2231        mutex( m1, m2 ) {
     2232                ... `wait( c )`; // release m1, m2
     2233                // m1, m2 reacquired
     2234        } // $\LstCommentStyle{\color{red}release m2}$
     2235        // m1 acquired
     2236} // release m1
     2237\end{cfa}
     2238\end{lrbox}
     2239
     2240\newbox\myboxC
     2241\begin{lrbox}{\myboxC}
     2242\begin{cfa}[aboveskip=0pt,belowskip=0pt]
     2243
     2244
     2245mutex( m2 ) {
     2246        ... `wait( c )`; // release m2
     2247        // m2 reacquired
     2248} // $\LstCommentStyle{\color{red}release m2}$
     2249
     2250
     2251
     2252
     2253\end{cfa}
     2254\end{lrbox}
     2255
     2256\begin{cquote}
     2257\subfloat[Signalling Thread (S)]{\label{f:SignallingThread}\usebox\myboxA}
     2258\hspace{3\parindentlnth}
     2259\subfloat[Waiting Thread (W1)]{\label{f:WaitingThread}\usebox\myboxB}
     2260\hspace{2\parindentlnth}
     2261\subfloat[Waiting Thread (W2)]{\label{f:OtherWaitingThread}\usebox\myboxC}
     2262\end{cquote}
     2263\caption{Bulk Barging Prevention}
     2264\label{f:BulkBargingPrevention}
    20162265\end{figure}
    20172266
    2018 Note that in the right picture, tasks need to always keep track of the monitors associated with mutex routines, and the routine mask needs to have both a routine pointer and a set of monitors, as is discussed in the next section.
    2019 These details are omitted from the picture for the sake of simplicity.
    2020 
    2021 At this point, a decision must be made between flexibility and performance.
    2022 Many design decisions in \CFA achieve both flexibility and performance, for example polymorphic routines add significant flexibility but inlining them means the optimizer can easily remove any runtime cost.
    2023 Here, however, the cost of flexibility cannot be trivially removed.
    2024 In the end, the most flexible approach has been chosen since it allows users to write programs that would otherwise be  hard to write.
    2025 This decision is based on the assumption that writing fast but inflexible locks is closer to a solved problem than writing locks that are as flexible as external scheduling in \CFA.
    2026 
    2027 % ======================================================================
    2028 % ======================================================================
      2267One scheduling solution is for the signaller S to keep ownership of all locks until the last lock is ready to be transferred, because this semantics most closely fits the behaviour of single-monitor scheduling.
      2268However, this solution is inefficient if W2 waited first and can be immediately passed @m2@ when it is released, while S retains @m1@ until completion of the outer mutex statement.
      2269If W1 waited first, the signaller must retain @m1@ and @m2@ until completion of the outer mutex statement and then pass both to W1.
     2270% Furthermore, there is an execution sequence where the signaller always finds waiter W2, and hence, waiter W1 starves.
     2271To support this efficient semantics (and prevent barging), the implementation maintains a list of monitors acquired for each blocked thread.
     2272When a signaller exits or waits in a monitor function/statement, the front waiter on urgent is unblocked if all its monitors are released.
     2273Implementing a fast subset check for the necessary released monitors is important.
     2274% The benefit is encapsulating complexity into only two actions: passing monitors to the next owner when they should be released and conditionally waking threads if all conditions are met.
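The following sketch outlines this check (hypothetical names and layout, not the actual runtime data structures): each blocked thread records its monitor set, and the front urgent waiter is unblocked only when its entire set is free.
\begin{cfa}
#include <stdbool.h>
struct waiter {
	struct monitor ** mons;  int cnt;	$\C{// monitor set the thread blocked with}$
};
extern struct waiter * urgent_front( void );	$\C{// hypothetical runtime queries}$
extern bool is_released( struct monitor * );
extern void unblock( struct waiter * );

void try_wake_urgent( void ) {	$\C{// called when a signaller exits or waits}$
	struct waiter * w = urgent_front();
	if ( w == 0 ) return;
	for ( int i = 0; i < w->cnt; i += 1 )
		if ( ! is_released( w->mons[i] ) ) return;	$\C{// a monitor is still owned}$
	unblock( w );	$\C{// pass ownership of its whole monitor set}$
}
\end{cfa}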
     2275
     2276
     2277\subsection{Loose Object Definitions}
     2278\label{s:LooseObjectDefinitions}
     2279
     2280In an object-oriented programming language, a class includes an exhaustive list of operations.
     2281A new class can add members via static inheritance but the subclass still has an exhaustive list of operations.
     2282(Dynamic member adding, \eg JavaScript~\cite{JavaScript}, is not considered.)
     2283In the object-oriented scenario, the type and all its operators are always present at compilation (even separate compilation), so it is possible to number the operations in a bit mask and use an $O(1)$ compare with a similar bit mask created for the operations specified in a @waitfor@.
     2284
     2285However, in \CFA, monitor functions can be statically added/removed in translation units, making a fast subset check difficult.
     2286\begin{cfa}
     2287        monitor M { ... }; // common type, included in .h file
     2288translation unit 1
     2289        void `f`( M & mutex m );
     2290        void g( M & mutex m ) { waitfor( `f`, m ); }
     2291translation unit 2
     2292        void `f`( M & mutex m ); $\C{// replacing f and g for type M in this translation unit}$
     2293        void `g`( M & mutex m );
     2294        void h( M & mutex m ) { waitfor( `f`, m ) or waitfor( `g`, m ); } $\C{// extending type M in this translation unit}$
     2295\end{cfa}
     2296The @waitfor@ statements in each translation unit cannot form a unique bit-mask because the monitor type does not carry that information.
     2297Hence, function pointers are used to identify the functions listed in the @waitfor@ statement, stored in a variable-sized array.
     2298Then, the same implementation approach used for the urgent stack is used for the calling queue.
      2299Each caller has a list of monitors acquired, and the @waitfor@ statement performs a (usually short) linear search matching functions in the @waitfor@ list with called functions, and then verifying the associated mutex locks can be transferred.
     2300(A possible way to construct a dense mapping is at link or load-time.)
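A sketch of the resulting check (hypothetical names; the actual runtime layout differs): the @waitfor@ clauses form a stack-allocated array, and an arriving call matches by function pointer plus a monitor-subset test.
\begin{cfa}
struct accept_clause {
	void (* func)( void );	$\C{// accepted function, identified by pointer}$
	struct monitor ** mons;  int cnt;	$\C{// monitor set named in the clause}$
};
int match_call( struct accept_clause * accepts, int n,
				void (* called)( void ), struct monitor ** held, int hcnt ) {
	for ( int i = 0; i < n; i += 1 ) {	$\C{// usually short linear search}$
		if ( accepts[i].func != called ) continue;
		int found = 0;
		for ( int j = 0; j < accepts[i].cnt; j += 1 )
			for ( int k = 0; k < hcnt; k += 1 )
				if ( accepts[i].mons[j] == held[k] ) { found += 1; break; }
		if ( found == accepts[i].cnt ) return i;	$\C{// index of matched clause}$
	}
	return -1;	$\C{// no match: caller blocks on the entry queue}$
}
\end{cfa}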
     2301
     2302
    20292303\subsection{Multi-Monitor Scheduling}
    2030 % ======================================================================
    2031 % ======================================================================
    2032 
    2033 External scheduling, like internal scheduling, becomes significantly more complex when introducing multi-monitor syntax.
    2034 Even in the simplest possible case, some new semantics needs to be established:
    2035 \begin{cfa}
    2036 monitor M {};
    2037 
    2038 void f(M & mutex a);
    2039 
    2040 void g(M & mutex b, M & mutex c) {
    2041         waitfor(f); // two monitors M => unknown which to pass to f(M & mutex)
    2042 }
    2043 \end{cfa}
    2044 The obvious solution is to specify the correct monitor as follows:
    2045 
    2046 \begin{cfa}
    2047 monitor M {};
    2048 
    2049 void f(M & mutex a);
    2050 
    2051 void g(M & mutex a, M & mutex b) {
    2052         // wait for call to f with argument b
    2053         waitfor(f, b);
    2054 }
    2055 \end{cfa}
    2056 This syntax is unambiguous.
    2057 Both locks are acquired and kept by @g@.
    2058 When routine @f@ is called, the lock for monitor @b@ is temporarily transferred from @g@ to @f@ (while @g@ still holds lock @a@).
    2059 This behaviour can be extended to the multi-monitor @waitfor@ statement as follows.
    2060 
    2061 \begin{cfa}
    2062 monitor M {};
    2063 
    2064 void f(M & mutex a, M & mutex b);
    2065 
    2066 void g(M & mutex a, M & mutex b) {
    2067         // wait for call to f with arguments a and b
    2068         waitfor(f, a, b);
    2069 }
    2070 \end{cfa}
    2071 
    2072 Note that the set of monitors passed to the @waitfor@ statement must be entirely contained in the set of monitors already acquired in the routine. @waitfor@ used in any other context is undefined behaviour.
    2073 
    2074 An important behaviour to note is when a set of monitors only match partially:
    2075 
    2076 \begin{cfa}
    2077 mutex struct A {};
    2078 
    2079 mutex struct B {};
    2080 
    2081 void g(A & mutex a, B & mutex b) {
    2082         waitfor(f, a, b);
    2083 }
    2084 
    2085 A a1, a2;
    2086 B b;
    2087 
    2088 void foo() {
    2089         g(a1, b); // block on accept
    2090 }
    2091 
    2092 void bar() {
    2093         f(a2, b); // fulfill cooperation
    2094 }
    2095 \end{cfa}
    2096 While the equivalent can happen when using internal scheduling, the fact that conditions are specific to a set of monitors means that users have to use two different condition variables.
    2097 In both cases, partially matching monitor sets does not wakeup the waiting thread.
    2098 It is also important to note that in the case of external scheduling the order of parameters is irrelevant; @waitfor(f,a,b)@ and @waitfor(f,b,a)@ are indistinguishable waiting condition.
    2099 
    2100 % ======================================================================
    2101 % ======================================================================
    2102 \subsection{\protect\lstinline|waitfor| Semantics}
    2103 % ======================================================================
    2104 % ======================================================================
    2105 
    2106 Syntactically, the @waitfor@ statement takes a routine identifier and a set of monitors.
     2107 While the set of monitors can be any list of expressions, the routine name is more restricted because the compiler validates, at compile time, the routine type and the parameters used with the @waitfor@ statement.
     2108 It checks that the set of monitors passed in matches the requirements for a routine call.
     2109 Figure~\ref{f:waitfor} shows various correct and incorrect usages of the @waitfor@ statement.
    2110 The choice of the routine type is made ignoring any non-@mutex@ parameter.
    2111 One limitation of the current implementation is that it does not handle overloading, but overloading is possible.
     2304\label{s:Multi-MonitorScheduling}
     2305
     2306External scheduling, like internal scheduling, becomes significantly more complex for multi-monitor semantics.
     2307Even in the simplest case, new semantics need to be established.
     2308\begin{cfa}
     2309monitor M { ... };
     2310void f( M & mutex m1 );
     2311void g( M & mutex m1, M & mutex m2 ) { `waitfor( f );` } $\C{// pass m1 or m2 to f?}$
     2312\end{cfa}
     2313The solution is for the programmer to disambiguate:
     2314\begin{cfa}
     2315waitfor( f, `m2` ); $\C{// wait for call to f with argument m2}$
     2316\end{cfa}
     2317Both locks are acquired by function @g@, so when function @f@ is called, the lock for monitor @m2@ is passed from @g@ to @f@, while @g@ still holds lock @m1@.
     2318This behaviour can be extended to the multi-monitor @waitfor@ statement.
     2319\begin{cfa}
     2320monitor M { ... };
     2321void f( M & mutex m1, M & mutex m2 );
      2322void g( M & mutex m1, M & mutex m2 ) { waitfor( f, `m1, m2` ); } $\C{// wait for call to f with arguments m1 and m2}$
     2323\end{cfa}
     2324Again, the set of monitors passed to the @waitfor@ statement must be entirely contained in the set of monitors already acquired by the accepting function.
     2325Also, the order of the monitors in a @waitfor@ statement is unimportant.
     2326
      2327Figure~\ref{f:UnmatchedMutexSets} shows an example where, for internal and external scheduling with multiple monitors, the monitor set of a signalling or accepting thread must exactly match the waiting thread's monitor set, \ie partial matching results in waiting.
      2328For both examples, the monitor sets do not match, so unblocking is impossible.
     2329
    21122330\begin{figure}
    2113 \begin{cfa}[caption={Various correct and incorrect uses of the waitfor statement},label={f:waitfor}]
    2114 monitor A{};
    2115 monitor B{};
    2116 
    2117 void f1( A & mutex );
    2118 void f2( A & mutex, B & mutex );
    2119 void f3( A & mutex, int );
    2120 void f4( A & mutex, int );
    2121 void f4( A & mutex, double );
    2122 
    2123 void foo( A & mutex a1, A & mutex a2, B & mutex b1, B & b2 ) {
    2124         A * ap = & a1;
    2125         void (*fp)( A & mutex ) = f1;
    2126 
    2127         waitfor(f1, a1);     // Correct : 1 monitor case
    2128         waitfor(f2, a1, b1); // Correct : 2 monitor case
    2129         waitfor(f3, a1);     // Correct : non-mutex arguments are ignored
    2130         waitfor(f1, *ap);    // Correct : expression as argument
    2131 
    2132         waitfor(f1, a1, b1); // Incorrect : Too many mutex arguments
    2133         waitfor(f2, a1);     // Incorrect : Too few mutex arguments
    2134         waitfor(f2, a1, a2); // Incorrect : Mutex arguments don't match
    2135         waitfor(f1, 1);      // Incorrect : 1 not a mutex argument
    2136         waitfor(f9, a1);     // Incorrect : f9 routine does not exist
    2137         waitfor(*fp, a1 );   // Incorrect : fp not an identifier
    2138         waitfor(f4, a1);     // Incorrect : f4 ambiguous
    2139 
    2140         waitfor(f2, a1, b2); // Undefined behaviour : b2 not mutex
    2141 }
    2142 \end{cfa}
     2331\centering
     2332\begin{lrbox}{\myboxA}
     2333\begin{cfa}[aboveskip=0pt,belowskip=0pt]
     2334monitor M1 {} m11, m12;
     2335monitor M2 {} m2;
     2336condition c;
     2337void f( M1 & mutex m1, M2 & mutex m2 ) {
     2338        signal( c );
     2339}
     2340void g( M1 & mutex m1, M2 & mutex m2 ) {
     2341        wait( c );
     2342}
     2343g( `m11`, m2 ); // block on wait
     2344f( `m12`, m2 ); // cannot fulfil
     2345\end{cfa}
     2346\end{lrbox}
     2347
     2348\begin{lrbox}{\myboxB}
     2349\begin{cfa}[aboveskip=0pt,belowskip=0pt]
     2350monitor M1 {} m11, m12;
     2351monitor M2 {} m2;
     2352
     2353void f( M1 & mutex m1, M2 & mutex m2 ) {
     2354
     2355}
     2356void g( M1 & mutex m1, M2 & mutex m2 ) {
     2357        waitfor( f, m1, m2 );
     2358}
     2359g( `m11`, m2 ); // block on accept
     2360f( `m12`, m2 ); // cannot fulfil
     2361\end{cfa}
     2362\end{lrbox}
     2363\subfloat[Internal scheduling]{\label{f:InternalScheduling}\usebox\myboxA}
     2364\hspace{3pt}
     2365\vrule
     2366\hspace{3pt}
     2367\subfloat[External scheduling]{\label{f:ExternalScheduling}\usebox\myboxB}
     2368\caption{Unmatched \protect\lstinline@mutex@ sets}
     2369\label{f:UnmatchedMutexSets}
    21432370\end{figure}
    21442371
     2145 Finally, for added flexibility, \CFA supports constructing a complex @waitfor@ statement using @or@, @timeout@ and @else@ clauses.
     2146 Indeed, multiple @waitfor@ clauses can be chained together using @or@; this chain forms a single statement that uses baton passing to any routine that fits one of the routine-and-monitor sets passed in.
    2147 To enable users to tell which accepted routine executed, @waitfor@s are followed by a statement (including the null statement @;@) or a compound statement, which is executed after the clause is triggered.
     2148 A @waitfor@ chain can also be followed by a @timeout@, to signify an upper bound on the wait, or an @else@, to signify that the call should be non-blocking, \ie check for a matching routine call that has already arrived and otherwise continue.
    2149 Any and all of these clauses can be preceded by a @when@ condition to dynamically toggle the accept clauses on or off based on some current state.
    2150 Figure~\ref{f:waitfor2} demonstrates several complex masks and some incorrect ones.
     2372
     2373\subsection{\texorpdfstring{\protect\lstinline@mutex@ Threads}{mutex Threads}}
     2374
     2375Threads in \CFA can also be monitors to allow \emph{direct communication} among threads, \ie threads can have mutex functions that are called by other threads.
     2376Hence, all monitor features are available when using threads.
     2377Figure~\ref{f:DirectCommunication} shows a comparison of direct call communication in \CFA with direct channel communication in Go.
     2378(Ada provides a similar mechanism to the \CFA direct communication.)
      2379In both programs, the program main communicates directly with the other thread, versus indirect communication where two threads interact through a passive monitor.
      2380Both direct and indirect thread communication are valuable tools for structuring concurrent programs.
    21512381
    21522382\begin{figure}
    2153 \lstset{language=CFA,deletedelim=**[is][]{`}{`}}
    2154 \begin{cfa}
    2155 monitor A{};
    2156 
    2157 void f1( A & mutex );
    2158 void f2( A & mutex );
    2159 
    2160 void foo( A & mutex a, bool b, int t ) {
    2161         waitfor(f1, a);                                                 $\C{// Correct : blocking case}$
    2162 
    2163         waitfor(f1, a) {                                                $\C{// Correct : block with statement}$
    2164                 sout | "f1" | endl;
     2383\centering
     2384\begin{lrbox}{\myboxA}
     2385\begin{cfa}[aboveskip=0pt,belowskip=0pt]
     2386
     2387struct Msg { int i, j; };
     2388thread GoRtn { int i;  float f;  Msg m; };
     2389void mem1( GoRtn & mutex gortn, int i ) { gortn.i = i; }
     2390void mem2( GoRtn & mutex gortn, float f ) { gortn.f = f; }
     2391void mem3( GoRtn & mutex gortn, Msg m ) { gortn.m = m; }
     2392void ^?{}( GoRtn & mutex ) {}
     2393
     2394void main( GoRtn & gortn ) with( gortn ) {  // thread starts
     2395
     2396        for () {
     2397
     2398                `waitfor( mem1, gortn )` sout | i;  // wait for calls
     2399                or `waitfor( mem2, gortn )` sout | f;
     2400                or `waitfor( mem3, gortn )` sout | m.i | m.j;
     2401                or `waitfor( ^?{}, gortn )` break;
     2402
    21652403        }
    2166         waitfor(f1, a) {                                                $\C{// Correct : block waiting for f1 or f2}$
    2167                 sout | "f1" | endl;
    2168         } or waitfor(f2, a) {
    2169                 sout | "f2" | endl;
     2404
     2405}
     2406int main() {
     2407        GoRtn gortn; $\C[2.0in]{// start thread}$
     2408        `mem1( gortn, 0 );` $\C{// different calls}\CRT$
     2409        `mem2( gortn, 2.5 );`
     2410        `mem3( gortn, (Msg){1, 2} );`
     2411
     2412
     2413} // wait for completion
     2414\end{cfa}
     2415\end{lrbox}
     2416
     2417\begin{lrbox}{\myboxB}
     2418\begin{Go}[aboveskip=0pt,belowskip=0pt]
     2419func main() {
     2420        type Msg struct{ i, j int }
     2421
     2422        ch1 := make( chan int )
     2423        ch2 := make( chan float32 )
     2424        ch3 := make( chan Msg )
     2425        hand := make( chan string )
     2426        shake := make( chan string )
     2427        gortn := func() { $\C[1.5in]{// thread starts}$
     2428                var i int;  var f float32;  var m Msg
     2429                L: for {
     2430                        select { $\C{// wait for messages}$
     2431                          case `i = <- ch1`: fmt.Println( i )
     2432                          case `f = <- ch2`: fmt.Println( f )
     2433                          case `m = <- ch3`: fmt.Println( m )
     2434                          case `<- hand`: break L $\C{// sentinel}$
     2435                        }
     2436                }
     2437                `shake <- "SHAKE"` $\C{// completion}$
    21702438        }
    2171         waitfor(f1, a); or else;                                $\C{// Correct : non-blocking case}$
    2172 
    2173         waitfor(f1, a) {                                                $\C{// Correct : non-blocking case}$
    2174                 sout | "blocked" | endl;
    2175         } or else {
    2176                 sout | "didn't block" | endl;
     2439
     2440        go gortn() $\C{// start thread}$
     2441        `ch1 <- 0` $\C{// different messages}$
     2442        `ch2 <- 2.5`
     2443        `ch3 <- Msg{1, 2}`
     2444        `hand <- "HAND"` $\C{// sentinel value}$
     2445        `<- shake` $\C{// wait for completion}\CRT$
     2446}
     2447\end{Go}
     2448\end{lrbox}
     2449
     2450\subfloat[\CFA]{\label{f:CFAwaitfor}\usebox\myboxA}
     2451\hspace{3pt}
     2452\vrule
     2453\hspace{3pt}
     2454\subfloat[Go]{\label{f:Gochannel}\usebox\myboxB}
     2455\caption{Direct communication}
     2456\label{f:DirectCommunication}
     2457\end{figure}
     2458
     2459\begin{comment}
     2460The following shows an example of two threads directly calling each other and accepting calls from each other in a cycle.
     2461\begin{cfa}
     2462\end{cfa}
     2463\vspace{-0.8\baselineskip}
     2464\begin{cquote}
     2465\begin{tabular}{@{}l@{\hspace{3\parindentlnth}}l@{}}
     2466\begin{cfa}
     2467thread Ping {} pi;
     2468void ping( Ping & mutex ) {}
     2469void main( Ping & pi ) {
     2470        for ( 10 ) {
     2471                `waitfor( ping, pi );`
     2472                `pong( po );`
    21772473        }
    2178         waitfor(f1, a) {                                                $\C{// Correct : block at most 10 seconds}$
    2179                 sout | "blocked" | endl;
    2180         } or timeout( 10`s) {
    2181                 sout | "didn't block" | endl;
     2474}
     2475int main() {}
     2476\end{cfa}
     2477&
     2478\begin{cfa}
     2479thread Pong {} po;
     2480void pong( Pong & mutex ) {}
     2481void main( Pong & po ) {
     2482        for ( 10 ) {
     2483                `ping( pi );`
     2484                `waitfor( pong, po );`
    21822485        }
     2183         // Correct : block only if b == true; if b == false, don't even make the call
     2184         when(b) waitfor(f1, a);
     2185 
     2186         // Correct : block only if b == true; if b == false, make non-blocking call
     2187         waitfor(f1, a); or when(!b) else;
     2188 
     2189         // Correct : block only if t > 1
    2190         waitfor(f1, a); or when(t > 1) timeout(t); or else;
    2191 
    2192         // Incorrect : timeout clause is dead code
    2193         waitfor(f1, a); or timeout(t); or else;
    2194 
    2195         // Incorrect : order must be waitfor [or waitfor... [or timeout] [or else]]
    2196         timeout(t); or waitfor(f1, a); or else;
    2197 }
    2198 \end{cfa}
    2199 \caption{Correct and incorrect uses of the or, else, and timeout clause around a waitfor statement}
    2200 \label{f:waitfor2}
    2201 \end{figure}
    2202 
    2203 % ======================================================================
    2204 % ======================================================================
    2205 \subsection{Waiting For The Destructor}
    2206 % ======================================================================
    2207 % ======================================================================
    2208 An interesting use for the @waitfor@ statement is destructor semantics.
    2209 Indeed, the @waitfor@ statement can accept any @mutex@ routine, which includes the destructor (see section \ref{data}).
    2210 However, with the semantics discussed until now, waiting for the destructor does not make any sense, since using an object after its destructor is called is undefined behaviour.
    2211 The simplest approach is to disallow @waitfor@ on a destructor.
    2212 However, a more expressive approach is to flip ordering of execution when waiting for the destructor, meaning that waiting for the destructor allows the destructor to run after the current @mutex@ routine, similarly to how a condition is signalled.
    2213 \begin{figure}
    2214 \begin{cfa}[caption={Example of an executor which executes action in series until the destructor is called.},label={f:dtor-order}]
    2215 monitor Executer {};
    2216 struct  Action;
    2217 
    2218 void ^?{}   (Executer & mutex this);
    2219 void execute(Executer & mutex this, const Action & );
    2220 void run    (Executer & mutex this) {
    2221         while(true) {
    2222                    waitfor(execute, this);
    2223                 or waitfor(^?{}   , this) {
    2224                         break;
    2225                 }
    2226         }
    2227 }
    2228 \end{cfa}
    2229 \end{figure}
     2230 Listing \ref{f:dtor-order} shows an example of an executor with an infinite loop, which waits for the destructor to break out of this loop.
    2231 Switching the semantic meaning introduces an idiomatic way to terminate a task and/or wait for its termination via destruction.
    2232 
    2233 
    2234 % ######     #    ######     #    #       #       ####### #       ###  #####  #     #
    2235 % #     #   # #   #     #   # #   #       #       #       #        #  #     # ##   ##
    2236 % #     #  #   #  #     #  #   #  #       #       #       #        #  #       # # # #
    2237 % ######  #     # ######  #     # #       #       #####   #        #   #####  #  #  #
    2238 % #       ####### #   #   ####### #       #       #       #        #        # #     #
    2239 % #       #     # #    #  #     # #       #       #       #        #  #     # #     #
    2240 % #       #     # #     # #     # ####### ####### ####### ####### ###  #####  #     #
    2241 \section{Parallelism}
    2242 Historically, computer performance was about processor speeds and instruction counts.
    2243 However, with heat dissipation being a direct consequence of speed increase, parallelism has become the new source for increased performance~\cite{Sutter05, Sutter05b}.
    2244 In this decade, it is no longer reasonable to create a high-performance application without caring about parallelism.
    2245 Indeed, parallelism is an important aspect of performance and more specifically throughput and hardware utilization.
     2246 The lowest-level approach to parallelism is to use \textbf{kthread} in combination with semantics like @fork@, @join@, \etc.
     2247 However, since these have significant costs and limitations, \textbf{kthread} are now mostly used as an implementation tool rather than a user-oriented one.
    2248 There are several alternatives to solve these issues that all have strengths and weaknesses.
    2249 While there are many variations of the presented paradigms, most of these variations do not actually change the guarantees or the semantics, they simply move costs in order to achieve better performance for certain workloads.
    2250 
    2251 \section{Paradigms}
    2252 \subsection{User-Level Threads}
    2253 A direct improvement on the \textbf{kthread} approach is to use \textbf{uthread}.
    2254 These threads offer most of the same features that the operating system already provides but can be used on a much larger scale.
    2255 This approach is the most powerful solution as it allows all the features of multithreading, while removing several of the more expensive costs of kernel threads.
    2256 The downside is that almost none of the low-level threading problems are hidden; users still have to think about data races, deadlocks and synchronization issues.
    2257 These issues can be somewhat alleviated by a concurrency toolkit with strong guarantees, but the parallelism toolkit offers very little to reduce complexity in itself.
    2258 
    2259 Examples of languages that support \textbf{uthread} are Erlang~\cite{Erlang} and \uC~\cite{uC++book}.
    2260 
    2261 \subsection{Fibers : User-Level Threads Without Preemption} \label{fibers}
    2262 A popular variant of \textbf{uthread} is what is often referred to as \textbf{fiber}.
     2263 However, \textbf{fiber} do not present meaningful semantic differences from \textbf{uthread}.
    2264 The significant difference between \textbf{uthread} and \textbf{fiber} is the lack of \textbf{preemption} in the latter.
    2265 Advocates of \textbf{fiber} list their high performance and ease of implementation as major strengths, but the performance difference between \textbf{uthread} and \textbf{fiber} is controversial, and the ease of implementation, while true, is a weak argument in the context of language design.
    2266 Therefore this proposal largely ignores fibers.
    2267 
     2268 An example of a language that uses fibers is Go~\cite{Go}.
    2269 
    2270 \subsection{Jobs and Thread Pools}
    2271 An approach on the opposite end of the spectrum is to base parallelism on \textbf{pool}.
    2272 Indeed, \textbf{pool} offer limited flexibility but at the benefit of a simpler user interface.
    2273 In \textbf{pool} based systems, users express parallelism as units of work, called jobs, and a dependency graph (either explicit or implicit) that ties them together.
     2274 This approach means users need not worry about concurrency but significantly limits the interaction that can occur among jobs.
     2275 Indeed, any \textbf{job} that blocks also blocks the underlying worker, which effectively means the CPU utilization, and therefore throughput, suffers noticeably.
     2276 It can be argued that a solution to this problem is to use more workers than available cores.
     2277 However, unless the number of jobs and the number of workers are comparable, having a significant number of blocked jobs always results in idle cores.
    2278 
    2279 The gold standard of this implementation is Intel's TBB library~\cite{TBB}.
    2280 
    2281 \subsection{Paradigm Performance}
     2282 While the choice among the three paradigms listed above may have significant performance implications, it is difficult to pin down these implications when choosing a model at the language level.
    2283 Indeed, in many situations one of these paradigms may show better performance but it all strongly depends on the workload.
     2284 Having a large number of mostly independent units of work to execute almost guarantees equivalent performance across paradigms and that the \textbf{pool}-based system has the best efficiency thanks to the lower memory overhead (\ie no thread stack per job).
    2285 However, interactions among jobs can easily exacerbate contention.
    2286 User-level threads allow fine-grain context switching, which results in better resource utilization, but a context switch is more expensive and the extra control means users need to tweak more variables to get the desired performance.
     2287 Finally, if the units of uninterrupted work are large enough, the paradigm choice is largely amortized by the actual work done.
    2288 
    2289 \section{The \protect\CFA\ Kernel : Processors, Clusters and Threads}\label{kernel}
    2290 A \textbf{cfacluster} is a group of \textbf{kthread} executed in isolation. \textbf{uthread} are scheduled on the \textbf{kthread} of a given \textbf{cfacluster}, allowing organization between \textbf{uthread} and \textbf{kthread}.
     2291 It is important that \textbf{kthread} belonging to the same \textbf{cfacluster} have homogeneous settings, otherwise migrating a \textbf{uthread} from one \textbf{kthread} to another can cause issues.
    2292 A \textbf{cfacluster} also offers a pluggable scheduler that can optimize the workload generated by the \textbf{uthread}.
    2293 
    2294 \textbf{cfacluster} have not been fully implemented in the context of this paper.
    2295 Currently \CFA only supports one \textbf{cfacluster}, the initial one.
    2296 
    2297 \subsection{Future Work: Machine Setup}\label{machine}
    2298 While this was not done in the context of this paper, another important aspect of clusters is affinity.
    2299 While many common desktop and laptop PCs have homogeneous CPUs, other devices often have more heterogeneous setups.
    2300 For example, a system using \textbf{numa} configurations may benefit from users being able to tie clusters and/or kernel threads to certain CPU cores.
    2301 OS support for CPU affinity is now common~\cite{affinityLinux, affinityWindows, affinityFreebsd, affinityNetbsd, affinityMacosx}, which means it is both possible and desirable for \CFA to offer an abstraction mechanism for portable CPU affinity.
    2302 
    2303 \subsection{Paradigms}\label{cfaparadigms}
    2304 Given these building blocks, it is possible to reproduce all three of the popular paradigms.
    2305 Indeed, \textbf{uthread} is the default paradigm in \CFA.
    2306 However, disabling \textbf{preemption} on a cluster means threads effectively become fibers.
    2307 Since several \textbf{cfacluster} with different scheduling policy can coexist in the same application, this allows \textbf{fiber} and \textbf{uthread} to coexist in the runtime of an application.
    2308 Finally, it is possible to build executors for thread pools from \textbf{uthread} or \textbf{fiber}, which includes specialized jobs like actors~\cite{Actors}.
    2309 
    2310 
    2311 
    2312 \section{Behind the Scenes}
    2313 There are several challenges specific to \CFA when implementing concurrency.
    2314 These challenges are a direct result of bulk acquire and loose object definitions.
    2315 These two constraints are the root cause of most design decisions in the implementation.
    2316 Furthermore, to avoid contention from dynamically allocating memory in a concurrent environment, the internal-scheduling design is (almost) entirely free of mallocs.
    2317 This approach avoids the chicken and egg problem~\cite{Chicken} of having a memory allocator that relies on the threading system and a threading system that relies on the runtime.
    2318 This extra goal means that memory management is a constant concern in the design of the system.
    2319 
    2320 The main memory concern for concurrency is queues.
    2321 All blocking operations are made by parking threads onto queues and all queues are designed with intrusive nodes, where each node has pre-allocated link fields for chaining, to avoid the need for memory allocation.
     2322 Since several concurrency operations can use an unbounded amount of memory (depending on bulk acquire), statically defining information in the intrusive fields of threads is insufficient. The only way to use a variable amount of memory without requiring memory allocation is to pre-allocate large buffers of memory eagerly and store the information in these buffers.
     2323 Conveniently, the call stack fits that description and is easy to use, which is why it is used heavily in the implementation of internal scheduling, particularly through variable-length arrays.
     2324 Since stack allocation is based on scopes, the first step of the implementation is to identify the scopes that are available to store the information, and which of these can have a variable-length array.
     2325 The threads and the condition both have a fixed amount of memory, while @mutex@ routines and blocking calls allow for an unbounded amount, within the stack size.
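As an illustration, a minimal sketch of such an intrusive queue (hypothetical names): the link field lives inside the thread descriptor, so blocking a thread never allocates.
\begin{cfa}
#include <stddef.h>
struct thread_desc {
	struct thread_desc * next;	$\C{// pre-allocated link field for chaining}$
	// ... remaining thread state
};
struct queue {
	struct thread_desc * head, ** tail;
};
void init( struct queue * q ) { q->head = NULL; q->tail = &q->head; }
void enqueue( struct queue * q, struct thread_desc * t ) {	$\C{// no allocation}$
	t->next = NULL;
	*q->tail = t;
	q->tail = &t->next;
}
\end{cfa}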
    2326 
     2327 Note that since the major contributions of this paper are extending monitor semantics to bulk acquire and loose object definitions, any challenges that do not result from these characteristics of \CFA are considered solved problems and therefore not discussed.
    2328 
    2329 % ======================================================================
    2330 % ======================================================================
    2331 \section{Mutex Routines}
    2332 % ======================================================================
    2333 % ======================================================================
    2334 
    2335 The first step towards the monitor implementation is simple @mutex@ routines.
    2336 In the single monitor case, mutual-exclusion is done using the entry/exit procedure in listing \ref{f:entry1}.
    2337 The entry/exit procedures do not have to be extended to support multiple monitors.
    2338 Indeed it is sufficient to enter/leave monitors one-by-one as long as the order is correct to prevent deadlock~\cite{Havender68}.
    2339 In \CFA, ordering of monitor acquisition relies on memory ordering.
    2340 This approach is sufficient because all objects are guaranteed to have distinct non-overlapping memory layouts and mutual-exclusion for a monitor is only defined for its lifetime, meaning that destroying a monitor while it is acquired is undefined behaviour.
    2341 When a mutex call is made, the concerned monitors are aggregated into a variable-length pointer array and sorted based on pointer values.
     2342 This array persists for the entire duration of the mutual exclusion and its ordering is reused extensively.
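A sketch of this aggregation step (hypothetical names, with @acquire@ standing for the single-monitor entry procedure of listing \ref{f:entry1}):
\begin{cfa}
#include <stdlib.h>
#include <stdint.h>
extern void acquire( struct monitor * );	$\C{// single-monitor entry procedure}$

static int mon_cmp( const void * l, const void * r ) {
	uintptr_t a = (uintptr_t)*(struct monitor * const *)l;
	uintptr_t b = (uintptr_t)*(struct monitor * const *)r;
	return (a > b) - (a < b);	$\C{// order monitors by address}$
}
void bulk_acquire( struct monitor * mons[], size_t n ) {
	qsort( mons, n, sizeof( struct monitor * ), mon_cmp );	$\C{// canonical order}$
	for ( size_t i = 0; i < n; i += 1 )
		acquire( mons[i] );	$\C{// one-by-one, in sorted order, deadlock-free}$
}
\end{cfa}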
    2343 \begin{figure}
    2344 \begin{multicols}{2}
    2345 Entry
    2346 \begin{cfa}
    2347 if monitor is free
    2348         enter
    2349 elif already own the monitor
    2350         continue
    2351 else
    2352         block
    2353 increment recursions
    2354 \end{cfa}
    2355 \columnbreak
    2356 Exit
    2357 \begin{cfa}
    2358 decrement recursion
    2359 if recursion == 0
    2360         if entry queue not empty
    2361                 wake-up thread
    2362 \end{cfa}
    2363 \end{multicols}
    2364 \begin{cfa}[caption={Initial entry and exit routine for monitors},label={f:entry1}]
    2365 \end{cfa}
    2366 \end{figure}
    2367 
    2368 \subsection{Details: Interaction with polymorphism}
    2369 Depending on the choice of semantics for when monitor locks are acquired, interaction between monitors and \CFA's concept of polymorphism can be more complex to support.
    2370 However, it is shown that entry-point locking solves most of the issues.
    2371 
    2372 First of all, interaction between @otype@ polymorphism (see Section~\ref{s:ParametricPolymorphism}) and monitors is impossible since monitors do not support copying.
    2373 Therefore, the main question is how to support @dtype@ polymorphism.
    2374 It is important to present the difference between the two acquiring options: \textbf{callsite-locking} and entry-point locking, \ie acquiring the monitors before making a mutex routine-call or as the first operation of the mutex routine-call.
    2375 For example:
     2486}
     2487
     2488\end{cfa}
     2489\end{tabular}
     2490\end{cquote}
     2491% \lstMakeShortInline@%
     2492% \caption{Threads ping/pong using external scheduling}
     2493% \label{f:pingpong}
     2494% \end{figure}
     2495Note, the ping/pong threads are globally declared, @pi@/@po@, and hence, start (and possibly complete) before the program main starts.
     2496\end{comment}
     2497
     2498
     2499\subsection{Execution Properties}
     2500
     2501Table~\ref{t:ObjectPropertyComposition} shows how the \CFA high-level constructs cover 3 fundamental execution properties: thread, stateful function, and mutual exclusion.
     2502Case 1 is a basic object, with none of the new execution properties.
     2503Case 2 allows @mutex@ calls to Case 1 to protect shared data.
     2504Case 3 allows stateful functions to suspend/resume but restricts operations because the state is stackless.
     2505Case 4 allows @mutex@ calls to Case 3 to protect shared data.
     2506Cases 5 and 6 are the same as 3 and 4 without restriction because the state is stackful.
     2507Cases 7 and 8 are rejected because a thread cannot execute without a stackful state in a preemptive environment when context switching from the signal handler.
     2508Cases 9 and 10 have a stackful thread without and with @mutex@ calls.
     2509For situations where threads do not require direct communication, case 9 provides faster creation/destruction by eliminating @mutex@ setup.
     2510
    23762511\begin{table}
    2377 \begin{center}
    2378 \begin{tabular}{|c|c|c|}
    2379 Mutex & \textbf{callsite-locking} & \textbf{entry-point-locking} \\
    2380 call & cfa-code & cfa-code \\
     2512\caption{Object property composition}
     2513\centering
     2514\label{t:ObjectPropertyComposition}
     2515\renewcommand{\arraystretch}{1.25}
     2516%\setlength{\tabcolsep}{5pt}
     2517\begin{tabular}{c|c||l|l}
     2518\multicolumn{2}{c||}{object properties} & \multicolumn{2}{c}{mutual exclusion} \\
    23812519\hline
    2382 \begin{cfa}[tabsize=3]
    2383 void foo(monitor& mutex a){
    2384 
    2385         // Do Work
    2386         //...
    2387 
    2388 }
    2389 
    2390 void main() {
    2391         monitor a;
    2392 
    2393         foo(a);
    2394 
    2395 }
    2396 \end{cfa} & \begin{cfa}[tabsize=3]
    2397 foo(& a) {
    2398 
    2399         // Do Work
    2400         //...
    2401 
    2402 }
    2403 
    2404 main() {
    2405         monitor a;
    2406         acquire(a);
    2407         foo(a);
    2408         release(a);
    2409 }
    2410 \end{cfa} & \begin{cfa}[tabsize=3]
    2411 foo(& a) {
    2412         acquire(a);
    2413         // Do Work
    2414         //...
    2415         release(a);
    2416 }
    2417 
    2418 main() {
    2419         monitor a;
    2420 
    2421         foo(a);
    2422 
    2423 }
    2424 \end{cfa}
    2425 \end{tabular}
    2426 \end{center}
    2427 \caption{Call-site vs entry-point locking for mutex calls}
    2428 \label{tbl:locking-site}
    2429 \end{table}
    2430 
    2431 Note the @mutex@ keyword relies on the type system, which means that in cases where a generic monitor-routine is desired, writing the mutex routine is possible with the proper trait, \eg:
    2432 \begin{cfa}
    2433 // Incorrect: T may not be monitor
    2434 forall(dtype T)
    2435 void foo(T * mutex t);
    2436 
    2437 // Correct: this routine only works on monitors (any monitor)
    2438 forall(dtype T | is_monitor(T))
     2439 void bar(T * mutex t);
    2440 \end{cfa}
    2441 
     2442 Both entry-point and \textbf{callsite-locking} are feasible implementations.
    2443 The current \CFA implementation uses entry-point locking because it requires less work when using \textbf{raii}, effectively transferring the burden of implementation to object construction/destruction.
    2444 It is harder to use \textbf{raii} for call-site locking, as it does not necessarily have an existing scope that matches exactly the scope of the mutual exclusion, \ie the routine body.
    2445 For example, the monitor call can appear in the middle of an expression.
    2446 Furthermore, entry-point locking requires less code generation since any useful routine is called multiple times but there is only one entry point for many call sites.
    2447 
    2448 % ======================================================================
    2449 % ======================================================================
    2450 \section{Threading} \label{impl:thread}
    2451 % ======================================================================
    2452 % ======================================================================
    2453 
     2454 Figure \ref{fig:system1} shows a high-level picture of the \CFA runtime system with regard to concurrency.
     2455 Each component of the picture is explained in detail in the following sections.
    2456 
    2457 \begin{figure}
    2458 \begin{center}
    2459 {\resizebox{\textwidth}{!}{\input{system.pstex_t}}}
    2460 \end{center}
    2461 \caption{Overview of the entire system}
    2462 \label{fig:system1}
    2463 \end{figure}
    2464 
    2465 \subsection{Processors}
    2466 Parallelism in \CFA is built around using processors to specify how much parallelism is desired. \CFA processors are object wrappers around kernel threads, specifically @pthread@s in the current implementation of \CFA.
    2467 Indeed, any parallelism must go through operating-system libraries.
     2468 However, \textbf{uthread} are still the main source of concurrency; processors are simply the underlying source of parallelism.
     2469 Indeed, processor \textbf{kthread} simply fetch a \textbf{uthread} from the scheduler and run it; they are effectively executors for user-threads.
    2470 The main benefit of this approach is that it offers a well-defined boundary between kernel code and user code, for example, kernel thread quiescing, scheduling and interrupt handling.
    2471 Processors internally use coroutines to take advantage of the existing context-switching semantics.
    2472 
    2473 \subsection{Stack Management}
    2474 One of the challenges of this system is to reduce the footprint as much as possible.
    2475 Specifically, all @pthread@s created also have a stack created with them, which should be used as much as possible.
     2476 Normally, coroutines also create their own stack to run on; however, in the case of the coroutines used for processors, these coroutines run directly on the \textbf{kthread} stack, effectively stealing the processor stack.
     2477 The exception to this rule is the Main Processor, \ie the initial \textbf{kthread} that is given to any program.
     2478 In order to respect C user expectations, the stack of the initial kernel thread, the main stack of the program, is used by the main user thread rather than the main processor, because the main stack can grow very large.
    2479 
    2480 \subsection{Context Switching}
    2481 As mentioned in section \ref{coroutine}, coroutines are a stepping stone for implementing threading, because they share the same mechanism for context-switching between different stacks.
    2482 To improve performance and simplicity, context-switching is implemented using the following assumption: all context-switches happen inside a specific routine call.
    2483 This assumption means that the context-switch only has to copy the callee-saved registers onto the stack and then switch the stack registers with the ones of the target coroutine/thread.
     2484 Note that the instruction pointer can be left untouched since the context-switch is always inside the same routine.
    2485 Threads, however, do not context-switch between each other directly.
    2486 They context-switch to the scheduler.
    2487 This method is called a 2-step context-switch and has the advantage of having a clear distinction between user code and the kernel where scheduling and other system operations happen.
    2488 Obviously, this doubles the context-switch cost because threads must context-switch to an intermediate stack.
    2489 The alternative 1-step context-switch uses the stack of the ``from'' thread to schedule and then context-switches directly to the ``to'' thread.
    2490 However, the performance of the 2-step context-switch is still superior to a @pthread_yield@ (see section \ref{results}).
     2491 Additionally, for users in need of optimal performance, it is important to note that having a 2-step context-switch as the default does not prevent \CFA from offering a 1-step context-switch (akin to the Microsoft @SwitchToFiber@~\cite{switchToWindows} routine).
    2492 This option is not currently present in \CFA, but the changes required to add it are strictly additive.
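A sketch of the 2-step scheme (hypothetical names; @CtxSwitch@ stands for the assembly primitive that saves the callee-saved registers and swaps stack pointers):
\begin{cfa}
struct context { void * SP, * FP; };	$\C{// saved stack/frame pointers}$
struct thread_desc { struct context ctx; };
struct processor { struct context sched; };
extern void CtxSwitch( struct context * from, struct context * to );
extern struct thread_desc * this_thread( void );
extern struct processor * this_processor( void );
extern struct thread_desc * next_ready( void );

void yield( void ) {	$\C{// step 1: user thread to scheduler}$
	CtxSwitch( &this_thread()->ctx, &this_processor()->sched );
}	$\C{// resumes here when rescheduled}$
void scheduler( struct processor * p ) {	$\C{// runs on the processor coroutine}$
	for ( ;; ) {
		struct thread_desc * t = next_ready();
		CtxSwitch( &p->sched, &t->ctx );	$\C{// step 2: scheduler to chosen thread}$
	}
}
\end{cfa}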
    2493 
    2494 \subsection{Preemption} \label{preemption}
    2495 Finally, an important aspect for any complete threading system is preemption.
    2496 As mentioned in section \ref{basics}, preemption introduces an extra degree of uncertainty, which enables users to have multiple threads interleave transparently, rather than having to cooperate among threads for proper scheduling and CPU distribution.
    2497 Indeed, preemption is desirable because it adds a degree of isolation among threads.
    2498 In a fully cooperative system, any thread that runs a long loop can starve other threads, while in a preemptive system, starvation can still occur but it does not rely on every thread having to yield or block on a regular basis, which significantly reduces the programmer's burden.
    2499 Obviously, preemption is not optimal for every workload.
    2500 However, any preemptive system can become a cooperative system by making the time slices extremely large.
    2501 Therefore, \CFA uses a preemptive threading system.
    2502 
    2503 Preemption in \CFA\footnote{Note that the implementation of preemption is strongly tied to the underlying threading system.
    2504 For this reason, only the Linux implementation is covered; \CFA does not run on Windows at the time of writing.} is based on kernel timers, which are used to run a discrete-event simulation.
    2505 Every processor keeps track of the current time and registers an expiration time with the preemption system.
    2506 When the preemption system receives a new expiration time, it inserts the time into a sorted list and sets a kernel timer for the earliest one, effectively stepping through preemption events on each signal sent by the timer.
    2507 These timers use the Linux signal {\tt SIGALRM}, which is delivered to the process rather than the kernel-thread.
    2508 This results in an implementation problem, because when delivering signals to a process, the kernel can deliver the signal to any kernel thread for which the signal is not blocked, \ie:
    2509 \begin{quote}
    2510 A process-directed signal may be delivered to any one of the threads that does not currently have the signal blocked.
    2511 If more than one of the threads has the signal unblocked, then the kernel chooses an arbitrary thread to which to deliver the signal.
    2512 SIGNAL(7) - Linux Programmer's Manual
    2513 \end{quote}
    2514 For the sake of simplicity, and in order to prevent the case of having two threads receiving alarms simultaneously, \CFA programs block the {\tt SIGALRM} signal on every kernel thread except one.
    2515 
    2516 Now, because of how involuntary context-switches are handled, the kernel thread handling {\tt SIGALRM} cannot also be a processor thread.
    2517 Hence, involuntary context-switching is done by sending signal {\tt SIGUSR1} to the corresponding proces\-sor and having the thread yield from inside the signal handler.
    2518 This approach effectively context-switches away from the signal handler back to the kernel and the signal handler frame is eventually unwound when the thread is scheduled again.
    2519 As a result, a signal handler can start on one kernel thread and terminate on a second kernel thread (but the same user thread).
    2520 It is important to note that signal handlers save and restore signal masks because user-thread migration can cause a signal mask to migrate from one kernel thread to another.
    2521 This behaviour is only a problem if all kernel threads, among which a user thread can migrate, differ in terms of signal masks\footnote{Sadly, official POSIX documentation is silent on what distinguishes ``async-signal-safe'' routines from other routines}.
    2522 However, since the kernel thread handling preemption requires a different signal mask, executing user threads on the kernel-alarm thread can cause deadlocks.
    2523 For this reason, the alarm thread is in a tight loop around a system call to @sigwaitinfo@, requiring very little CPU time for preemption.
    2524 One final detail about the alarm thread is how to wake it when additional communication is required (\eg on thread termination).
    2525 This unblocking is also done using {\tt SIGALRM}, but sent through @pthread_sigqueue@.
    2526 Indeed, @sigwait@ can differentiate signals sent from @pthread_sigqueue@ from signals sent from alarms or the kernel.
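The following is a minimal sketch of the alarm thread's loop, assuming the hypothetical helpers @handle_wakeup@ and @next_expired_processor@; the actual runtime differs in detail.
\begin{cfa}
#include <pthread.h>
#include <signal.h>
void handle_wakeup( siginfo_t * info );      // hypothetical helpers
pthread_t next_expired_processor( void );

static void * alarm_thread( void * arg ) {
	sigset_t mask;
	sigemptyset( &mask );
	sigaddset( &mask, SIGALRM );             // wait only for SIGALRM
	for ( ;; ) {
		siginfo_t info;
		if ( sigwaitinfo( &mask, &info ) != SIGALRM ) continue;
		if ( info.si_code == SI_QUEUE ) {    // sent via pthread_sigqueue
			handle_wakeup( &info );          // \eg thread termination
		} else {                             // kernel timer expiry
			// step the discrete-event simulation: preempt the processor
			// whose time slice ended by sending it SIGUSR1
			pthread_kill( next_expired_processor(), SIGUSR1 );
		}
	}
	return 0;
}
\end{cfa}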
    2527 
    2528 \subsection{Scheduler}
    2529 Finally, an aspect not yet mentioned is the scheduling algorithm.
    2530 Currently, the \CFA scheduler uses a single ready queue for all processors, which is the simplest approach to scheduling.
    2531 Further discussion on scheduling appears in section \ref{futur:sched}.
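In sketch form, each processor's scheduling loop over the shared ready queue looks as follows; all names are illustrative assumptions, not the runtime's API.
\begin{cfa}
// Illustrative single-queue multi-server scheduling loop.
void processor_main( processor & this ) {
	for ( ;; ) {
		lock( ready_queue_lock );                  // one queue shared by all processors
		thread_desc * t = pop_head( ready_queue );
		unlock( ready_queue_lock );
		if ( t != 0 ) run_thread( this, t );       // context-switch to the user thread
		else halt( this );                         // block until work arrives
	}
}
\end{cfa}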
    2532 
    2533 % ======================================================================
    2534 % ======================================================================
    2535 \section{Internal Scheduling} \label{impl:intsched}
    2536 % ======================================================================
    2537 % ======================================================================
    2538 The following figure is the traditional illustration of a monitor (repeated from page~\pageref{fig:ClassicalMonitor} for convenience):
    2539 
    2540 \begin{figure}
    2541 \begin{center}
    2542 {\resizebox{0.4\textwidth}{!}{\input{monitor}}}
    2543 \end{center}
    2544 \caption{Traditional illustration of a monitor}
    2545 \end{figure}
    2546 
    2547 This picture has several components, the two most important being the entry queue and the AS-stack.
    2548 The entry queue is an (almost) FIFO list where threads waiting to enter are parked, while the acceptor/signaller (AS) stack is a FILO list used for threads that have been signalled or otherwise marked as running next.
    2549 
    2550 For \CFA, this picture lacks support for waiting on multiple monitors with a single condition.
    2551 To support bulk acquire two changes to this picture are required.
    2552 First, it is no longer helpful to attach the condition to \emph{a single} monitor.
    2553 Second, the thread waiting on the condition has to be separated across multiple monitors, as seen in figure \ref{fig:monitor_cfa}.
    2554 
    2555 \begin{figure}
    2556 \begin{center}
    2557 {\resizebox{0.8\textwidth}{!}{\input{int_monitor}}}
    2558 \end{center}
    2559 \caption{Illustration of \CFA Monitor}
    2560 \label{fig:monitor_cfa}
    2561 \end{figure}
    2562 
    2563 This picture and the proper entry and leave algorithms (see listing \ref{f:entry2}) are the fundamental implementation of internal scheduling.
    2564 Note that when a thread is moved from the condition to the AS-stack, it is conceptually split into N pieces, where N is the number of monitors specified in the parameter list.
    2565 The thread is woken up when all the pieces have popped from the AS-stacks and made active.
    2566 In this picture, the threads are split into halves, but this is only because there are two monitors.
    2567 For a specific signalling operation every monitor needs a piece of thread on its AS-stack.
    2568 
    2569 \begin{figure}
    2570 \begin{multicols}{2}
    2571 Entry
    2572 \begin{cfa}
    2573 if monitor is free
    2574         enter
    2575 elif already own the monitor
    2576         continue
    2577 else
    2578         block
    2579 increment recursion
    2580 
    2581 \end{cfa}
    2582 \columnbreak
    2583 Exit
    2584 \begin{cfa}
    2585 decrement recursion
    2586 if recursion == 0
    2587         if signal_stack not empty
    2588                 set_owner to thread
    2589                 if all monitors ready
    2590                         wake-up thread
    2591 
    2592         if entry queue not empty
    2593                 wake-up thread
    2594 \end{cfa}
    2595 \end{multicols}
    2596 \begin{cfa}[caption={Entry and exit routine for monitors with internal scheduling},label={f:entry2}]
    2597 \end{cfa}
    2598 \end{figure}
    2599 
    2600 The solution discussed in \ref{s:InternalScheduling} can be seen in the exit routine of listing \ref{f:entry2}.
    2601 Basically, the solution boils down to having a separate data structure for the condition queue and the AS-stack, and unconditionally transferring ownership of the monitors but only unblocking the thread when the last monitor has transferred ownership.
    2602 This solution is deadlock safe and prevents any potential barging.
    2603 The data structures used for the AS-stack are reused extensively for external scheduling, but in the case of internal scheduling, the data is allocated using variable-length arrays on the call stack of the @wait@ and @signal_block@ routines.
    2604 
    2605 \begin{figure}
    2606 \begin{center}
    2607 {\resizebox{0.8\textwidth}{!}{\input{monitor_structs.pstex_t}}}
    2608 \end{center}
    2609 \caption{Data structures involved in internal/external scheduling}
    2610 \label{fig:structs}
    2611 \end{figure}
    2612 
    2613 Figure \ref{fig:structs} shows a high-level representation of these data structures.
    2614 The main idea behind them is that a thread cannot contain an arbitrary number of intrusive ``next'' pointers for linking onto monitors.
    2615 The @condition node@ is the data structure that is queued onto a condition variable and, when signalled, the condition queue is popped and each @condition criterion@ is moved to the AS-stack.
    2616 Once all the criteria have been popped from their respective AS-stacks, the thread is woken up, as shown in listing \ref{f:entry2}.
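As a rough illustration, the following sketch shows plausible shapes for these structures; all names and fields are assumptions, not the actual \CFA runtime definitions.
\begin{cfa}
struct monitor_desc;                            // illustrative forward declarations
struct thread_desc;
struct condition_criterion {
	struct monitor_desc * target;               // monitor this criterion waits on
	struct condition_node * owner;              // node this criterion belongs to
	int ready;                                  // popped from its AS-stack yet?
	struct condition_criterion * next;          // intrusive link: no dynamic allocation
};
struct condition_node {
	struct thread_desc * waiting_thread;        // thread blocked on the condition
	struct condition_criterion * criteria;      // N pieces, one per acquired monitor
	unsigned short count;                       // number of monitors
	struct condition_node * next;               // intrusive link onto the condition queue
};
// wait/signal_block allocate the criteria as a variable-length array
// on their own call stack, as described above.
\end{cfa}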
    2617 
    2618 % ======================================================================
    2619 % ======================================================================
    2620 \section{External Scheduling}
    2621 % ======================================================================
    2622 % ======================================================================
    2623 Similarly to internal scheduling, external scheduling for multiple monitors relies on the idea that waiting-thread queues are no longer specific to a single monitor, as mentioned in section \ref{extsched}.
    2624 For internal scheduling, these queues are part of condition variables, which are still unique for a given scheduling operation (\ie no signal statement uses multiple conditions).
    2625 However, in the case of external scheduling, there is no equivalent object which is associated with @waitfor@ statements.
    2626 This absence means the queues holding the waiting threads must be stored inside at least one of the monitors that is acquired.
    2627 These monitors are the only objects that have sufficient lifetime and are available on both sides of the @waitfor@ statement.
    2628 This requires an algorithm to choose which monitor holds the relevant queue.
    2629 It is also important that said algorithm be independent of the order in which users list parameters.
    2630 The proposed algorithm is to fall back on monitor lock ordering (sorting by address) and specify that the monitor that is acquired first is the one with the relevant waiting queue.
    2631 This assumes that the lock-acquisition order is static for the lifetime of all concerned objects, but that is a reasonable constraint.
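A sketch of this choice, with illustrative names only: among the acquired monitors, the one with the lowest address is acquired first and therefore holds the waiting queue.
\begin{cfa}
struct monitor_desc;                            // illustrative forward declaration
struct monitor_desc * queue_holder( struct monitor_desc * mons[], int count ) {
	struct monitor_desc * min = mons[0];
	for ( int i = 1; i < count; i += 1 )
		if ( mons[i] < min ) min = mons[i];     // lock ordering sorts by address
	return min;                                 // acquired first, so it stores the queue
}
\end{cfa}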
    2632 
    2633 This algorithm choice has two consequences:
    2634 \begin{itemize}
    2635         \item The queue of the monitor with the lowest address is no longer a true FIFO queue because threads can be moved to the front of the queue.
    2636 These queues need to contain a set of monitors for each of the waiting threads.
    2637 Therefore, another thread whose set contains the same lowest-address monitor but different lower-priority monitors may arrive first but enter the critical section after a thread with the correct pairing.
    2638         \item The queue of the lowest priority monitor is both required and potentially unused.
    2639 Indeed, since it is not known at compile time which monitor has the lowest address, every monitor needs to have the correct queues even though some queues may go unused for the entire duration of the program, for example if a monitor is only used in a specific pair.
    2640 \end{itemize}
    2641 Therefore, the following modifications need to be made to support external scheduling:
    2642 \begin{itemize}
    2643         \item The threads waiting on the entry queue need to keep track of which routine they are trying to enter, and with which set of monitors.
    2644 The @mutex@ routine already has all the required information on its stack, so the thread only needs to keep a pointer to that information.
    2645         \item The monitors need to keep a mask of acceptable routines; a sketch of such a mask appears after this list.
    2646 This mask contains, for each acceptable routine, a routine pointer and an array of associated monitors.
    2647 It also needs storage to keep track of which routine was accepted.
    2648 Since this information is not specific to any monitor, the monitors actually contain a pointer to an integer on the stack of the waiting thread.
    2649 Note that if a thread has acquired two monitors but executes a @waitfor@ with only one monitor as a parameter, setting the mask of acceptable routines to both monitors will not cause any problems since the extra monitor will not change ownership regardless.
    2650 This becomes relevant when @when@ clauses affect the number of monitors passed to a @waitfor@ statement.
    2651         \item The entry/exit routines need to be updated as shown in listing \ref{f:entry3}.
    2652 \end{itemize}
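The following is a minimal sketch of such a mask, under the assumption of illustrative names and layout; the actual runtime structures differ.
\begin{cfa}
typedef void (* mutex_routine)();               // accepted routine entry point
struct waitfor_entry {
	mutex_routine routine;                      // routine pointer for one accept clause
	struct monitor_desc ** monitors;            // array of monitors that go with it
	unsigned short count;                       // size of the monitor array
};
struct waitfor_mask {
	struct waitfor_entry * entries;             // one entry per acceptable routine
	unsigned short size;                        // number of acceptable routines
	int * accepted;                             // index of the accepted routine, pointing
	                                            // to an integer on the waiting thread's stack
};
\end{cfa}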
    2653 
    2654 \subsection{External Scheduling - Destructors}
    2655 Finally, to support the ordering inversion of destructors, the code generation needs to be modified to use a special entry routine.
    2656 This routine is needed because of the storage requirements of the call-order inversion.
    2657 Indeed, when waiting for the destructors, storage is needed for the waiting context, and that storage must outlive the waiting operation it supports.
    2658 For regular @waitfor@ statements, the call stack of the routine itself matches this requirement, but that is no longer the case when waiting for the destructor, since the waiting context is pushed onto the AS-stack for later.
    2659 The @waitfor@ semantics can then be adjusted correspondingly, as seen in listing \ref{f:entry-dtor}.
    2660 
    2661 \begin{figure}
    2662 \begin{multicols}{2}
    2663 Entry
    2664 \begin{cfa}
    2665 if monitor is free
    2666         enter
    2667 elif already own the monitor
    2668         continue
    2669 elif matches waitfor mask
    2670         push criteria to AS-stack
    2671         continue
    2672 else
    2673         block
    2674 increment recursion
    2675 \end{cfa}
    2676 \columnbreak
    2677 Exit
    2678 \begin{cfa}
    2679 decrement recursion
    2680 if recursion == 0
    2681         if signal_stack not empty
    2682                 set_owner to thread
    2683                 if all monitors ready
    2684                         wake-up thread
    2685                 endif
    2686         endif
    2687 
    2688         if entry queue not empty
    2689                 wake-up thread
    2690         endif
    2691 \end{cfa}
    2692 \end{multicols}
    2693 \begin{cfa}[caption={Entry and exit routine for monitors with internal scheduling and external scheduling},label={f:entry3}]
    2694 \end{cfa}
    2695 \end{figure}
    2696 
    2697 \begin{figure}
    2698 \begin{multicols}{2}
    2699 Destructor Entry
    2700 \begin{cfa}
    2701 if monitor is free
    2702         enter
    2703 elif already own the monitor
    2704         increment recursion
    2705         return
    2706 create wait context
    2707 if matches waitfor mask
    2708         reset mask
    2709         push self to AS-stack
    2710         baton pass
    2711 else
    2712         wait
    2713 increment recursion
    2714 \end{cfa}
    2715 \columnbreak
    2716 Waitfor
    2717 \begin{cfa}
    2718 if matching thread is already there
    2719         if found destructor
    2720                 push destructor to AS-stack
    2721                 unlock all monitors
    2722         else
    2723                 push self to AS-stack
    2724                 baton pass
    2725         endif
    2726         return
    2727 endif
    2728 if non-blocking
    2729         Unlock all monitors
    2730         Return
    2731 endif
    2732 
    2733 push self to AS-stack
    2734 set waitfor mask
    2735 block
    2736 return
    2737 \end{cfa}
    2738 \end{multicols}
    2739 \begin{cfa}[caption={Pseudo code for the \protect\lstinline|waitfor| routine and the \protect\lstinline|mutex| entry routine for destructors},label={f:entry-dtor}]
    2740 \end{cfa}
    2741 \end{figure}
    2742 
    2743 
    2744 % ======================================================================
    2745 % ======================================================================
    2746 \section{Putting It All Together}
    2747 % ======================================================================
    2748 % ======================================================================
    2749 
    2750 
    2751 \section{Threads As Monitors}
    2752 As subtly alluded to in section \ref{threads}, @thread@s in \CFA are in fact monitors, which means that all monitor features are available when using threads.
    2753 For example, here is a very simple two thread pipeline that could be used for a simulator of a game engine:
    2754 \begin{figure}
    2755 \begin{cfa}[caption={Toy simulator using \protect\lstinline|thread|s and \protect\lstinline|monitor|s.},label={f:engine-v1}]
    2756 // Visualization declaration
    2757 thread Renderer {} renderer;
    2758 void render( Renderer & this );
    2759 
    2760 // Simulation declaration
    2761 thread Simulator{} simulator;
    2762 Frame * simulate( Simulator & this );
    2763 
    2764 // Blocking call used as communication
    2765 void draw( Renderer & mutex this, Frame * frame );
    2766 
    2767 // Simulation loop
    2768 void main( Simulator & this ) {
    2769         while( true ) {
    2770                 Frame * frame = simulate( this );
    2771                 draw( renderer, frame );
    2772         }
    2773 }
    2774 
    2775 // Rendering loop
    2776 void main( Renderer & this ) {
    2777         while( true ) {
    2778                 waitfor( draw, this );
    2779                 render( this );
    2780         }
    2781 }
    2782 \end{cfa}
    2783 \end{figure}
    2784 One of the obvious complaints about the previous code snippet (other than its toy-like simplicity) is that it does not handle exit conditions and just goes on forever.
    2785 Luckily, the monitor semantics can also be used to clearly enforce a shutdown order in a concise manner:
    2786 \begin{figure}
    2787 \begin{cfa}[caption={Same toy simulator with proper termination condition.},label={f:engine-v2}]
    2788 // Visualization declaration
    2789 thread Renderer {} renderer;
    2790 void render( Renderer & this );
    2791 
    2792 // Simulation declaration
    2793 thread Simulator{} simulator;
    2794 Frame * simulate( Simulator & this );
    2795 
    2796 // Blocking call used as communication
    2797 void draw( Renderer & mutex this, Frame * frame );
    2798 
    2799 // Simulation loop
    2800 void main( Simulator & this ) {
    2801         while( true ) {
    2802                 Frame * frame = simulate( this );
    2803                 draw( renderer, frame );
    2804 
    2805                 // Exit main loop after the last frame
    2806                 if( frame->is_last ) break;
    2807         }
    2808 }
    2809 
    2810 // Rendering loop
    2811 void main( Renderer & this ) {
    2812         while( true ) {
    2813                    waitfor( draw, this );
    2814                 or waitfor( ^?{}, this ) {
    2815                         // Add an exit condition
    2816                         break;
    2817                 }
    2818 
    2819                 render( this );
    2820         }
    2821 }
    2822 
    2823 // Call destructor for simulator once simulator finishes
    2824 // Call destructor for renderer to signify shutdown
    2825 \end{cfa}
    2826 \end{figure}
    2827 
    2828 \section{Fibers \& Threads}
    2829 As mentioned in section \ref{preemption}, \CFA uses preemptive threads by default but can use fibers on demand.
    2830 Currently, using fibers is done by adding the following routine to the program:
    2831 \begin{cfa}
    2832 unsigned int default_preemption() {
    2833         return 0;
    2834 }
    2835 \end{cfa}
    2836 This routine is called by the kernel to fetch the default preemption rate, where 0 signifies an infinite time-slice, \ie no preemption.
    2837 However, once clusters are fully implemented, it will be possible to create fibers and \textbf{uthread}s in the same system, as in listing \ref{f:fiber-uthread}.
    2838 \begin{figure}
    2839 \lstset{language=CFA,deletedelim=**[is][]{`}{`}}
    2840 \begin{cfa}[caption={Using fibers and \textbf{uthread} side-by-side in \CFA},label={f:fiber-uthread}]
    2841 // Cluster forward declaration
    2842 struct cluster;
    2843 
    2844 // Processor forward declaration
    2845 struct processor;
    2846 
    2847 // Construct clusters with a preemption rate
    2848 void ?{}(cluster& this, unsigned int rate);
    2849 // Construct processor and add it to cluster
    2850 void ?{}(processor& this, cluster& cluster);
    2851 // Construct thread and schedule it on cluster
    2852 void ?{}(thread& this, cluster& cluster);
    2853 
    2854 // Declare two clusters
    2855 cluster thread_cluster = { 10`ms };                     // Preempt every 10 ms
    2856 cluster fibers_cluster = { 0 };                         // Never preempt
    2857 
    2858 // Construct 4 processors
    2859 processor processors[4] = {
    2860         // 2 for the thread cluster
    2861         thread_cluster,
    2862         thread_cluster,
    2863         // 2 for the fibers cluster
    2864         fibers_cluster,
    2865         fibers_cluster
    2866 };
    2867 
    2868 // Declares thread
    2869 thread UThread {};
    2870 void ?{}(UThread& this) {
    2871         // Construct underlying thread to automatically
    2872         // be scheduled on the thread cluster
    2873         (this){ thread_cluster };
    2874 }
    2875 
    2876 void main(UThread & this);
    2877 
    2878 // Declares fibers
    2879 thread Fiber {};
    2880 void ?{}(Fiber& this) {
    2881         // Construct underlying thread to automatically
    2882         // be scheduled on the fiber cluster
    2883         (this){ fibers_cluster };
    2884 }
    2885 
    2886 void main(Fiber & this);
    2887 \end{cfa}
    2888 \end{figure}
    2889 
    2890 
    2891 % ======================================================================
    2892 % ======================================================================
    2893 \section{Performance Results} \label{results}
    2894 % ======================================================================
    2895 % ======================================================================
    2896 \section{Machine Setup}
    2897 Table \ref{tab:machine} shows the characteristics of the machine used to run the benchmarks.
    2898 All tests were run on this machine.
    2899 \begin{table}
    2900 \begin{center}
    2901 \begin{tabular}{| l | r | l | r |}
    2902 \hline
    2903 Architecture            & x86\_64                       & NUMA node(s)  & 8 \\
    2904 \hline
    2905 CPU op-mode(s)          & 32-bit, 64-bit                & Model name    & AMD Opteron\texttrademark  Processor 6380 \\
    2906 \hline
    2907 Byte Order                      & Little Endian                 & CPU Freq              & 2.5\si{\giga\hertz} \\
    2908 \hline
    2909 CPU(s)                  & 64                            & L1d cache     & \SI{16}{\kibi\byte} \\
    2910 \hline
    2911 Thread(s) per core      & 2                             & L1i cache     & \SI{64}{\kibi\byte} \\
    2912 \hline
    2913 Core(s) per socket      & 8                             & L2 cache              & \SI{2048}{\kibi\byte} \\
    2914 \hline
    2915 Socket(s)                       & 4                             & L3 cache              & \SI{6144}{\kibi\byte} \\
     2520thread  & stateful                              & \multicolumn{1}{c|}{No} & \multicolumn{1}{c}{Yes} \\
    29162521\hline
    29172522\hline
    2918 Operating system                & Ubuntu 16.04.3 LTS    & Kernel                & Linux 4.4-97-generic \\
     2523No              & No                                    & \textbf{1}\ \ \ aggregate type                & \textbf{2}\ \ \ @monitor@ aggregate type \\
    29192524\hline
    2920 Compiler                        & GCC 6.3               & Translator    & CFA 1 \\
     2525No              & Yes (stackless)               & \textbf{3}\ \ \ @generator@                   & \textbf{4}\ \ \ @monitor@ @generator@ \\
    29212526\hline
    2922 Java version            & OpenJDK-9             & Go version    & 1.9.2 \\
     2527No              & Yes (stackful)                & \textbf{5}\ \ \ @coroutine@                   & \textbf{6}\ \ \ @monitor@ @coroutine@ \\
    29232528\hline
     2529Yes             & No / Yes (stackless)  & \textbf{7}\ \ \ {\color{red}rejected} & \textbf{8}\ \ \ {\color{red}rejected} \\
     2530\hline
     2531Yes             & Yes (stackful)                & \textbf{9}\ \ \ @thread@                              & \textbf{10}\ \ @monitor@ @thread@ \\
    29242532\end{tabular}
    2925 \end{center}
    2926 \caption{Machine setup used for the tests}
    2927 \label{tab:machine}
    29282533\end{table}
    29292534
    2930 \section{Micro Benchmarks}
    2931 All benchmarks are run using the same harness to produce the results, seen as the @BENCH()@ macro in the following examples.
    2932 This macro uses the following logic to benchmark the code:
    2933 \begin{cfa}
    2934 #define BENCH(run, result) \
    2935         before = gettime(); \
    2936         run; \
    2937         after  = gettime(); \
    2938         result = (after - before) / N;
    2939 \end{cfa}
    2940 The method used to get time is @clock_gettime(CLOCK_THREAD_CPUTIME_ID);@.
    2941 Each benchmark uses many iterations of a simple call to measure the cost of the call.
    2942 The specific number of iterations depends on the specific benchmark.
    2943 
    2944 \subsection{Context-Switching}
    2945 The first interesting benchmark is to measure how long context-switches take.
    2946 The simplest approach to do this is to yield on a thread, which executes a 2-step context switch.
    2947 Yielding causes the thread to context-switch to the scheduler and back, more precisely: from the \textbf{uthread} to the \textbf{kthread} then from the \textbf{kthread} back to the same \textbf{uthread} (or a different one in the general case).
    2948 In order to make the comparison fair, coroutines also execute a 2-step context-switch by resuming another coroutine, which does nothing but suspend in a tight loop; this is a resume/suspend cycle instead of a yield.
    2949 Figure~\ref{f:ctx-switch} shows the code for coroutines and threads with the results in table \ref{tab:ctx-switch}.
    2950 All omitted tests are functionally identical to one of these tests.
    2951 The difference between coroutines and threads can be attributed to the cost of scheduling.
     2535
     2536\subsection{Low-level Locks}
     2537
     2538For completeness and efficiency, \CFA provides a standard set of low-level locks: recursive mutex, condition, semaphore, barrier, \etc, and atomic instructions: @fetchAssign@, @fetchAdd@, @testSet@, @compareSet@, \etc.
     2539Some of these low-level mechanisms are used in the \CFA runtime, but we strongly advocate using high-level mechanisms whenever possible.
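As a small illustration of these primitives, here is a test-and-test-and-set spinlock sketch; the @testSet@ signature shown is an assumption, analogous to C11 @atomic_exchange@, not the documented \CFA interface.
\begin{cfa}
struct Spinlock { volatile int taken; };
void lock( Spinlock & this ) {
	for ( ;; ) {
		if ( this.taken == 0                     // test: cheap read while contended
			&& ! testSet( this.taken ) ) break;  // then atomic test-and-set
	}
}
void unlock( Spinlock & this ) { this.taken = 0; }
\end{cfa}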
     2540
     2541
     2542% \section{Parallelism}
     2543% \label{s:Parallelism}
     2544%
     2545% Historically, computer performance was about processor speeds.
     2546% However, with heat dissipation being a direct consequence of speed increase, parallelism is the new source for increased performance~\cite{Sutter05, Sutter05b}.
     2547% Therefore, high-performance applications must care about parallelism, which requires concurrency.
     2548% The lowest-level approach of parallelism is to use \newterm{kernel threads} in combination with semantics like @fork@, @join@, \etc.
     2549% However, kernel threads are better as an implementation tool because of complexity and higher cost.
     2550% Therefore, different abstractions are often layered onto kernel threads to simplify them, \eg pthreads.
     2551%
     2552%
     2553% \subsection{User Threads}
     2554%
     2555% A direct improvement on kernel threads is user threads, \eg Erlang~\cite{Erlang} and \uC~\cite{uC++book}.
     2556% This approach provides an interface that matches the language paradigms, gives more control over concurrency by the language runtime, and an abstract (and portable) interface to the underlying kernel threads across operating systems.
     2557% In many cases, user threads can be used on a much larger scale (100,000 threads).
     2558% Like kernel threads, user threads support preemption, which maximizes nondeterminism, but increases the potential for concurrency errors: race, livelock, starvation, and deadlock.
     2559% \CFA adopts user-threads to provide more flexibility and a low-cost mechanism to build any other concurrency approach, \eg thread pools and actors~\cite{Actors}.
     2560%
     2561% A variant of user thread is \newterm{fibres}, which removes preemption, \eg Go~\cite{Go} @goroutine@s.
     2562% Like functional programming, which removes mutation and its associated problems, removing preemption from concurrency reduces nondeterminism, making race and deadlock errors more difficult to generate.
     2563% However, preemption is necessary for fairness and to reduce tail-latency.
     2564% For concurrency that relies on spinning, if all cores spin the system is livelocked, whereas preemption breaks the livelock.
     2565
     2566
     2567\begin{comment}
     2568\subsection{Thread Pools}
     2569
     2570In contrast to direct threading is indirect \newterm{thread pools}, \eg Java @executor@, where small jobs (work units) are inserted into a work pool for execution.
     2571If the jobs are dependent, \ie interact, there is an implicit/explicit dependency graph that ties them together.
     2572While removing direct concurrency, and hence the amount of context switching, thread pools significantly limit the interaction that can occur among jobs.
     2573Indeed, jobs should not block because that also blocks the underlying thread, which effectively means the CPU utilization, and therefore throughput, suffers.
     2574While it is possible to tune the thread pool with sufficient threads, it becomes difficult to obtain high throughput and good core utilization as job interaction increases.
     2575As well, concurrency errors return, which threads pools are suppose to mitigate.
     2576
    29522577\begin{figure}
     2578\centering
     2579\begin{tabular}{@{}l|l@{}}
     2580\begin{cfa}
     2581struct Adder {
     2582    int * row, cols; int & subtotal;
     2583};
     2584int operator()() {
     2585        subtotal = 0;
     2586        for ( int c = 0; c < cols; c += 1 )
     2587                subtotal += row[c];
     2588        return subtotal;
     2589}
     2590void ?{}( Adder & adder, int row[$\,$], int cols, int & subtotal ) {
     2591        adder.[row, cols, subtotal] = [row, cols, subtotal];
     2592}
     2593
     2594
     2595
     2596
     2597\end{cfa}
     2598&
     2599\begin{cfa}
     2600int main() {
     2601        const int rows = 10, cols = 10;
     2602        int matrix[rows][cols], subtotals[rows], total = 0;
     2603        // read matrix
     2604        Executor executor( 4 ); // kernel threads
     2605        Adder * adders[rows];
     2606        for ( r; rows ) { // send off work for executor
     2607                adders[r] = new( matrix[r], cols, &subtotal[r] );
     2608                executor.send( *adders[r] );
     2609        }
     2610        for ( r; rows ) {       // wait for results
     2611                delete( adders[r] );
     2612                total += subtotals[r];
     2613        }
     2614        sout | total;
     2615}
     2616\end{cfa}
     2617\end{tabular}
     2618\caption{Executor}
     2619\end{figure}
     2620\end{comment}
     2621
     2622
     2623\section{Runtime Structure}
     2624\label{s:CFARuntimeStructure}
     2625
     2626Figure~\ref{f:RunTimeStructure} illustrates the runtime structure of a \CFA program.
     2627In addition to the new kinds of objects introduced by \CFA, there are two more runtime entities used to control parallel execution: cluster and (virtual) processor.
     2628An executing thread is illustrated by its containment in a processor.
     2629
     2630\begin{figure}
     2631\centering
     2632\input{RunTimeStructure}
     2633\caption{\CFA Runtime structure}
     2634\label{f:RunTimeStructure}
     2635\end{figure}
     2636
     2637
     2638\subsection{Cluster}
     2639\label{s:RuntimeStructureCluster}
     2640
     2641A \newterm{cluster} is a collection of threads and virtual processors (abstract kernel-threads) that execute the (user) threads from its own ready queue (like an OS executing kernel threads).
     2642The purpose of a cluster is to control the amount of parallelism that is possible among threads, plus scheduling and other execution defaults.
     2643The default cluster-scheduler is single-queue multi-server, which provides automatic load-balancing of threads on processors.
     2644However, the design allows changing the scheduler, \eg multi-queue multi-server with work-stealing/sharing across the virtual processors.
     2645If several clusters exist, both threads and virtual processors can be explicitly migrated from one cluster to another.
     2646No automatic load balancing among clusters is performed by \CFA.
     2647
     2648When a \CFA program begins execution, it creates a user cluster with a single processor, plus a special processor that handles preemption but does not execute user threads.
     2649The user cluster is created to contain the application user-threads.
     2650Having all threads execute on the one cluster often maximizes utilization of processors, which minimizes runtime.
     2651However, because of limitations of scheduling requirements (real-time), NUMA architecture, heterogeneous hardware, or issues with the underlying operating system, multiple clusters are sometimes necessary.
     2652
     2653
     2654\subsection{Virtual Processor}
     2655\label{s:RuntimeStructureProcessor}
     2656
     2657A virtual processor is implemented by a kernel thread (\eg UNIX process), which is scheduled for execution on a hardware processor by the underlying operating system.
     2658Programs may use more virtual processors than hardware processors.
     2659On a multiprocessor, kernel threads are distributed across the hardware processors resulting in virtual processors executing in parallel.
     2660(It is possible to use affinity to lock a virtual processor onto a particular hardware processor~\cite{affinityLinux, affinityWindows, affinityFreebsd, affinityNetbsd, affinityMacosx}, which is used when caching issues occur or for heterogeneous hardware processors.)
     2661The \CFA runtime attempts to block unused processors and unblock processors as the system load increases;
     2662balancing the workload with processors is difficult because it requires future knowledge, \ie what the application workload will do next.
     2663Preemption occurs on virtual processors rather than user threads, via operating-system interrupts.
     2664Thus virtual processors execute user threads, where preemption frequency applies to a virtual processor, so preemption occurs randomly across the executed user threads.
     2665Turning off preemption transforms user threads into fibres.
     2666
     2667
     2668\begin{comment}
     2669\section{Implementation}
     2670\label{s:Implementation}
     2671
     2672A primary implementation challenge is avoiding contention from dynamically allocating memory because of bulk acquire, \eg the internal-scheduling design is (almost) free of allocations.
     2673All blocking operations are made by parking threads onto queues, therefore all queues are designed with intrusive nodes, where each node has preallocated link fields for chaining.
     2674Furthermore, several bulk-acquire operations need a variable amount of memory.
     2675This storage is allocated at the base of a thread's stack before blocking, which means programmers must add a small amount of extra space for stacks.
     2676
     2677In \CFA, ordering of monitor acquisition relies on memory ordering to prevent deadlock~\cite{Havender68}, because all objects have distinct non-overlapping memory layouts, and mutual-exclusion for a monitor is only defined for its lifetime.
     2678When a mutex call is made, pointers to the concerned monitors are aggregated into a variable-length array and sorted.
     2679This array persists for the entire duration of the mutual exclusion and is used extensively for synchronization operations.
     2680
     2681To improve performance and simplicity, context switching occurs inside a function call, so only callee-saved registers are copied onto the stack and then the stack register is switched;
     2682the corresponding registers are then restored for the other context.
     2683Note, the instruction pointer is untouched since the context switch is always inside the same function.
     2684Experimental results (not presented) for a stackless or stackful scheduler (1 versus 2 context switches) (see Section~\ref{s:Concurrency}) show the performance is virtually equivalent, because both approaches are dominated by locking to prevent a race condition.
     2685
     2686All kernel threads (@pthreads@) are created with a stack.
     2687Each \CFA virtual processor is implemented as a coroutine and these coroutines run directly on the kernel-thread stack, effectively stealing this stack.
     2688The exception to this rule is the program main, \ie the initial kernel thread that is given to any program.
     2689In order to respect C expectations, the stack of the initial kernel thread is used by program main rather than the main processor, allowing it to grow dynamically as in a normal C program.
     2690\end{comment}
     2691
     2692
     2693\subsection{Preemption}
     2694
     2695Nondeterministic preemption provides fairness in the presence of long-running threads, and forces concurrent programmers to write more robust programs, rather than relying on code between cooperative scheduling points being atomic.
     2696This atomic reliance can fail on multi-core machines, because execution across cores is nondeterministic.
     2697A different reason for not supporting preemption is that it significantly complicates the runtime system, \eg the Microsoft runtime does not support interrupts, and on Linux systems interrupts are complex (see below).
     2698Preemption is normally handled by setting a countdown timer on each virtual processor.
     2699When the timer expires, an interrupt is delivered, and the interrupt handler resets the countdown timer. If the virtual processor is executing in user code, the signal handler performs a user-level context-switch; if it is executing in the language runtime kernel, the preemption is ignored or rolled forward to the point where the runtime kernel context switches back to user code.
     2700Multiple signal handlers may be pending.
     2701When control eventually switches back to the signal handler, it returns normally, and execution continues in the interrupted user thread, even though the return from the signal handler may be on a different kernel thread than the one where the signal is delivered.
     2702The only issue with this approach is that signal masks from one kernel thread may be restored on another as part of returning from the signal handler;
     2703therefore, the same signal mask is required for all virtual processors in a cluster.
     2704Because the preemption interval is usually long (1 millisecond), the performance cost is negligible.
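A minimal sketch of such a handler follows, assuming the hypothetical helpers @in_runtime_kernel@ and @defer_preemption@.
\begin{cfa}
static void preempt_handler( int sig, siginfo_t * info, void * context ) {
	if ( in_runtime_kernel() ) {                // inside the language runtime kernel?
		defer_preemption();                     // ignore or roll forward to kernel exit
		return;
	}
	yield();                                    // user-level context-switch; this handler
	                                            // frame is unwound when the thread is
	                                            // rescheduled, possibly on a different
	                                            // kernel thread
}
\end{cfa}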
     2705
     2706Linux switched a decade ago from specific to arbitrary process signal-delivery for applications with multiple kernel threads.
     2707\begin{cquote}
     2708A process-directed signal may be delivered to any one of the threads that does not currently have the signal blocked.
     2709If more than one of the threads has the signal unblocked, then the kernel chooses an arbitrary thread to which it will deliver the signal.
     2710SIGNAL(7) - Linux Programmer's Manual
     2711\end{cquote}
     2712Hence, the timer-expiry signal, which is generated \emph{externally} by the Linux kernel to an application, is delivered to any of its Linux subprocesses (kernel threads).
     2713To ensure each virtual processor receives a preemption signal, a discrete-event simulation is run on a special virtual processor, and only it sets and receives timer events.
     2714Virtual processors register an expiration time with the discrete-event simulator, which is inserted in sorted order.
     2715The simulation sets the countdown timer to the value at the head of the event list, and when the timer expires, all events less than or equal to the current time are processed.
     2716Processing a preemption event sends an \emph{internal} @SIGUSR1@ signal to the registered virtual processor, which is always delivered to that processor.
     2717
     2718
     2719\subsection{Debug Kernel}
     2720
     2721There are two versions of the \CFA runtime kernel: debug and non-debug.
     2722The debugging version has many runtime checks and internal assertions, \eg stack (non-writable) guard page, and checks for stack overflow whenever context switches occur among coroutines and threads, which catches most stack overflows.
     2723After a program is debugged, the non-debugging version can be used to significantly decrease space and increase performance.
     2724
     2725
     2726\section{Performance}
     2727\label{s:Performance}
     2728
     2729To verify the implementation of the \CFA runtime, a series of microbenchmarks are performed comparing \CFA with pthreads, Java OpenJDK-9, Go 1.12.6 and \uC 7.0.0.
     2730For comparison, the package must be multi-processor (M:N), which excludes libdill/libmil~\cite{libdill} (M:1), and use a shared-memory programming model, \eg not message passing.
     2731The benchmark computer is an AMD Opteron\texttrademark\ 6380 NUMA 64-core, 8 socket, 2.5 GHz processor, running Ubuntu 16.04.6 LTS, and \CFA/\uC are compiled with gcc 6.5.
     2732
     2733All benchmarks are run using the following harness. (The Java harness is augmented to circumvent JIT issues.)
     2734\begin{cfa}
     2735unsigned int N = 10_000_000;
     2736#define BENCH( `run` ) Time before = getTimeNsec();  `run;`  Duration result = (getTimeNsec() - before) / N;
     2737\end{cfa}
     2738The method used to get time is @clock_gettime( CLOCK_REALTIME )@.
     2739Each benchmark is performed @N@ times, where @N@ varies depending on the benchmark;
     2740the total time is divided by @N@ to obtain the average time for a benchmark.
     2741Each benchmark experiment is run 31 times.
     2742All omitted tests for other languages are functionally identical to the \CFA tests and available online~\cite{CforallBenchMarks}.
     2743% tar --exclude=.deps --exclude=Makefile --exclude=Makefile.in --exclude=c.c --exclude=cxx.cpp --exclude=fetch_add.c -cvhf benchmark.tar benchmark
     2744
     2745\paragraph{Object Creation}
     2746
     2747Object creation is measured by creating/deleting the specific kind of concurrent object.
     2748Figure~\ref{f:creation} shows the code for \CFA, with results in Table~\ref{tab:creation}.
     2749The only note here is that the call stacks of \CFA coroutines are lazily created; therefore, without priming the coroutine to force stack creation, the creation cost is artificially low.
     2750
    29532751\begin{multicols}{2}
    2954 \CFA Coroutines
    2955 \begin{cfa}
    2956 coroutine GreatSuspender {};
    2957 void main(GreatSuspender& this) {
    2958         while(true) { suspend(); }
    2959 }
     2752\lstset{language=CFA,moredelim=**[is][\color{red}]{@}{@},deletedelim=**[is][]{`}{`}}
     2753\begin{cfa}
     2754@thread@ MyThread {};
     2755void @main@( MyThread & ) {}
    29602756int main() {
    2961         GreatSuspender s;
    2962         resume(s);
     2757        BENCH( for ( N ) { @MyThread m;@ } )
     2758        sout | result`ns;
     2759}
     2760\end{cfa}
     2761\captionof{figure}{\CFA object-creation benchmark}
     2762\label{f:creation}
     2763
     2764\columnbreak
     2765
     2766\vspace*{-16pt}
     2767\captionof{table}{Object creation comparison (nanoseconds)}
     2768\label{tab:creation}
     2769
     2770\begin{tabular}[t]{@{}r*{3}{D{.}{.}{5.2}}@{}}
     2771\multicolumn{1}{@{}c}{} & \multicolumn{1}{c}{Median} & \multicolumn{1}{c}{Average} & \multicolumn{1}{c@{}}{Std Dev} \\
     2772\CFA Coroutine Lazy             & 13.2          & 13.1          & 0.44          \\
     2773\CFA Coroutine Eager    & 531.3         & 536.0         & 26.54         \\
     2774\CFA Thread                             & 2074.9        & 2066.5        & 170.76        \\
     2775\uC Coroutine                   & 89.6          & 90.5          & 1.83          \\
     2776\uC Thread                              & 528.2         & 528.5         & 4.94          \\
     2777Goroutine                               & 4068.0        & 4113.1        & 414.55        \\
     2778Java Thread                             & 103848.5      & 104295.4      & 2637.57       \\
     2779Pthreads                                & 33112.6       & 33127.1       & 165.90
     2780\end{tabular}
     2781\end{multicols}
     2782
     2783
     2784\paragraph{Context-Switching}
     2785
     2786In procedural programming, the cost of a function call is important as modularization (refactoring) increases.
     2787(In many cases, a compiler inlines function calls to eliminate this cost.)
     2788Similarly, when modularization extends to coroutines/tasks, the time for a context switch becomes a relevant factor.
     2789The coroutine test is from resumer to suspender and from suspender to resumer, which is two context switches.
     2790The thread test is using yield to enter and return from the runtime kernel, which is two context switches.
     2791The difference in performance between coroutine and thread context-switch is the cost of scheduling for threads, whereas coroutines are self-scheduling.
     2792Figure~\ref{f:ctx-switch} only shows the \CFA code for coroutines/threads (other systems are similar) with all results in Table~\ref{tab:ctx-switch}.
     2793
     2794\begin{multicols}{2}
     2795\lstset{language=CFA,moredelim=**[is][\color{red}]{@}{@},deletedelim=**[is][]{`}{`}}
     2796\begin{cfa}[aboveskip=0pt,belowskip=0pt]
     2797@coroutine@ C {} c;
     2798void main( C & ) { for ( ;; ) { @suspend;@ } }
     2799int main() { // coroutine test
     2800        BENCH( for ( N ) { @resume( c );@ } )
     2801        sout | result`ns;
     2802}
     2803int main() { // task test
     2804        BENCH( for ( N ) { @yield();@ } )
     2805        sout | result`ns;
     2806}
     2807\end{cfa}
     2808\captionof{figure}{\CFA context-switch benchmark}
     2809\label{f:ctx-switch}
     2810
     2811\columnbreak
     2812
     2813\vspace*{-16pt}
     2814\captionof{table}{Context switch comparison (nanoseconds)}
     2815\label{tab:ctx-switch}
     2816\begin{tabular}{@{}r*{3}{D{.}{.}{3.2}}@{}}
     2817\multicolumn{1}{@{}c}{} & \multicolumn{1}{c}{Median} &\multicolumn{1}{c}{Average} & \multicolumn{1}{c@{}}{Std Dev} \\
     2818C function              & 1.8   & 1.8   & 0.01  \\
     2819\CFA generator  & 2.4   & 2.2   & 0.25  \\
     2820\CFA Coroutine  & 36.2  & 36.2  & 0.25  \\
     2821\CFA Thread             & 93.2  & 93.5  & 2.09  \\
     2822\uC Coroutine   & 52.0  & 52.1  & 0.51  \\
     2823\uC Thread              & 96.2  & 96.3  & 0.58  \\
     2824Goroutine               & 141.0 & 141.3 & 3.39  \\
     2825Java Thread             & 374.0 & 375.8 & 10.38 \\
     2826Pthreads Thread & 361.0 & 365.3 & 13.19
     2827\end{tabular}
     2828\end{multicols}
     2829
     2830
     2831\paragraph{Mutual-Exclusion}
     2832
     2833Uncontended mutual exclusion, which frequently occurs, is measured by entering/leaving a critical section.
     2834For monitors, entering and leaving a monitor function is measured.
     2835To put the results in context, the cost of entering a non-inline function and the cost of acquiring and releasing a @pthread_mutex@ lock is also measured.
     2836Figure~\ref{f:mutex} shows the code for \CFA with all results in Table~\ref{tab:mutex}.
     2837Note, the incremental cost of bulk acquire for \CFA, which is largely a fixed cost for small numbers of mutex objects.
     2838
     2839\begin{multicols}{2}
     2840\lstset{language=CFA,moredelim=**[is][\color{red}]{@}{@},deletedelim=**[is][]{`}{`}}
     2841\begin{cfa}
     2842@monitor@ M {} m1/*, m2, m3, m4*/;
     2843void __attribute__((noinline))
     2844do_call( M & @mutex m/*, m2, m3, m4*/@ ) {}
     2845int main() {
    29632846        BENCH(
    2964                 for(size_t i=0; i<n; i++) {
    2965                         resume(s);
    2966                 },
    2967                 result
     2847                for( N ) do_call( m1/*, m2, m3, m4*/ );
    29682848        )
    2969         printf("%llu\n", result);
    2970 }
    2971 \end{cfa}
     2849        sout | result`ns;
     2850}
     2851\end{cfa}
     2852\captionof{figure}{\CFA acquire/release mutex benchmark}
     2853\label{f:mutex}
     2854
    29722855\columnbreak
    2973 \CFA Threads
    2974 \begin{cfa}
    2975 
    2976 
    2977 
    2978 
    2979 int main() {
    2980 
    2981 
    2982         BENCH(
    2983                 for(size_t i=0; i<n; i++) {
    2984                         yield();
    2985                 },
    2986                 result
    2987         )
    2988         printf("%llu\n", result);
    2989 }
    2990 \end{cfa}
     2856
     2857\vspace*{-16pt}
     2858\captionof{table}{Mutex comparison (nanoseconds)}
     2859\label{tab:mutex}
     2860\begin{tabular}{@{}r*{3}{D{.}{.}{3.2}}@{}}
     2861\multicolumn{1}{@{}c}{} & \multicolumn{1}{c}{Median} &\multicolumn{1}{c}{Average} & \multicolumn{1}{c@{}}{Std Dev} \\
     2862test-and-test-and-set lock              & 19.1  & 18.9  & 0.40  \\
     2863\CFA @mutex@ function, 1 arg.   & 45.9  & 46.6  & 1.45  \\
     2864\CFA @mutex@ function, 2 arg.   & 105.0 & 104.7 & 3.08  \\
     2865\CFA @mutex@ function, 4 arg.   & 165.0 & 167.6 & 5.65  \\
     2866\uC @monitor@ member rtn.               & 54.0  & 53.7  & 0.82  \\
     2867Java synchronized method                & 31.0  & 31.1  & 0.50  \\
     2868Pthreads Mutex Lock                             & 33.6  & 32.6  & 1.14
     2869\end{tabular}
    29912870\end{multicols}
    2992 \begin{cfa}[caption={\CFA benchmark code used to measure context-switches for coroutines and threads.},label={f:ctx-switch}]
    2993 \end{cfa}
    2994 \end{figure}
    2995 
    2996 \begin{table}
    2997 \begin{center}
    2998 \begin{tabular}{| l | S[table-format=5.2,table-number-alignment=right] | S[table-format=5.2,table-number-alignment=right] | S[table-format=5.2,table-number-alignment=right] |}
    2999 \cline{2-4}
    3000 \multicolumn{1}{c |}{} & \multicolumn{1}{c |}{ Median } &\multicolumn{1}{c |}{ Average } & \multicolumn{1}{c |}{ Standard Deviation} \\
    3001 \hline
    3002 Kernel Thread   & 241.5 & 243.86        & 5.08 \\
    3003 \CFA Coroutine  & 38            & 38            & 0    \\
    3004 \CFA Thread             & 103           & 102.96        & 2.96 \\
    3005 \uC Coroutine   & 46            & 45.86 & 0.35 \\
    3006 \uC Thread              & 98            & 99.11 & 1.42 \\
    3007 Goroutine               & 150           & 149.96        & 3.16 \\
    3008 Java Thread             & 289           & 290.68        & 8.72 \\
    3009 \hline
    3010 \end{tabular}
    3011 \end{center}
    3012 \caption{Context Switch comparison.
    3013 All numbers are in nanoseconds (\si{\nano\second})}
    3014 \label{tab:ctx-switch}
    3015 \end{table}
    3016 
    3017 \subsection{Mutual-Exclusion}
    3018 The next interesting benchmark is to measure the overhead to enter/leave a critical section.
    3019 For monitors, the simplest approach is to measure how long it takes to enter and leave a monitor routine.
    3020 Figure~\ref{f:mutex} shows the code for \CFA.
    3021 To put the results in context, the cost of entering a non-inline routine and the cost of acquiring and releasing a @pthread_mutex@ lock is also measured.
    3022 The results are shown in table \ref{tab:mutex}.
    3023 
    3024 \begin{figure}
    3025 \begin{cfa}[caption={\CFA benchmark code used to measure mutex routines.},label={f:mutex}]
    3026 monitor M {};
    3027 void __attribute__((noinline)) call( M & mutex m /*, m2, m3, m4*/ ) {}
    3028 
    3029 int main() {
    3030         M m/*, m2, m3, m4*/;
    3031         BENCH(
    3032                 for(size_t i=0; i<n; i++) {
    3033                         call(m/*, m2, m3, m4*/);
    3034                 },
    3035                 result
    3036         )
    3037         printf("%llu\n", result);
    3038 }
    3039 \end{cfa}
    3040 \end{figure}
    3041 
    3042 \begin{table}
    3043 \begin{center}
    3044 \begin{tabular}{| l | S[table-format=5.2,table-number-alignment=right] | S[table-format=5.2,table-number-alignment=right] | S[table-format=5.2,table-number-alignment=right] |}
    3045 \cline{2-4}
    3046 \multicolumn{1}{c |}{} & \multicolumn{1}{c |}{ Median } &\multicolumn{1}{c |}{ Average } & \multicolumn{1}{c |}{ Standard Deviation} \\
    3047 \hline
    3048 C routine                                               & 2             & 2             & 0    \\
    3049 FetchAdd + FetchSub                             & 26            & 26            & 0    \\
    3050 Pthreads Mutex Lock                             & 31            & 31.86 & 0.99 \\
    3051 \uC @monitor@ member routine            & 30            & 30            & 0    \\
    3052 \CFA @mutex@ routine, 1 argument        & 41            & 41.57 & 0.9  \\
    3053 \CFA @mutex@ routine, 2 argument        & 76            & 76.96 & 1.57 \\
    3054 \CFA @mutex@ routine, 4 argument        & 145           & 146.68        & 3.85 \\
    3055 Java synchronized routine                       & 27            & 28.57 & 2.6  \\
    3056 \hline
    3057 \end{tabular}
    3058 \end{center}
    3059 \caption{Mutex routine comparison.
    3060 All numbers are in nanoseconds (\si{\nano\second})}
    3061 \label{tab:mutex}
    3062 \end{table}
    3063 
    3064 \subsection{Internal Scheduling}
    3065 The internal-scheduling benchmark measures the cost of waiting on and signalling a condition variable.
    3066 Figure~\ref{f:int-sched} shows the code for \CFA, with results in table \ref{tab:int-sched}.
    3067 As with all other benchmarks, all omitted tests are functionally identical to one of these tests.
    3068 
    3069 \begin{figure}
    3070 \begin{cfa}[caption={Benchmark code for internal scheduling},label={f:int-sched}]
     2871
     2872
     2873\paragraph{External Scheduling}
     2874
     2875External scheduling is measured using a cycle of two threads calling and accepting the call using the @waitfor@ statement.
     2876Figure~\ref{f:ext-sched} shows the code for \CFA, with results in Table~\ref{tab:ext-sched}.
     2877Note, the incremental cost of bulk acquire for \CFA, which is largely a fixed cost for small numbers of mutex objects.
     2878
     2879\begin{multicols}{2}
     2880\lstset{language=CFA,moredelim=**[is][\color{red}]{@}{@},deletedelim=**[is][]{`}{`}}
     2881\vspace*{-16pt}
     2882\begin{cfa}
    30712883volatile int go = 0;
    3072 condition c;
    3073 monitor M {};
    3074 M m1;
    3075 
    3076 void __attribute__((noinline)) do_call( M & mutex a1 ) { signal(c); }
    3077 
     2884@monitor@ M {} m;
    30782885thread T {};
    3079 void ^?{}( T & mutex this ) {}
    3080 void main( T & this ) {
    3081         while(go == 0) { yield(); }
    3082         while(go == 1) { do_call(m1); }
    3083 }
    3084 int  __attribute__((noinline)) do_wait( M & mutex a1 ) {
    3085         go = 1;
    3086         BENCH(
    3087                 for(size_t i=0; i<n; i++) {
    3088                         wait(c);
    3089                 },
    3090                 result
    3091         )
    3092         printf("%llu\n", result);
    3093         go = 0;
    3094         return 0;
     2886void __attribute__((noinline))
     2887do_call( M & @mutex@ ) {}
     2888void main( T & ) {
     2889        while ( go == 0 ) { yield(); }
     2890        while ( go == 1 ) { do_call( m ); }
     2891}
     2892int __attribute__((noinline))
     2893do_wait( M & @mutex@ m ) {
     2894        go = 1; // continue other thread
     2895        BENCH( for ( N ) { @waitfor( do_call, m );@ } )
     2896        go = 0; // stop other thread
     2897        sout | result`ns;
    30952898}
    30962899int main() {
    30972900        T t;
    3098         return do_wait(m1);
    3099 }
    3100 \end{cfa}
    3101 \end{figure}
    3102 
    3103 \begin{table}
    3104 \begin{center}
    3105 \begin{tabular}{| l | S[table-format=5.2,table-number-alignment=right] | S[table-format=5.2,table-number-alignment=right] | S[table-format=5.2,table-number-alignment=right] |}
    3106 \cline{2-4}
    3107 \multicolumn{1}{c |}{} & \multicolumn{1}{c |}{ Median } &\multicolumn{1}{c |}{ Average } & \multicolumn{1}{c |}{ Standard Deviation} \\
    3108 \hline
    3109 Pthreads Condition Variable                     & 5902.5        & 6093.29       & 714.78 \\
    3110 \uC @signal@                                    & 322           & 323   & 3.36   \\
    3111 \CFA @signal@, 1 @monitor@      & 352.5 & 353.11        & 3.66   \\
    3112 \CFA @signal@, 2 @monitor@      & 430           & 430.29        & 8.97   \\
    3113 \CFA @signal@, 4 @monitor@      & 594.5 & 606.57        & 18.33  \\
    3114 Java @notify@                           & 13831.5       & 15698.21      & 4782.3 \\
    3115 \hline
     2901        do_wait( m );
     2902}
     2903\end{cfa}
     2904\captionof{figure}{\CFA external-scheduling benchmark}
     2905\label{f:ext-sched}
     2906
     2907\columnbreak
     2908
     2909\vspace*{-16pt}
     2910\captionof{table}{External-scheduling comparison (nanoseconds)}
     2911\label{tab:ext-sched}
     2912\begin{tabular}{@{}r*{3}{D{.}{.}{3.2}}@{}}
     2913\multicolumn{1}{@{}c}{} & \multicolumn{1}{c}{Median} &\multicolumn{1}{c}{Average} & \multicolumn{1}{c@{}}{Std Dev} \\
     2914\CFA @waitfor@, 1 @monitor@     & 376.4 & 376.8 & 7.63  \\
     2915\CFA @waitfor@, 2 @monitor@     & 491.4 & 492.0 & 13.31 \\
     2916\CFA @waitfor@, 4 @monitor@     & 681.0 & 681.7 & 19.10 \\
     2917\uC @_Accept@                           & 331.1 & 331.4 & 2.66
    31162918\end{tabular}
    3117 \end{center}
    3118 \caption{Internal scheduling comparison.
    3119 All numbers are in nanoseconds (\si{\nano\second}).}
    3120 \label{tab:int-sched}
    3121 \end{table}
    3122 
    3123 \subsection{External Scheduling}
    3124 The external-scheduling benchmark measures the cost of the @waitfor@ statement (@_Accept@ in \uC).
    3125 Figure~\ref{f:ext-sched} shows the code for \CFA, with results in Table~\ref{tab:ext-sched}.
    3126 As with all other benchmarks, all omitted tests are functionally identical to one of these tests.
    3127 
    3128 \begin{figure}
    3129 \begin{cfa}[caption={Benchmark code for external scheduling},label={f:ext-sched}]
     2919\end{multicols}
     2920
     2921
     2922\paragraph{Internal Scheduling}
     2923
     2924Internal scheduling is measured using a cycle of two threads signalling and waiting.
     2925Figure~\ref{f:int-sched} shows the code for \CFA, with results in Table~\ref{tab:int-sched}.
      2926Note the incremental cost of bulk acquire for \CFA, which is largely a fixed cost for small numbers of mutex objects.
      2927Java scheduling is significantly greater because the benchmark explicitly creates multiple threads to prevent the JIT from making the program sequential, \ie removing all locking.
     2928
     2929\begin{multicols}{2}
     2930\lstset{language=CFA,moredelim=**[is][\color{red}]{@}{@},deletedelim=**[is][]{`}{`}}
     2931\begin{cfa}
    31302932volatile int go = 0;
    3131 monitor M {};
    3132 M m1;
     2933@monitor@ M { @condition c;@ } m;
     2934void __attribute__((noinline))
     2935do_call( M & @mutex@ a1 ) { @signal( c );@ }
    31332936thread T {};
    3134 
    3135 void __attribute__((noinline)) do_call( M & mutex a1 ) {}
    3136 
    3137 void ^?{}( T & mutex this ) {}
    31382937void main( T & this ) {
    3139         while(go == 0) { yield(); }
    3140         while(go == 1) { do_call(m1); }
    3141 }
    3142 int  __attribute__((noinline)) do_wait( M & mutex a1 ) {
    3143         go = 1;
    3144         BENCH(
    3145                 for(size_t i=0; i<n; i++) {
    3146                         waitfor(do_call, a1);
    3147                 },
    3148                 result
    3149         )
    3150         printf("%llu\n", result);
    3151         go = 0;
    3152         return 0;
     2938        while ( go == 0 ) { yield(); }
     2939        while ( go == 1 ) { do_call( m ); }
     2940}
     2941int  __attribute__((noinline))
     2942do_wait( M & mutex m ) with(m) {
     2943        go = 1; // continue other thread
     2944        BENCH( for ( N ) { @wait( c );@ } );
     2945        go = 0; // stop other thread
     2946        sout | result`ns;
    31532947}
    31542948int main() {
    31552949        T t;
    3156         return do_wait(m1);
    3157 }
    3158 \end{cfa}
    3159 \end{figure}
    3160 
    3161 \begin{table}
    3162 \begin{center}
    3163 \begin{tabular}{| l | S[table-format=5.2,table-number-alignment=right] | S[table-format=5.2,table-number-alignment=right] | S[table-format=5.2,table-number-alignment=right] |}
    3164 \cline{2-4}
    3165 \multicolumn{1}{c |}{} & \multicolumn{1}{c |}{ Median } &\multicolumn{1}{c |}{ Average } & \multicolumn{1}{c |}{ Standard Deviation} \\
    3166 \hline
    3167 \uC @Accept@                                    & 350           & 350.61        & 3.11  \\
    3168 \CFA @waitfor@, 1 @monitor@     & 358.5 & 358.36        & 3.82  \\
    3169 \CFA @waitfor@, 2 @monitor@     & 422           & 426.79        & 7.95  \\
    3170 \CFA @waitfor@, 4 @monitor@     & 579.5 & 585.46        & 11.25 \\
    3171 \hline
     2950        do_wait( m );
     2951}
     2952\end{cfa}
     2953\captionof{figure}{\CFA Internal-scheduling benchmark}
     2954\label{f:int-sched}
     2955
     2956\columnbreak
     2957
     2958\vspace*{-16pt}
     2959\captionof{table}{Internal-scheduling comparison (nanoseconds)}
     2960\label{tab:int-sched}
     2961\bigskip
     2962
     2963\begin{tabular}{@{}r*{3}{D{.}{.}{5.2}}@{}}
     2964\multicolumn{1}{@{}c}{} & \multicolumn{1}{c}{Median} & \multicolumn{1}{c}{Average} & \multicolumn{1}{c@{}}{Std Dev} \\
     2965\CFA @signal@, 1 @monitor@      & 372.6         & 374.3         & 14.17         \\
     2966\CFA @signal@, 2 @monitor@      & 492.7         & 494.1         & 12.99         \\
     2967\CFA @signal@, 4 @monitor@      & 749.4         & 750.4         & 24.74         \\
     2968\uC @signal@                            & 320.5         & 321.0         & 3.36          \\
     2969Java @notify@                           & 10160.5       & 10169.4       & 267.71        \\
     2970Pthreads Cond. Variable         & 4949.6        & 5065.2        & 363
    31722971\end{tabular}
    3173 \end{center}
    3174 \caption{External scheduling comparison.
    3175 All numbers are in nanoseconds (\si{\nano\second}).}
    3176 \label{tab:ext-sched}
    3177 \end{table}
    3178 
    3179 
    3180 \subsection{Object Creation}
    3181 Finally, the last benchmark measures the cost of creating concurrent objects.
    3182 Figure~\ref{f:creation} shows the code for @pthread@s and \CFA threads, with results shown in Table~\ref{tab:creation}.
    3183 As with all other benchmarks, all omitted tests are functionally identical to one of these tests.
    3184 The only note here is that the call stacks of \CFA coroutines are created lazily; therefore, without priming the coroutine, the creation cost is very low.
    3185 
    3186 \begin{figure}
    3187 \begin{center}
    3188 @pthread@
    3189 \begin{cfa}
    3190 int main() {
    3191         BENCH(
    3192                 for(size_t i=0; i<n; i++) {
    3193                         pthread_t thread;
    3194                         if(pthread_create(&thread,NULL,foo,NULL)!=0) {
    3195                                 perror( "failure" );
    3196                                 return 1;
    3197                         }
    3198 
    3199                         if(pthread_join(thread, NULL)!=0) {
    3200                                 perror( "failure" );
    3201                                 return 1;
    3202                         }
    3203                 },
    3204                 result
    3205         )
    3206         printf("%llu\n", result);
    3207 }
    3208 \end{cfa}
    3209 
    3210 
    3211 
    3212 \CFA Threads
    3213 \begin{cfa}
    3214 int main() {
    3215         BENCH(
    3216                 for(size_t i=0; i<n; i++) {
    3217                         MyThread m;
    3218                 },
    3219                 result
    3220         )
    3221         printf("%llu\n", result);
    3222 }
    3223 \end{cfa}
    3224 \end{center}
    3225 \caption{Benchmark code for \protect\lstinline|pthread|s and \CFA to measure object creation}
    3226 \label{f:creation}
    3227 \end{figure}
    3228 
    3229 \begin{table}
    3230 \begin{center}
    3231 \begin{tabular}{| l | S[table-format=5.2,table-number-alignment=right] | S[table-format=5.2,table-number-alignment=right] | S[table-format=5.2,table-number-alignment=right] |}
    3232 \cline{2-4}
    3233 \multicolumn{1}{c |}{} & \multicolumn{1}{c |}{ Median } &\multicolumn{1}{c |}{ Average } & \multicolumn{1}{c |}{ Standard Deviation} \\
    3234 \hline
    3235 Pthreads                        & 26996 & 26984.71      & 156.6  \\
    3236 \CFA Coroutine Lazy     & 6             & 5.71  & 0.45   \\
    3237 \CFA Coroutine Eager    & 708           & 706.68        & 4.82   \\
    3238 \CFA Thread                     & 1173.5        & 1176.18       & 15.18  \\
    3239 \uC Coroutine           & 109           & 107.46        & 1.74   \\
    3240 \uC Thread                      & 526           & 530.89        & 9.73   \\
    3241 Goroutine                       & 2520.5        & 2530.93       & 61.56  \\
    3242 Java Thread                     & 91114.5       & 92272.79      & 961.58 \\
    3243 \hline
    3244 \end{tabular}
    3245 \end{center}
    3246 \caption{Creation comparison.
    3247 All numbers are in nanoseconds (\si{\nano\second}).}
    3248 \label{tab:creation}
    3249 \end{table}
    3250 
     2972\end{multicols}
    32512973
    32522974
    32532975\section{Conclusion}
    3254 This paper has achieved a minimal concurrency \textbf{API} that is simple, efficient, and usable as the basis for higher-level features.
    3255 The approach presented is based on a lightweight thread-system for parallelism, which sits on top of clusters of processors.
    3256 This M:N model is judged to be both more efficient and more flexible for users.
    3257 Furthermore, this paper introduces monitors as the main concurrency tool for users.
    3258 This paper also offers a novel approach allowing multiple monitors to be accessed simultaneously without running into the Nested Monitor Problem~\cite{Lister77}.
    3259 It also offers a full implementation of the concurrency runtime written entirely in \CFA, effectively the largest \CFA code base to date.
    3260 
    3261 
    3262 % ======================================================================
    3263 % ======================================================================
     2976
     2977Advanced control-flow will always be difficult, especially when there is temporal ordering and nondeterminism.
     2978However, many systems exacerbate the difficulty through their presentation mechanisms.
     2979This paper shows it is possible to present a hierarchy of control-flow features, generator, coroutine, thread, and monitor, providing an integrated set of high-level, efficient, and maintainable control-flow features.
      2980Eliminated from \CFA are spurious wakeup and barging, which are nonintuitive and lead to errors, as well as the need to work with a bewildering set of low-level locks and acquisition techniques.
     2981\CFA high-level race-free monitors and tasks provide the core mechanisms for mutual exclusion and synchronization, without having to resort to magic qualifiers like @volatile@/@atomic@.
     2982Extending these mechanisms to handle high-level deadlock-free bulk acquire across both mutual exclusion and synchronization is a unique contribution.
     2983The \CFA runtime provides concurrency based on a preemptive M:N user-level threading-system, executing in clusters, which encapsulate scheduling of work on multiple kernel threads providing parallelism.
     2984The M:N model is judged to be efficient and provide greater flexibility than a 1:1 threading model.
     2985These concepts and the \CFA runtime-system are written in the \CFA language, extensively leveraging the \CFA type-system, which demonstrates the expressiveness of the \CFA language.
     2986Performance comparisons with other concurrent systems/languages show the \CFA approach is competitive across all low-level operations, which translates directly into good performance in well-written concurrent applications.
     2987C programmers should feel comfortable using these mechanisms for developing complex control-flow in applications, with the ability to obtain maximum available performance by selecting mechanisms at the appropriate level of need.
     2988
     2989
    32642990\section{Future Work}
    3265 % ======================================================================
    3266 % ======================================================================
    3267 
    3268 \subsection{Performance} \label{futur:perf}
    3269 This paper presents a first implementation of the \CFA concurrency runtime.
    3270 Therefore, there is still significant work to improve performance.
    3271 Many of the data structures and algorithms may change in the future to more efficient versions.
    3272 For example, the number of monitors in a single bulk acquire is bound only by the stack size, which is probably unnecessarily generous.
    3273 It may be possible that limiting the number helps increase performance.
    3274 However, it is not obvious that the benefit would be significant.
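For concreteness, the multi-monitor cases in the benchmarks correspond to widening the @mutex@ parameter list, as hinted by the commented-out parameters in Figure~\ref{f:mutex}; a sketch of the 4-monitor variant:
\begin{cfa}
monitor M {};
M m1, m2, m3, m4;
// all four monitors are acquired on entry and released on exit as one bulk operation
void __attribute__((noinline)) call( M & mutex a, M & mutex b,
                                     M & mutex c, M & mutex d ) {}
// usage: call( m1, m2, m3, m4 );
\end{cfa}
Each additional monitor adds a largely fixed increment to the per-call cost, which is why a modest bound on the number of monitors may be acceptable.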
    3275 
    3276 \subsection{Flexible Scheduling} \label{futur:sched}
     2991
     2992While control flow in \CFA has a strong start, development is still underway to complete a number of missing features.
     2993
     2994\paragraph{Flexible Scheduling}
     2995\label{futur:sched}
     2996
    32772997An important part of concurrency is scheduling.
    32782998Different scheduling algorithms can affect performance (both in terms of average and variation).
    32792999However, no single scheduler is optimal for all workloads and therefore there is value in being able to change the scheduler for given programs.
    3280 One solution is to offer various tweaking options to users, allowing the scheduler to be adjusted to the requirements of the workload.
    3281 However, to be truly flexible, it would be interesting to allow users to supply arbitrary data and arbitrary scheduling algorithms.
    3282 For example, a web server could attach Type-of-Service information to threads and have a ``ToS aware'' scheduling algorithm tailored to this specific web server.
    3283 This path of flexible schedulers will be explored for \CFA.
    3284 
    3285 \subsection{Non-Blocking I/O} \label{futur:nbio}
    3286 While most of the parallelism tools are aimed at data parallelism and control-flow parallelism, many modern workloads are not bound by computation but by IO operations, a common case being web servers and XaaS (anything as a service).
    3287 These types of workloads often require significant engineering to amortize the costs of blocking IO operations.
    3288 At its core, non-blocking I/O is an operating-system-level feature that allows queuing IO operations (\eg network operations) and registering for notifications instead of waiting for requests to complete.
    3289 In this context, the role of the language is to make non-blocking IO easily available with low overhead.
    3290 The current trend is to use asynchronous programming with tools like callbacks and/or futures and promises, as seen in frameworks like Node.js~\cite{NodeJs} for JavaScript, Spring MVC~\cite{SpringMVC} for Java, and Django~\cite{Django} for Python.
    3291 However, while these are valid solutions, they lead to code that is harder to read and maintain because it is much less linear.
    3292 
    3293 \subsection{Other Concurrency Tools} \label{futur:tools}
    3294 While monitors offer a flexible and powerful concurrent core for \CFA, other concurrency tools are also necessary for a complete multi-paradigm concurrency package.
    3295 Examples of such tools can include simple locks and condition variables, futures and promises~\cite{promises}, executors and actors.
    3296 These additional features are useful when monitors offer a level of abstraction that is inadequate for certain tasks.
    3297 
    3298 \subsection{Implicit Threading} \label{futur:implcit}
    3299 Simpler applications can benefit greatly from implicit parallelism, \ie parallelism that does not rely on the user to write concurrency.
    3301 This type of parallelism can be achieved both at the language level and at the library level.
    3302 The canonical example of implicit parallelism is the parallel for loop, which is the simplest example of divide-and-conquer algorithms~\cite{uC++book}.
    3303 Table~\ref{f:parfor} shows three different code examples that accomplish point-wise sums of large arrays.
    3304 Note that none of these examples explicitly declare any concurrency or parallelism objects.
    3305 
    3306 \begin{table}
    3307 \begin{center}
    3308 \begin{tabular}[t]{|c|c|c|}
    3309 Sequential & Library Parallel & Language Parallel \\
    3310 \begin{cfa}[tabsize=3]
    3311 void big_sum(
    3312         int* a, int* b,
    3313         int* o,
    3314         size_t len)
    3315 {
    3316         for(
    3317                 int i = 0;
    3318                 i < len;
    3319                 ++i )
    3320         {
    3321                 o[i]=a[i]+b[i];
    3322         }
    3323 }
    3324 
    3325 
    3326 
    3327 
    3328 
    3329 int* a[10000];
    3330 int* b[10000];
    3331 int* c[10000];
    3332 //... fill in a & b
    3333 big_sum(a,b,c,10000);
    3334 \end{cfa} &\begin{cfa}[tabsize=3]
    3335 void big_sum(
    3336         int* a, int* b,
    3337         int* o,
    3338         size_t len)
    3339 {
    3340         range ar(a, a+len);
    3341         range br(b, b+len);
    3342         range or(o, o+len);
    3343         parfor( ai, bi, oi,
    3344         [](     int* ai,
    3345                 int* bi,
    3346                 int* oi)
    3347         {
    3348                 oi=ai+bi;
    3349         });
    3350 }
    3351 
    3352 
    3353 int* a[10000];
    3354 int* b[10000];
    3355 int* c[10000];
    3356 //... fill in a & b
    3357 big_sum(a,b,c,10000);
    3358 \end{cfa}&\begin{cfa}[tabsize=3]
    3359 void big_sum(
    3360         int* a, int* b,
    3361         int* o,
    3362         size_t len)
    3363 {
    3364         parfor (ai,bi,oi)
    3365             in (a, b, o )
    3366         {
    3367                 oi = ai + bi;
    3368         }
    3369 }
    3370 
    3371 
    3372 
    3373 
    3374 
    3375 
    3376 
    3377 int* a[10000];
    3378 int* b[10000];
    3379 int* c[10000];
    3380 //... fill in a & b
    3381 big_sum(a,b,c,10000);
    3382 \end{cfa}
    3383 \end{tabular}
    3384 \end{center}
    3385 \caption{For loop to sum numbers: sequential, library parallelism, and language parallelism.}
    3386 \label{f:parfor}
    3387 \end{table}
    3388 
    3389 Implicit parallelism is a restrictive solution and therefore has its limitations.
    3390 However, it is a quick and simple approach to parallelism, which may very well be sufficient for smaller applications and reduces the amount of boilerplate needed to start benefiting from parallelism in modern CPUs.
    3391 
    3392 
    3393 % A C K N O W L E D G E M E N T S
    3394 % -------------------------------
     3000One solution is to offer various tuning options, allowing the scheduler to be adjusted to the requirements of the workload.
     3001However, to be truly flexible, a pluggable scheduler is necessary.
     3002Currently, the \CFA pluggable scheduler is too simple to handle complex scheduling, \eg quality of service and real-time, where the scheduler must interact with mutex objects to deal with issues like priority inversion~\cite{Buhr00b}.
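To make the pluggable-scheduler direction concrete, one \emph{hypothetical} shape for such an interface is sketched below; these names are illustrative only and not part of the current \CFA runtime:
\begin{cfa}
// hypothetical pluggable-scheduler interface (illustrative names only)
trait is_scheduler( otype S, dtype T ) {
	void push( S &, T * );                  // make a thread ready
	T * pop( S & );                         // select the next thread to run
};
\end{cfa}
A ToS-aware web-server scheduler, for example, could then be supplied by the application rather than baked into the runtime.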
     3003
     3004\paragraph{Non-Blocking I/O}
     3005\label{futur:nbio}
     3006
      3007Many modern workloads are not bound by computation but by IO operations, a common case being web servers and XaaS~\cite{XaaS} (anything as a service).
      3008These types of workloads require significant engineering to amortize the costs of blocking IO operations.
      3009At its core, non-blocking I/O is an operating-system-level feature for queuing IO operations, \eg network operations, and registering for notifications instead of waiting for requests to complete.
     3010Current trends use asynchronous programming like callbacks, futures, and/or promises, \eg Node.js~\cite{NodeJs} for JavaScript, Spring MVC~\cite{SpringMVC} for Java, and Django~\cite{Django} for Python.
      3011However, these solutions lead to code that is hard to create, read, and maintain.
     3012A better approach is to tie non-blocking I/O into the concurrency system to provide ease of use with low overhead, \eg thread-per-connection web-services.
     3013A non-blocking I/O library is currently under development for \CFA.
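The difference in programming model can be sketched as follows, where @async_read@/@async_write@ are placeholder callback-style operations (not a real API), while the thread-per-connection version uses ordinary blocking calls that the runtime would multiplex onto non-blocking OS operations:
\begin{cfa}
// callback style: control flow is inverted and split across handlers
void on_read( int fd, char buf[], ssize_t len );
void on_write( int fd ) { async_read( fd, on_read ); }
void on_read( int fd, char buf[], ssize_t len ) {
	if ( len > 0 ) async_write( fd, buf, len, on_write );
}
// thread-per-connection style: linear, readable control flow
thread Connection { int fd; };
void main( Connection & c ) {
	char buf[1024];
	for ( ;; ) {
		ssize_t len = read( c.fd, buf, sizeof(buf) );   // appears blocking
		if ( len <= 0 ) break;
		write( c.fd, buf, len );
	}
}
\end{cfa}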
     3014
     3015\paragraph{Other Concurrency Tools}
     3016\label{futur:tools}
     3017
     3018While monitors offer flexible and powerful concurrency for \CFA, other concurrency tools are also necessary for a complete multi-paradigm concurrency package.
      3019Examples of such tools can include futures and promises~\cite{promises}, executors, and actors.
     3020These additional features are useful for applications that can be constructed without shared data and direct blocking.
     3021As well, new \CFA extensions should make it possible to create a uniform interface for virtually all mutual exclusion, including monitors and low-level locks.
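As a sketch of what such a uniform interface could look like (hypothetical; the trait and its members are illustrative, not an existing \CFA API):
\begin{cfa}
// hypothetical uniform locking trait covering monitors and low-level locks
trait is_lock( dtype L ) {
	void lock( L & );
	void unlock( L & );
};
forall( dtype L | is_lock( L ) )
void with_lock( L & l, void (*critical)( void ) ) {
	lock( l ); critical(); unlock( l );
}
\end{cfa}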
     3022
     3023\paragraph{Implicit Threading}
     3024\label{futur:implcit}
     3025
      3026Basic concurrent (embarrassingly parallel) applications can benefit greatly from implicit concurrency, where sequential programs are converted to concurrent ones, possibly with some help from pragmas to guide the conversion.
      3027This type of concurrency can be achieved both at the language level and at the library level.
      3028The canonical example of implicit concurrency is concurrent nested @for@ loops, which are amenable to divide-and-conquer algorithms~\cite{uC++book}.
      3029The \CFA language features should make it possible to develop a reasonable number of implicit concurrency mechanisms to solve basic HPC data-concurrency problems.
     3030However, implicit concurrency is a restrictive solution with significant limitations, so it can never replace explicit concurrent programming.
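The flavor of such a mechanism is illustrated by the speculative language-level @parfor@ sketch below (the syntax is illustrative only), which sums two arrays point-wise without declaring any explicit concurrency objects:
\begin{cfa}
void big_sum( int * a, int * b, int * o, size_t len ) {
	parfor ( ai, bi, oi ) in ( a, b, o ) {  // speculative syntax; ranges inferred
		oi = ai + bi;                       // iterations may execute concurrently
	}
}
\end{cfa}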
     3031
     3032
    33953033\section{Acknowledgements}
    33963034
    3397 Thanks to Aaron Moss, Rob Schluntz and Andrew Beach for their work on the \CFA project as well as all the discussions which helped concretize the ideas in this paper.
    3398 Partial funding was supplied by the Natural Sciences and Engineering Research Council of Canada and a corporate partnership with Huawei Ltd.
    3399 
    3400 
    3401 % B I B L I O G R A P H Y
    3402 % -----------------------------
    3403 %\bibliographystyle{plain}
      3035The authors would like to recognize the design assistance of Aaron Moss, Rob Schluntz, Andrew Beach, and Michael Brooks on the features described in this paper.
     3036Funding for this project has been provided by Huawei Ltd.\ (\url{http://www.huawei.com}). %, and Peter Buhr is partially funded by the Natural Sciences and Engineering Research Council of Canada.
     3037
     3038{%
     3039\fontsize{9bp}{12bp}\selectfont%
    34043040\bibliography{pl,local}
    3405 
     3041}%
    34063042
    34073043\end{document}
  • doc/papers/concurrency/annex/local.bib

    r7951100 rb067d9b  
    4646    title       = {Thread Building Blocks},
    4747    howpublished= {Intel, \url{https://www.threadingbuildingblocks.org}},
    48     note        = {Accessed: 2018-3},
     48    optnote     = {Accessed: 2018-3},
    4949}
    5050
     
    6666}
    6767
    68 @article{BankTransfer,
     68@misc{BankTransfer,
    6969        key     = {Bank Transfer},
    7070        keywords        = {Bank Transfer},
    7171        title   = {Bank Account Transfer Problem},
    72         publisher       = {Wiki Wiki Web},
    73         address = {http://wiki.c2.com},
     72        howpublished    = {Wiki Wiki Web, \url{http://wiki.c2.com/?BankAccountTransferProblem}},
    7473        year            = 2010
    7574}
  • doc/papers/concurrency/figures/ext_monitor.fig

    r7951100 rb067d9b  
    88-2
    991200 2
    10 5 1 0 1 -1 -1 0 0 -1 0.000 0 1 0 0 3150.000 3450.000 3150 3150 2850 3450 3150 3750
    11 5 1 0 1 -1 -1 0 0 -1 0.000 0 1 0 0 3150.000 4350.000 3150 4050 2850 4350 3150 4650
    12 6 5850 1950 6150 2250
    13 1 3 0 1 -1 -1 0 0 -1 0.000 1 0.0000 6000 2100 105 105 6000 2100 6105 2205
    14 4 1 -1 0 0 0 10 0.0000 2 105 90 6000 2160 d\001
     105 1 0 1 -1 -1 0 0 -1 0.000 0 1 0 0 1500.000 3600.000 1500 3300 1200 3600 1500 3900
     115 1 0 1 -1 -1 0 0 -1 0.000 0 1 0 0 1500.000 4500.000 1500 4200 1200 4500 1500 4800
     126 4200 2100 4500 2400
     131 3 0 1 -1 -1 0 0 -1 0.000 1 0.0000 4350 2250 105 105 4350 2250 4455 2355
     144 1 -1 0 0 0 10 0.0000 2 105 90 4350 2310 d\001
    1515-6
    16 6 5100 2100 5400 2400
    17 1 3 0 1 -1 -1 1 0 4 0.000 1 0.0000 5250 2250 105 105 5250 2250 5355 2250
    18 4 1 -1 0 0 0 10 0.0000 2 105 120 5250 2295 X\001
     166 4200 1800 4500 2100
     171 3 0 1 -1 -1 0 0 -1 0.000 1 0.0000 4350 1950 105 105 4350 1950 4455 2055
     184 1 -1 0 0 0 10 0.0000 2 105 90 4350 2010 b\001
    1919-6
    20 6 5100 1800 5400 2100
    21 1 3 0 1 -1 -1 1 0 4 0.000 1 0.0000 5250 1950 105 105 5250 1950 5355 1950
    22 4 1 -1 0 0 0 10 0.0000 2 105 120 5250 2010 Y\001
     206 1420 5595 5625 5805
     211 3 0 1 -1 -1 0 0 20 0.000 1 0.0000 1500 5700 80 80 1500 5700 1580 5780
     221 3 0 1 -1 -1 0 0 -1 0.000 1 0.0000 2850 5700 105 105 2850 5700 2955 5805
     231 3 0 1 -1 -1 0 0 4 0.000 1 0.0000 4350 5700 105 105 4350 5700 4455 5805
     244 0 -1 0 0 0 12 0.0000 2 135 1035 3075 5775 blocked task\001
     254 0 -1 0 0 0 12 0.0000 2 135 870 1650 5775 active task\001
     264 0 -1 0 0 0 12 0.0000 2 135 1050 4575 5775 routine mask\001
    2327-6
    24 6 5850 1650 6150 1950
    25 1 3 0 1 -1 -1 0 0 -1 0.000 1 0.0000 6000 1800 105 105 6000 1800 6105 1905
    26 4 1 -1 0 0 0 10 0.0000 2 105 90 6000 1860 b\001
     286 3450 1950 3750 2550
     291 3 0 1 -1 -1 1 0 4 0.000 1 0.0000 3600 2100 105 105 3600 2100 3705 2100
     302 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
     31         3450 1950 3750 1950 3750 2550 3450 2550 3450 1950
     324 1 4 0 0 0 10 0.0000 2 105 120 3600 2160 Y\001
    2733-6
    28 6 3070 5445 7275 5655
    29 1 3 0 1 -1 -1 0 0 20 0.000 1 0.0000 3150 5550 80 80 3150 5550 3230 5630
    30 1 3 0 1 -1 -1 0 0 -1 0.000 1 0.0000 4500 5550 105 105 4500 5550 4605 5655
    31 1 3 0 1 -1 -1 0 0 4 0.000 1 0.0000 6000 5550 105 105 6000 5550 6105 5655
    32 4 0 -1 0 0 0 12 0.0000 2 135 1035 4725 5625 blocked task\001
    33 4 0 -1 0 0 0 12 0.0000 2 135 870 3300 5625 active task\001
    34 4 0 -1 0 0 0 12 0.0000 2 135 1050 6225 5625 routine mask\001
     346 3450 2250 3750 2550
     351 3 0 1 -1 -1 1 0 4 0.000 1 0.0000 3600 2400 105 105 3600 2400 3705 2400
     364 1 4 0 0 0 10 0.0000 2 105 120 3600 2445 X\001
    3537-6
    36 1 3 0 1 -1 -1 0 0 -1 0.000 1 0.0000 3300 3600 105 105 3300 3600 3405 3705
    37 1 3 0 1 -1 -1 0 0 -1 0.000 1 0.0000 3600 3600 105 105 3600 3600 3705 3705
    38 1 3 0 1 -1 -1 0 0 -1 0.000 1 0.0000 6600 3900 105 105 6600 3900 6705 4005
    39 1 3 0 1 -1 -1 0 0 -1 0.000 1 0.0000 6900 3900 105 105 6900 3900 7005 4005
    40 1 3 0 1 -1 -1 0 0 -1 0.000 1 0.0000 6000 2700 105 105 6000 2700 6105 2805
    41 1 3 0 1 -1 -1 0 0 -1 0.000 1 0.0000 6000 2400 105 105 6000 2400 6105 2505
    42 1 3 0 1 -1 -1 0 0 20 0.000 1 0.0000 5100 4575 80 80 5100 4575 5180 4655
     381 3 0 1 -1 -1 0 0 -1 0.000 1 0.0000 1650 3750 105 105 1650 3750 1755 3855
     391 3 0 1 -1 -1 0 0 -1 0.000 1 0.0000 1950 3750 105 105 1950 3750 2055 3855
     401 3 0 1 -1 -1 0 0 -1 0.000 1 0.0000 4950 4050 105 105 4950 4050 5055 4155
     411 3 0 1 -1 -1 0 0 -1 0.000 1 0.0000 5250 4050 105 105 5250 4050 5355 4155
     421 3 0 1 -1 -1 0 0 -1 0.000 1 0.0000 4350 2850 105 105 4350 2850 4455 2955
     431 3 0 1 -1 -1 0 0 -1 0.000 1 0.0000 4350 2550 105 105 4350 2550 4455 2655
     441 3 0 1 -1 -1 0 0 20 0.000 1 0.0000 3450 4725 80 80 3450 4725 3530 4805
    43452 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
    44          4050 2925 5475 2925 5475 3225 4050 3225 4050 2925
     46         2400 3075 3825 3075 3825 3375 2400 3375 2400 3075
    45472 1 0 1 -1 -1 0 0 -1 0.000 0 0 -1 0 0 4
    46          3150 3750 3750 3750 3750 4050 3150 4050
     48         1500 3900 2100 3900 2100 4200 1500 4200
    47492 1 0 1 -1 -1 0 0 -1 0.000 0 0 -1 0 0 3
    48          3150 3450 3750 3450 3900 3675
     50         1500 3600 2100 3600 2250 3825
    49512 1 0 1 -1 -1 0 0 -1 0.000 0 0 -1 0 0 2
    50          3750 3150 3600 3375
     52         2100 3300 1950 3525
    51532 1 0 1 -1 -1 0 0 -1 0.000 0 0 -1 0 0 3
    52          3150 4350 3750 4350 3900 4575
     54         1500 4500 2100 4500 2250 4725
    53552 1 0 1 -1 -1 0 0 -1 0.000 0 0 -1 0 0 2
    54          3750 4050 3600 4275
     56         2100 4200 1950 4425
    55572 1 0 1 -1 -1 0 0 -1 0.000 0 0 -1 0 0 4
    56          3150 4650 3750 4650 3750 4950 4950 4950
     58         1500 4800 2100 4800 2100 5100 3300 5100
    57592 1 0 1 -1 -1 0 0 -1 0.000 0 0 -1 0 0 2
    58          6450 3750 6300 3975
     60         4800 3900 4650 4125
    59612 1 0 1 -1 -1 0 0 -1 0.000 0 0 -1 0 0 2
    60          4950 4950 5175 5100
     62         3300 5100 3525 5250
    61632 1 0 1 -1 -1 0 0 -1 0.000 0 0 -1 0 0 9
    62          5250 4950 6450 4950 6450 4050 7050 4050 7050 3750 6450 3750
    63          6450 2850 6150 2850 6150 1650
     64         3600 5100 4800 5100 4800 4200 5400 4200 5400 3900 4800 3900
     65         4800 3000 4500 3000 4500 1800
    64662 2 1 1 -1 -1 0 0 -1 4.000 0 0 0 0 0 5
    65          5850 4200 5850 3300 4350 3300 4350 4200 5850 4200
    66 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 1 2
     67         4200 4350 4200 3450 2700 3450 2700 4350 4200 4350
     682 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
    6769        1 1 1.00 60.00 120.00
    68         7 1 1.00 60.00 120.00
    69          5250 3150 5250 2400
     70         3600 3225 3600 2550
     712 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
     72         4050 3000 4500 3150
    70732 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
    71          3150 3150 3750 3150 3750 2850 5700 2850 5700 1650
    72 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
    73          5700 2850 6150 3000
    74 2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
    75          5100 1800 5400 1800 5400 2400 5100 2400 5100 1800
    76 4 1 -1 0 0 0 10 0.0000 2 75 75 6000 2745 a\001
    77 4 1 -1 0 0 0 10 0.0000 2 75 75 6000 2445 c\001
    78 4 1 -1 0 0 0 12 0.0000 2 135 315 5100 5325 exit\001
    79 4 1 -1 0 0 0 12 0.0000 2 135 135 3300 3075 A\001
    80 4 1 -1 0 0 0 12 0.0000 2 135 795 3300 4875 condition\001
    81 4 1 -1 0 0 0 12 0.0000 2 135 135 3300 5100 B\001
    82 4 0 -1 0 0 0 12 0.0000 2 135 420 6600 3675 stack\001
    83 4 0 -1 0 0 0 12 0.0000 2 180 750 6600 3225 acceptor/\001
    84 4 0 -1 0 0 0 12 0.0000 2 180 750 6600 3450 signalled\001
    85 4 1 -1 0 0 0 12 0.0000 2 135 795 3300 2850 condition\001
    86 4 1 -1 0 0 0 12 0.0000 2 165 420 6000 1350 entry\001
    87 4 1 -1 0 0 0 12 0.0000 2 135 495 6000 1575 queue\001
    88 4 0 -1 0 0 0 12 0.0000 2 135 525 6300 2400 arrival\001
    89 4 0 -1 0 0 0 12 0.0000 2 135 630 6300 2175 order of\001
    90 4 1 -1 0 0 0 12 0.0000 2 135 525 5100 3675 shared\001
    91 4 1 -1 0 0 0 12 0.0000 2 135 735 5100 3975 variables\001
    92 4 0 0 50 -1 0 11 0.0000 2 165 855 4275 3150 Acceptables\001
    93 4 0 0 50 -1 0 11 0.0000 2 120 165 5775 2700 W\001
    94 4 0 0 50 -1 0 11 0.0000 2 120 135 5775 2400 X\001
    95 4 0 0 50 -1 0 11 0.0000 2 120 105 5775 2100 Z\001
    96 4 0 0 50 -1 0 11 0.0000 2 120 135 5775 1800 Y\001
     74         1500 3300 2100 3300 2100 3000 4050 3000 4050 1800
     754 1 -1 0 0 0 10 0.0000 2 75 75 4350 2895 a\001
     764 1 -1 0 0 0 10 0.0000 2 75 75 4350 2595 c\001
     774 1 -1 0 0 0 12 0.0000 2 135 315 3450 5475 exit\001
     784 1 -1 0 0 0 12 0.0000 2 135 135 1650 3225 A\001
     794 1 -1 0 0 0 12 0.0000 2 135 795 1650 5025 condition\001
     804 1 -1 0 0 0 12 0.0000 2 135 135 1650 5250 B\001
     814 0 -1 0 0 0 12 0.0000 2 135 420 4950 3825 stack\001
     824 0 -1 0 0 0 12 0.0000 2 180 750 4950 3375 acceptor/\001
     834 0 -1 0 0 0 12 0.0000 2 180 750 4950 3600 signalled\001
     844 1 -1 0 0 0 12 0.0000 2 135 795 1650 3000 condition\001
     854 0 -1 0 0 0 12 0.0000 2 135 525 4650 2550 arrival\001
     864 0 -1 0 0 0 12 0.0000 2 135 630 4650 2325 order of\001
     874 1 -1 0 0 0 12 0.0000 2 135 525 3450 3825 shared\001
     884 1 -1 0 0 0 12 0.0000 2 135 735 3450 4125 variables\001
     894 0 4 50 -1 0 11 0.0000 2 120 135 4075 2025 X\001
     904 0 4 50 -1 0 11 0.0000 2 120 135 4075 2325 Y\001
     914 0 4 50 -1 0 11 0.0000 2 120 135 4075 2625 Y\001
     924 0 4 50 -1 0 11 0.0000 2 120 135 4075 2925 X\001
     934 0 -1 0 0 3 12 0.0000 2 150 540 4950 4425 urgent\001
     944 1 0 50 -1 0 11 0.0000 2 165 600 3075 3300 accepted\001
     954 1 -1 0 0 0 12 0.0000 2 165 960 4275 1725 entry queue\001
  • doc/papers/concurrency/figures/monitor.fig

    r7951100 rb067d9b  
    88-2
    991200 2
    10 5 1 0 1 -1 -1 0 0 -1 0.000 0 1 0 0 1500.000 2700.000 1500 2400 1200 2700 1500 3000
    11 5 1 0 1 -1 -1 0 0 -1 0.000 0 1 0 0 1500.000 3600.000 1500 3300 1200 3600 1500 3900
    12 6 4200 1200 4500 1500
    13 1 3 0 1 -1 -1 0 0 -1 0.000 1 0.0000 4350 1350 105 105 4350 1350 4455 1455
    14 4 1 -1 0 0 0 10 0.0000 2 105 90 4350 1410 d\001
     105 1 0 1 -1 -1 0 0 -1 0.000 0 1 0 0 1500.000 3300.000 1500 3000 1200 3300 1500 3600
     115 1 0 1 -1 -1 0 0 -1 0.000 0 1 0 0 1500.000 4200.000 1500 3900 1200 4200 1500 4500
     126 1350 5250 5325 5550
     131 3 0 1 -1 -1 0 0 20 0.000 1 0.0000 1500 5400 80 80 1500 5400 1580 5480
     141 3 0 1 -1 -1 0 0 -1 0.000 1 0.0000 2850 5400 105 105 2850 5400 2955 5505
     151 3 0 1 -1 -1 0 0 4 0.000 1 0.0000 4350 5400 105 105 4350 5400 4455 5505
     164 0 -1 0 0 0 12 0.0000 2 180 765 4575 5475 duplicate\001
     174 0 -1 0 0 0 12 0.0000 2 135 1035 3075 5475 blocked task\001
     184 0 -1 0 0 0 12 0.0000 2 135 870 1650 5475 active task\001
    1519-6
    16 6 4200 900 4500 1200
    17 1 3 0 1 -1 -1 0 0 -1 0.000 1 0.0000 4350 1050 105 105 4350 1050 4455 1155
    18 4 1 -1 0 0 0 10 0.0000 2 105 90 4350 1110 b\001
     206 4200 1800 4500 2100
     211 3 0 1 -1 -1 0 0 -1 0.000 1 0.0000 4350 1950 105 105 4350 1950 4455 2055
     224 1 -1 0 0 0 10 0.0000 2 105 90 4350 2010 d\001
    1923-6
    20 6 2400 1500 2700 1800
    21 1 3 0 1 -1 -1 1 0 4 0.000 1 0.0000 2550 1650 105 105 2550 1650 2655 1650
    22 4 1 -1 0 0 0 10 0.0000 2 105 90 2550 1710 b\001
     246 4200 1500 4500 1800
     251 3 0 1 -1 -1 0 0 -1 0.000 1 0.0000 4350 1650 105 105 4350 1650 4455 1755
     264 1 -1 0 0 0 10 0.0000 2 105 90 4350 1710 b\001
    2327-6
    24 6 2400 1800 2700 2100
    25 1 3 0 1 -1 -1 1 0 4 0.000 1 0.0000 2550 1950 105 105 2550 1950 2655 1950
    26 4 1 -1 0 0 0 10 0.0000 2 75 75 2550 1995 a\001
    27 -6
    28 6 3300 1500 3600 1800
    29 1 3 0 1 -1 -1 1 0 4 0.000 1 0.0000 3450 1650 105 105 3450 1650 3555 1650
    30 4 1 -1 0 0 0 10 0.0000 2 105 90 3450 1710 d\001
    31 -6
    32 6 1350 4650 5325 4950
    33 1 3 0 1 -1 -1 0 0 20 0.000 1 0.0000 1500 4800 80 80 1500 4800 1580 4880
    34 1 3 0 1 -1 -1 0 0 -1 0.000 1 0.0000 2850 4800 105 105 2850 4800 2955 4905
    35 1 3 0 1 -1 -1 0 0 4 0.000 1 0.0000 4350 4800 105 105 4350 4800 4455 4905
    36 4 0 -1 0 0 0 12 0.0000 2 180 765 4575 4875 duplicate\001
    37 4 0 -1 0 0 0 12 0.0000 2 135 1035 3075 4875 blocked task\001
    38 4 0 -1 0 0 0 12 0.0000 2 135 870 1650 4875 active task\001
    39 -6
    40 1 3 0 1 -1 -1 0 0 -1 0.000 1 0.0000 1650 2850 105 105 1650 2850 1755 2955
    41 1 3 0 1 -1 -1 0 0 -1 0.000 1 0.0000 1950 2850 105 105 1950 2850 2055 2955
    42 1 3 0 1 -1 -1 0 0 -1 0.000 1 0.0000 4950 3150 105 105 4950 3150 5055 3255
    43 1 3 0 1 -1 -1 0 0 -1 0.000 1 0.0000 5250 3150 105 105 5250 3150 5355 3255
    44 1 3 0 1 -1 -1 0 0 -1 0.000 1 0.0000 4350 1950 105 105 4350 1950 4455 2055
    45 1 3 0 1 -1 -1 0 0 -1 0.000 1 0.0000 4350 1650 105 105 4350 1650 4455 1755
    46 1 3 0 1 -1 -1 0 0 20 0.000 1 0.0000 3450 3825 80 80 3450 3825 3530 3905
    47 1 3 0 1 -1 -1 1 0 4 0.000 1 0.0000 3450 1950 105 105 3450 1950 3555 1950
     281 3 0 1 -1 -1 0 0 -1 0.000 1 0.0000 1650 3450 105 105 1650 3450 1755 3555
     291 3 0 1 -1 -1 0 0 -1 0.000 1 0.0000 1950 3450 105 105 1950 3450 2055 3555
     301 3 0 1 -1 -1 0 0 -1 0.000 1 0.0000 4950 3750 105 105 4950 3750 5055 3855
     311 3 0 1 -1 -1 0 0 -1 0.000 1 0.0000 5250 3750 105 105 5250 3750 5355 3855
     321 3 0 1 -1 -1 0 0 20 0.000 1 0.0000 3450 4425 80 80 3450 4425 3530 4505
     331 3 0 1 -1 -1 0 0 -1 0.000 1 0.0000 4350 2550 105 105 4350 2550 4455 2655
     341 3 0 1 -1 -1 0 0 -1 0.000 1 0.0000 4350 2250 105 105 4350 2250 4455 2355
     352 1 0 1 -1 -1 0 0 -1 0.000 0 0 -1 0 0 5
     36         1500 3000 2100 3000 2100 2700 2400 2700 2400 2100
     372 1 0 1 -1 -1 0 0 -1 0.000 0 0 -1 0 0 4
     38         1500 3600 2100 3600 2100 3900 1500 3900
     392 1 0 1 -1 -1 0 0 -1 0.000 0 0 -1 0 0 3
     40         1500 3300 2100 3300 2250 3525
    48412 1 0 1 -1 -1 0 0 -1 0.000 0 0 -1 0 0 2
    49          2400 2100 2625 2250
     42         2100 3000 1950 3225
     432 1 0 1 -1 -1 0 0 -1 0.000 0 0 -1 0 0 3
     44         1500 4200 2100 4200 2250 4425
    50452 1 0 1 -1 -1 0 0 -1 0.000 0 0 -1 0 0 2
    51          3300 2100 3525 2250
     46         2100 3900 1950 4125
     472 1 0 1 -1 -1 0 0 -1 0.000 0 0 -1 0 0 4
     48         1500 4500 2100 4500 2100 4800 3300 4800
    52492 1 0 1 -1 -1 0 0 -1 0.000 0 0 -1 0 0 2
    53          4200 2100 4425 2250
    54 2 1 0 1 -1 -1 0 0 -1 0.000 0 0 -1 0 0 5
    55          1500 2400 2100 2400 2100 2100 2400 2100 2400 1500
     50         4800 3600 4650 3825
     512 1 0 1 -1 -1 0 0 -1 0.000 0 0 -1 0 0 2
     52         3300 4800 3525 4950
     532 2 1 1 -1 -1 0 0 -1 4.000 0 0 0 0 0 5
     54         4200 4050 4200 3150 2700 3150 2700 4050 4200 4050
    56552 1 0 1 -1 -1 0 0 -1 0.000 0 0 -1 0 0 4
    57          1500 3000 2100 3000 2100 3300 1500 3300
    58 2 1 0 1 -1 -1 0 0 -1 0.000 0 0 -1 0 0 3
    59          1500 2700 2100 2700 2250 2925
    60 2 1 0 1 -1 -1 0 0 -1 0.000 0 0 -1 0 0 2
    61          2100 2400 1950 2625
    62 2 1 0 1 -1 -1 0 0 -1 0.000 0 0 -1 0 0 3
    63          1500 3600 2100 3600 2250 3825
    64 2 1 0 1 -1 -1 0 0 -1 0.000 0 0 -1 0 0 2
    65          2100 3300 1950 3525
    66 2 1 0 1 -1 -1 0 0 -1 0.000 0 0 -1 0 0 4
    67          1500 3900 2100 3900 2100 4200 3300 4200
    68 2 1 0 1 -1 -1 0 0 -1 0.000 0 0 -1 0 0 2
    69          4800 3000 4650 3225
    70 2 1 0 1 -1 -1 0 0 -1 0.000 0 0 -1 0 0 2
    71          3300 4200 3525 4350
    72 2 1 0 1 -1 -1 0 0 -1 0.000 0 0 -1 0 0 4
    73          3600 1500 3600 2100 4200 2100 4200 900
    74 2 1 0 1 -1 -1 0 0 -1 0.000 0 0 -1 0 0 4
    75          2700 1500 2700 2100 3300 2100 3300 1500
     56         3600 2100 3600 2700 4050 2700 4050 1500
    76572 1 0 1 -1 -1 0 0 -1 0.000 0 0 -1 0 0 9
    77          3600 4200 4800 4200 4800 3300 5400 3300 5400 3000 4800 3000
    78          4800 2100 4500 2100 4500 900
    79 2 2 1 1 -1 -1 0 0 -1 4.000 0 0 0 0 0 5
    80          4200 3450 4200 2550 2700 2550 2700 3450 4200 3450
    81 4 1 -1 0 0 0 10 0.0000 2 75 75 4350 1995 a\001
    82 4 1 -1 0 0 0 10 0.0000 2 75 75 4350 1695 c\001
    83 4 1 -1 0 0 0 12 0.0000 2 135 315 3450 4575 exit\001
    84 4 1 -1 0 0 0 12 0.0000 2 135 135 1650 2325 A\001
    85 4 1 -1 0 0 0 12 0.0000 2 135 795 1650 4125 condition\001
    86 4 1 -1 0 0 0 12 0.0000 2 135 135 1650 4350 B\001
    87 4 0 -1 0 0 0 12 0.0000 2 135 420 4950 2925 stack\001
    88 4 0 -1 0 0 0 12 0.0000 2 180 750 4950 2475 acceptor/\001
    89 4 0 -1 0 0 0 12 0.0000 2 180 750 4950 2700 signalled\001
    90 4 1 -1 0 0 0 12 0.0000 2 135 795 1650 2100 condition\001
    91 4 1 -1 0 0 0 12 0.0000 2 135 135 2550 1425 X\001
    92 4 1 -1 0 0 0 12 0.0000 2 135 135 3450 1425 Y\001
    93 4 1 -1 0 0 0 12 0.0000 2 165 420 4350 600 entry\001
    94 4 1 -1 0 0 0 12 0.0000 2 135 495 4350 825 queue\001
    95 4 0 -1 0 0 0 12 0.0000 2 135 525 4650 1650 arrival\001
    96 4 0 -1 0 0 0 12 0.0000 2 135 630 4650 1425 order of\001
    97 4 1 -1 0 0 0 12 0.0000 2 135 525 3450 2925 shared\001
    98 4 1 -1 0 0 0 12 0.0000 2 135 735 3450 3225 variables\001
    99 4 1 -1 0 0 0 12 0.0000 2 120 510 3000 975 mutex\001
    100 4 1 -1 0 0 0 10 0.0000 2 75 75 3450 1995 c\001
    101 4 1 -1 0 0 0 12 0.0000 2 135 570 3000 1200 queues\001
     58         3600 4800 4800 4800 4800 3900 5400 3900 5400 3600 4800 3600
     59         4800 2700 4500 2700 4500 1500
     602 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
     61         4050 2700 4500 2850
     624 1 -1 0 0 0 12 0.0000 2 135 315 3450 5175 exit\001
     634 1 -1 0 0 0 12 0.0000 2 135 795 1650 4725 condition\001
     644 1 -1 0 0 0 12 0.0000 2 135 135 1650 4950 B\001
     654 0 -1 0 0 0 12 0.0000 2 135 420 4950 3525 stack\001
     664 0 -1 0 0 0 12 0.0000 2 180 750 4950 3075 acceptor/\001
     674 0 -1 0 0 0 12 0.0000 2 180 750 4950 3300 signalled\001
     684 1 -1 0 0 0 12 0.0000 2 135 525 3450 3525 shared\001
     694 1 -1 0 0 0 12 0.0000 2 135 735 3450 3825 variables\001
     704 0 -1 0 0 3 12 0.0000 2 150 540 4950 4125 urgent\001
     714 1 -1 0 0 0 10 0.0000 2 75 75 4350 2595 a\001
     724 1 -1 0 0 0 10 0.0000 2 75 75 4350 2295 c\001
     734 0 -1 0 0 0 12 0.0000 2 135 525 4650 2250 arrival\001
     744 0 -1 0 0 0 12 0.0000 2 135 630 4650 2025 order of\001
     754 0 4 50 -1 0 11 0.0000 2 120 135 4075 1725 X\001
     764 0 4 50 -1 0 11 0.0000 2 120 135 4075 2025 Y\001
     774 0 4 50 -1 0 11 0.0000 2 120 135 4075 2325 Y\001
     784 0 4 50 -1 0 11 0.0000 2 120 135 4075 2625 X\001
     794 1 -1 0 0 0 12 0.0000 2 165 960 4275 1425 entry queue\001
  • doc/papers/general/.gitignore

    r7951100 rb067d9b  
    44*.ps
    55
    6 Paper.tex.plain
    76mail
    87Paper.out.ps
  • doc/papers/general/Makefile

    r7951100 rb067d9b  
    44Figures = figures
    55Macros = ../AMA/AMA-stix/ama
    6 TeXLIB = .:${Macros}:${Build}:../../bibliography:
     6TeXLIB = .:${Macros}:${Build}:
    77LaTeX  = TEXINPUTS=${TeXLIB} && export TEXINPUTS && latex -halt-on-error -output-directory=${Build}
    8 BibTeX = BIBINPUTS=${TeXLIB} && export BIBINPUTS && bibtex
     8BibTeX = BIBINPUTS=../../bibliography: && export BIBINPUTS && bibtex
    99
    1010MAKEFLAGS = --no-print-directory # --silent
     
    4646
    4747Paper.zip :
    48         zip -x general/.gitignore -x general/"*AMA*" -x general/Paper.out.ps -x general/Paper.tex.plain -x general/evaluation.zip -x general/mail -x general/response -x general/test.c -x general/evaluation.zip -x general/Paper.tex.plain -x general/Paper.ps -x general/Paper.pdf -x general/"*build*" -x general/evaluation/.gitignore -x general/evaluation/timing.xlsx -r Paper.zip general
      48        zip -x general/.gitignore -x general/Paper.out.ps -x general/Paper.tex.plain -x general/WileyNJD-AMA.bst -x general/"*evaluation*" -x general/evaluation.zip \
     49                -x general/mail -x general/response -x general/test.c -x general/Paper.ps -x general/"*build*" -r Paper.zip general pl.bib
    4950
    5051evaluation.zip :
    51         zip -x evaluation/.gitignore  -x evaluation/timing.xlsx -x evaluation/timing.dat -r evaluation.zip evaluation
     52        zip -x evaluation/.gitignore -x evaluation/timing.xlsx -x evaluation/timing.dat -r evaluation.zip evaluation
    5253
    5354# File Dependencies #
     
    5960        dvips ${Build}/$< -o $@
    6061
    61 ${BASE}.dvi : Makefile ${Build} ${BASE}.out.ps WileyNJD-AMA.bst ${GRAPHS} ${PROGRAMS} ${PICTURES} ${FIGURES} ${SOURCES} \
    62                 ../../bibliography/pl.bib
     62${BASE}.dvi : Makefile ${BASE}.out.ps ${Macros}/WileyNJD-v2.cls WileyNJD-AMA.bst ${GRAPHS} ${PROGRAMS} ${PICTURES} ${FIGURES} ${SOURCES} \
     63                ../../bibliography/pl.bib | ${Build}
    6364        # Must have *.aux file containing citations for bibtex
    6465        if [ ! -r ${basename $@}.aux ] ; then ${LaTeX} ${basename $@}.tex ; fi
     
    7576        mkdir -p ${Build}
    7677
    77 ${BASE}.out.ps : ${Build}
     78${BASE}.out.ps : | ${Build}
    7879        ln -fs ${Build}/Paper.out.ps .
    7980
     
    8485        gnuplot -e Build="'${Build}/'" evaluation/timing.gp
    8586
    86 %.tex : %.fig ${Build}
     87%.tex : %.fig | ${Build}
    8788        fig2dev -L eepic $< > ${Build}/$@
    8889
    89 %.ps : %.fig ${Build}
     90%.ps : %.fig | ${Build}
    9091        fig2dev -L ps $< > ${Build}/$@
    9192
    92 %.pstex : %.fig ${Build}
     93%.pstex : %.fig | ${Build}
    9394        fig2dev -L pstex $< > ${Build}/$@
    9495        fig2dev -L pstex_t -p ${Build}/$@ $< > ${Build}/$@_t
  • doc/papers/general/Paper.tex

    r7951100 rb067d9b  
    11\documentclass[AMA,STIX1COL]{WileyNJD-v2}
     2\setlength\typewidth{170mm}
     3\setlength\textwidth{170mm}
    24
    35\articletype{RESEARCH ARTICLE}%
    46
    5 \received{26 April 2016}
    6 \revised{6 June 2016}
    7 \accepted{6 June 2016}
    8 
     7\received{12 March 2018}
     8\revised{8 May 2018}
     9\accepted{28 June 2018}
     10
     11\setlength\typewidth{168mm}
     12\setlength\textwidth{168mm}
    913\raggedbottom
    1014
     
    187191}
    188192
    189 \title{\texorpdfstring{\protect\CFA : Adding Modern Programming Language Features to C}{Cforall : Adding Modern Programming Language Features to C}}
     193\title{\texorpdfstring{\protect\CFA : Adding modern programming language features to C}{Cforall : Adding modern programming language features to C}}
    190194
    191195\author[1]{Aaron Moss}
    192196\author[1]{Robert Schluntz}
    193 \author[1]{Peter A. Buhr*}
     197\author[1]{Peter A. Buhr}
    194198\authormark{MOSS \textsc{et al}}
    195199
    196 \address[1]{\orgdiv{Cheriton School of Computer Science}, \orgname{University of Waterloo}, \orgaddress{\state{Waterloo, ON}, \country{Canada}}}
    197 
    198 \corres{*Peter A. Buhr, Cheriton School of Computer Science, University of Waterloo, 200 University Avenue West, Waterloo, ON, N2L 3G1, Canada. \email{pabuhr{\char`\@}uwaterloo.ca}}
     200\address[1]{\orgdiv{Cheriton School of Computer Science}, \orgname{University of Waterloo}, \orgaddress{\state{Waterloo, Ontario}, \country{Canada}}}
     201
     202\corres{Peter A. Buhr, Cheriton School of Computer Science, University of Waterloo, 200 University Avenue West, Waterloo, ON N2L 3G1, Canada. \email{pabuhr{\char`\@}uwaterloo.ca}}
    199203
    200204\fundingInfo{Natural Sciences and Engineering Research Council of Canada}
    201205
    202206\abstract[Summary]{
    203 The C programming language is a foundational technology for modern computing with millions of lines of code implementing everything from hobby projects to commercial operating-systems.
    204 This installation base and the programmers producing it represent a massive software-engineering investment spanning decades and likely to continue for decades more.
    205 Nevertheless, C, first standardized almost forty years ago, lacks many features that make programming in more modern languages safer and more productive.
    206 
    207 The goal of the \CFA project (pronounced ``C-for-all'') is to create an extension of C that provides modern safety and productivity features while still ensuring strong backwards compatibility with C and its programmers.
    208 Prior projects have attempted similar goals but failed to honour C programming-style;
    209 for instance, adding object-oriented or functional programming with garbage collection is a non-starter for many C developers.
    210 Specifically, \CFA is designed to have an orthogonal feature-set based closely on the C programming paradigm, so that \CFA features can be added \emph{incrementally} to existing C code-bases, and C programmers can learn \CFA extensions on an as-needed basis, preserving investment in existing code and programmers.
    211 This paper presents a quick tour of \CFA features showing how their design avoids shortcomings of similar features in C and other C-like languages.
    212 Finally, experimental results are presented to validate several of the new features.
     207The C programming language is a foundational technology for modern computing with millions of lines of code implementing everything from hobby projects to commercial operating systems.
     208This installation base and the programmers producing it represent a massive software engineering investment spanning decades and likely to continue for decades more.
     209Nevertheless, C, which was first standardized almost 30 years ago, lacks many features that make programming in more modern languages safer and more productive.
     210The goal of the \CFA project (pronounced ``C for all'') is to create an extension of C that provides modern safety and productivity features while still ensuring strong backward compatibility with C and its programmers.
     211Prior projects have attempted similar goals but failed to honor the C programming style;
     212for instance, adding object-oriented or functional programming with garbage collection is a nonstarter for many C developers.
     213Specifically, \CFA is designed to have an orthogonal feature set based closely on the C programming paradigm, so that \CFA features can be added \emph{incrementally} to existing C code bases, and C programmers can learn \CFA extensions on an as-needed basis, preserving investment in existing code and programmers.
     214This paper presents a quick tour of \CFA features, showing how their design avoids shortcomings of similar features in C and other C-like languages.
     215Experimental results are presented to validate several of the new features.
    213216}%
    214217
    215 \keywords{generic types, tuple types, variadic types, polymorphic functions, C, Cforall}
     218\keywords{C, Cforall, generic types, polymorphic functions, tuple types, variadic types}
    216219
    217220
    218221\begin{document}
    219 \linenumbers                                            % comment out to turn off line numbering
     222%\linenumbers                                            % comment out to turn off line numbering
    220223
    221224\maketitle
    222225
    223226
     227\vspace*{-10pt}
    224228\section{Introduction}
    225229
    226 The C programming language is a foundational technology for modern computing with millions of lines of code implementing everything from hobby projects to commercial operating-systems.
    227 This installation base and the programmers producing it represent a massive software-engineering investment spanning decades and likely to continue for decades more.
    228 The TIOBE~\cite{TIOBE} ranks the top 5 most \emph{popular} programming languages as: Java 15\%, \Textbf{C 12\%}, \Textbf{\CC 5.5\%}, Python 5\%, \Csharp 4.5\% = 42\%, where the next 50 languages are less than 4\% each with a long tail.
    229 The top 3 rankings over the past 30 years are:
     230The C programming language is a foundational technology for modern computing with millions of lines of code implementing everything from hobby projects to commercial operating systems.
     231This installation base and the programmers producing it represent a massive software engineering investment spanning decades and likely to continue for decades more.
      233The TIOBE index~\cite{TIOBE} ranks the top five most \emph{popular} programming languages as Java 15\%, \Textbf{C 12\%}, \Textbf{\CC 5.5\%}, Python 5\%, and \Csharp 4.5\%, totaling 42\%, where the next 50 languages are each less than 4\% with a long tail.
     233The top three rankings over the past 30 years are as follows.
    230234\begin{center}
    231235\setlength{\tabcolsep}{10pt}
    232 \lstDeleteShortInline@%
    233 \begin{tabular}{@{}rccccccc@{}}
    234                 & 2018  & 2013  & 2008  & 2003  & 1998  & 1993  & 1988  \\ \hline
    235 Java    & 1             & 2             & 1             & 1             & 18    & -             & -             \\
     236\fontsize{9bp}{11bp}\selectfont
     237\lstDeleteShortInline@%
     238\begin{tabular}{@{}cccccccc@{}}
     239                & 2018  & 2013  & 2008  & 2003  & 1998  & 1993  & 1988  \\
     240Java    & 1             & 2             & 1             & 1             & 18    & --    & --    \\
    236241\Textbf{C}& \Textbf{2} & \Textbf{1} & \Textbf{2} & \Textbf{2} & \Textbf{1} & \Textbf{1} & \Textbf{1} \\
    237242\CC             & 3             & 4             & 3             & 3             & 2             & 2             & 5             \\
     
    241246Love it or hate it, C is extremely popular, highly used, and one of the few systems languages.
    242247In many cases, \CC is used solely as a better C.
    243 Nevertheless, C, first standardized almost forty years ago~\cite{ANSI89:C}, lacks many features that make programming in more modern languages safer and more productive.
    244 
    245 \CFA (pronounced ``C-for-all'', and written \CFA or Cforall) is an evolutionary extension of the C programming language that adds modern language-features to C, while maintaining source and runtime compatibility in the familiar C programming model.
    246 The four key design goals for \CFA~\cite{Bilson03} are:
    247 (1) The behaviour of standard C code must remain the same when translated by a \CFA compiler as when translated by a C compiler;
    248 (2) Standard C code must be as fast and as small when translated by a \CFA compiler as when translated by a C compiler;
    249 (3) \CFA code must be at least as portable as standard C code;
    250 (4) Extensions introduced by \CFA must be translated in the most efficient way possible.
    251 These goals ensure existing C code-bases can be converted to \CFA incrementally with minimal effort, and C programmers can productively generate \CFA code without training beyond the features being used.
    252 \CC is used similarly, but has the disadvantages of multiple legacy design-choices that cannot be updated and active divergence of the language model from C, requiring significant effort and training to incrementally add \CC to a C-based project.
    253 
    254 All languages features discussed in this paper are working, except some advanced exception-handling features.
    255 Not discussed in this paper are the integrated concurrency-constructs and user-level threading-library~\cite{Delisle18}.
     248Nevertheless, C, which was first standardized almost 30 years ago~\cite{ANSI89:C}, lacks many features that make programming in more modern languages safer and more productive.
     249
     250\CFA (pronounced ``C for all'' and written \CFA or Cforall) is an evolutionary extension of the C programming language that adds modern language features to C, while maintaining source and runtime compatibility in the familiar C programming model.
     251The four key design goals for \CFA~\cite{Bilson03} are as follows:
     252(1) the behavior of standard C code must remain the same when translated by a \CFA compiler as when translated by a C compiler;
      253(2) standard C code must be as fast and as small when translated by a \CFA compiler as when translated by a C compiler;
      254(3) \CFA code must be at least as portable as standard C code;
     255(4) extensions introduced by \CFA must be translated in the most efficient way possible.
      256These goals ensure that existing C code bases can be converted into \CFA incrementally with minimal effort, and C programmers can productively generate \CFA code without training beyond the features being used.
     257\CC is used similarly but has the disadvantages of multiple legacy design choices that cannot be updated and active divergence of the language model from C, requiring significant effort and training to incrementally add \CC to a C-based project.
     258
     259All language features discussed in this paper are working, except some advanced exception-handling features.
     260Not discussed in this paper are the integrated concurrency constructs and user-level threading library~\cite{Delisle18}.
    256261\CFA is an \emph{open-source} project implemented as a source-to-source translator from \CFA to the gcc-dialect of C~\cite{GCCExtensions}, allowing it to leverage the portability and code optimizations provided by gcc, meeting goals (1)--(3).
    257 Ultimately, a compiler is necessary for advanced features and optimal performance.
    258262% @plg2[9]% cd cfa-cc/src; cloc ArgTweak CodeGen CodeTools Common Concurrency ControlStruct Designators GenPoly InitTweak MakeLibCfa.cc MakeLibCfa.h Parser ResolvExpr SymTab SynTree Tuples driver prelude main.cc
    259263% -------------------------------------------------------------------------------
     
    270274% SUM:                           223           8203           8263          46479
    271275% -------------------------------------------------------------------------------
    272 The \CFA translator is 200+ files and 46,000+ lines of code written in C/\CC.
    273 Starting with a translator versus a compiler makes it easier and faster to generate and debug C object-code rather than intermediate, assembler or machine code.
    274 The translator design is based on the \emph{visitor pattern}, allowing multiple passes over the abstract code-tree, which works well for incrementally adding new feature through additional visitor passes.
    275 At the heart of the translator is the type resolver, which handles the polymorphic function/type overload-resolution.
     276The \CFA translator is 200+ files and 46\,000+ lines of code written in C/\CC.
      277Using a translator rather than a compiler makes it easier and faster to generate and debug C object code rather than intermediate, assembler, or machine code;
     278ultimately, a compiler is necessary for advanced features and optimal performance.
     279% The translator design is based on the \emph{visitor pattern}, allowing multiple passes over the abstract code-tree, which works well for incrementally adding new feature through additional visitor passes.
      280Two key translator components are expression analysis, which determines an expression's validity and the operations required to implement it, and code generation, which deals with multiple forms of overloading, polymorphism, and multiple return values by converting them into C code for a C compiler that supports none of these features.
     281Details of these components are available in chapters 2 and 3 in the work of Bilson~\cite{Bilson03} and form the base for the current \CFA translator.
    276282% @plg2[8]% cd cfa-cc/src; cloc libcfa
    277283% -------------------------------------------------------------------------------
     
    288294% SUM:                           100           1895           2785          11763
    289295% -------------------------------------------------------------------------------
    290 The \CFA runtime system is 100+ files and 11,000+ lines of code, written in \CFA.
     296The \CFA runtime system is 100+ files and 11\,000+ lines of code, written in \CFA.
     291297Currently, the \CFA runtime is the largest \emph{user} of \CFA, providing a vehicle to test the language features and implementation.
    292298% @plg2[6]% cd cfa-cc/src; cloc tests examples benchmark
     
    305311% SUM:                           290          13175           3400          27776
    306312% -------------------------------------------------------------------------------
    307 The \CFA tests are 290+ files and 27,000+ lines of code.
    308 The tests illustrate syntactic and semantic features in \CFA, plus a growing number of runtime benchmarks.
    309 The tests check for correctness and are used for daily regression testing of 3800+ commits.
    310 
    311 Finally, it is impossible to describe a programming language without usages before definitions.
    312 Therefore, syntax and semantics appear before explanations, and related work (Section~\ref{s:RelatedWork}) is deferred until \CFA is presented;
    313 hence, patience is necessary until details are discussed.
    314 
    315 
     313% The \CFA tests are 290+ files and 27,000+ lines of code.
     314% The tests illustrate syntactic and semantic features in \CFA, plus a growing number of runtime benchmarks.
     315% The tests check for correctness and are used for daily regression testing of 3800+ commits.
     316
     317Finally, it is impossible to describe a programming language without usage before definition.
     318Therefore, syntax and semantics appear before explanations;
     319hence, patience is necessary until sufficient details are presented and discussed.
     320Similarly, a detailed comparison with other programming languages is postponed until Section~\ref{s:RelatedWork}.
     321
     322
     323\vspace*{-6pt}
    316324\section{Polymorphic Functions}
    317325
    318 \CFA introduces both ad-hoc and parametric polymorphism to C, with a design originally formalized by Ditchfield~\cite{Ditchfield92}, and first implemented by Bilson~\cite{Bilson03}.
    319 Shortcomings are identified in existing approaches to generic and variadic data types in C-like languages and how these shortcomings are avoided in \CFA.
    320 Specifically, the solution is both reusable and type-checked, as well as conforming to the design goals of \CFA with ergonomic use of existing C abstractions.
     326\CFA introduces both ad hoc and parametric polymorphism to C, with a design originally formalized by Ditchfield~\cite{Ditchfield92} and first implemented by Bilson~\cite{Bilson03}.
      327Shortcomings are identified in the existing approaches to generic and variadic data types in C-like languages, and it is shown how these shortcomings are avoided in \CFA.
     328Specifically, the solution is both reusable and type checked, as well as conforming to the design goals of \CFA with ergonomic use of existing C abstractions.
    321329The new constructs are empirically compared with C and \CC approaches via performance experiments in Section~\ref{sec:eval}.
    322330
    323331
    324 \subsection{Name Overloading}
     332\vspace*{-6pt}
     333\subsection{Name overloading}
    325334\label{s:NameOverloading}
    326335
    327336\begin{quote}
    328 There are only two hard things in Computer Science: cache invalidation and \emph{naming things} -- Phil Karlton
     337``There are only two hard things in Computer Science: cache invalidation and \emph{naming things}.''---Phil Karlton
    329338\end{quote}
    330339\vspace{-9pt}
    331 C already has a limited form of ad-hoc polymorphism in its basic arithmetic operators, which apply to a variety of different types using identical syntax.
     340C already has a limited form of ad hoc polymorphism in its basic arithmetic operators, which apply to a variety of different types using identical syntax.
    332341\CFA extends the built-in operator overloading by allowing users to define overloads for any function, not just operators, and even any variable;
    333342Section~\ref{sec:libraries} includes a number of examples of how this overloading simplifies \CFA programming relative to C.
    334343Code generation for these overloaded functions and variables is implemented by the usual approach of mangling the identifier names to include a representation of their type, while \CFA decides which overload to apply based on the same ``usual arithmetic conversions'' used in C to disambiguate operator overloads.
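For instance, the overloaded @max@ declarations in the following example might be emitted under distinct mangled C names along these lines (the encoding shown is illustrative only, not the translator's actual scheme).
\begin{cfa}
int _max_i;                                     // int max (1)
double _max_d;                                  // double max (2)
int _max_ii( int, int );                        // int max( int, int ) (3)
double _max_dd( double, double );               // double max( double, double ) (4)
\end{cfa}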
    335 As an example:
     344
     345\newpage
    336346\begin{cfa}
    337347int max = 2147483647;                                           $\C[4in]{// (1)}$
     
    339349int max( int a, int b ) { return a < b ? b : a; }  $\C{// (3)}$
    340350double max( double a, double b ) { return a < b ? b : a; }  $\C{// (4)}\CRT$
    341 max( 7, -max );                                         $\C{// uses (3) and (1), by matching int from constant 7}$
     351max( 7, -max );                                         $\C[3in]{// uses (3) and (1), by matching int from constant 7}$
    342352max( max, 3.14 );                                       $\C{// uses (4) and (2), by matching double from constant 3.14}$
    343353max( max, -max );                                       $\C{// ERROR, ambiguous}$
    344 int m = max( max, -max );                       $\C{// uses (3) and (1) twice, by matching return type}$
     354int m = max( max, -max );                       $\C{// uses (3) and (1) twice, by matching return type}\CRT$
    345355\end{cfa}
    346356
     
    348358In some cases, hundreds of names can be reduced to tens, resulting in a significant cognitive reduction.
    349359In the above, the name @max@ has a consistent meaning, and a programmer only needs to remember the single concept: maximum.
    350 To prevent significant ambiguities, \CFA uses the return type in selecting overloads, \eg in the assignment to @m@, the compiler use @m@'s type to unambiguously select the most appropriate call to function @max@ (as does Ada).
     360To prevent significant ambiguities, \CFA uses the return type in selecting overloads, \eg in the assignment to @m@, the compiler uses @m@'s type to unambiguously select the most appropriate call to function @max@ (as does Ada).
    351361As is shown later, there are a number of situations where \CFA takes advantage of available type information to disambiguate, where other programming languages generate ambiguities.
    352362
    353 \Celeven added @_Generic@ expressions~\cite[\S~6.5.1.1]{C11}, which is used with preprocessor macros to provide ad-hoc polymorphism;
      363\Celeven added @_Generic@ expressions (see section~6.5.1.1 of ISO/IEC 9899~\cite{C11}), which are used with preprocessor macros to provide ad hoc polymorphism;
    354364however, this polymorphism is both functionally and ergonomically inferior to \CFA name overloading.
    355 The macro wrapping the generic expression imposes some limitations;
    356 \eg, it cannot implement the example above, because the variables @max@ are ambiguous with the functions @max@.
      365The macro wrapping the generic expression imposes some limitations; for instance, it cannot implement the example above, because the variables @max@ are ambiguous with the functions @max@.
    357366Ergonomic limitations of @_Generic@ include the necessity to put a fixed list of supported types in a single place and manually dispatch to appropriate overloads, as well as possible namespace pollution from the dispatch functions, which must all have distinct names.
    358 \CFA supports @_Generic@ expressions for backwards compatibility, but it is an unnecessary mechanism. \TODO{actually implement that}
     367\CFA supports @_Generic@ expressions for backward compatibility, but it is an unnecessary mechanism.
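For reference, a type-generic @max@ in \Celeven might be written as the following sketch (the helper names @max_int@ and @max_double@ are hypothetical); the fixed type list and distinctly named helper functions illustrate the ergonomic limitations above.
\begin{cfa}
int max_int( int a, int b ) { return a < b ? b : a; }
double max_double( double a, double b ) { return a < b ? b : a; }
// dispatch on the type of the first argument
#define max( a, b ) _Generic( (a), int: max_int, double: max_double )( (a), (b) )
\end{cfa}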
    359368
    360369% http://fanf.livejournal.com/144696.html
     
    363372
    364373
    365 \subsection{\texorpdfstring{\protect\lstinline{forall} Functions}{forall Functions}}
     374\vspace*{-10pt}
     375\subsection{\texorpdfstring{\protect\lstinline{forall} functions}{forall functions}}
    366376\label{sec:poly-fns}
    367377
    368 The signature feature of \CFA is parametric-polymorphic functions~\cite{forceone:impl,Cormack90,Duggan96} with functions generalized using a @forall@ clause (giving the language its name):
     378The signature feature of \CFA is parametric-polymorphic functions~\cite{forceone:impl,Cormack90,Duggan96} with functions generalized using a @forall@ clause (giving the language its name).
    369379\begin{cfa}
    370380`forall( otype T )` T identity( T val ) { return val; }
     
    373383This @identity@ function can be applied to any complete \newterm{object type} (or @otype@).
    374384The type variable @T@ is transformed into a set of additional implicit parameters encoding sufficient information about @T@ to create and return a variable of that type.
    375 The \CFA implementation passes the size and alignment of the type represented by an @otype@ parameter, as well as an assignment operator, constructor, copy constructor and destructor.
    376 If this extra information is not needed, \eg for a pointer, the type parameter can be declared as a \newterm{data type} (or @dtype@).
    377 
    378 In \CFA, the polymorphic runtime-cost is spread over each polymorphic call, because more arguments are passed to polymorphic functions;
    379 the experiments in Section~\ref{sec:eval} show this overhead is similar to \CC virtual-function calls.
    380 A design advantage is that, unlike \CC template-functions, \CFA polymorphic-functions are compatible with C \emph{separate compilation}, preventing compilation and code bloat.
    381 
    382 Since bare polymorphic-types provide a restricted set of available operations, \CFA provides a \newterm{type assertion}~\cite[pp.~37-44]{Alphard} mechanism to provide further type information, where type assertions may be variable or function declarations that depend on a polymorphic type-variable.
    383 For example, the function @twice@ can be defined using the \CFA syntax for operator overloading:
     385The \CFA implementation passes the size and alignment of the type represented by an @otype@ parameter, as well as an assignment operator, constructor, copy constructor, and destructor.
     386If this extra information is not needed, for instance, for a pointer, the type parameter can be declared as a \newterm{data type} (or @dtype@).
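To give a sense of the transformation, the translated C for @identity@ might resemble the following sketch (the names and parameter order are illustrative only; only the copy operation is shown, and the remaining operations are passed similarly).
\begin{cfa}
// illustrative translation of: forall( otype T ) T identity( T val )
void _identity( void * _retval, size_t _sizeof_T, size_t _alignof_T,
		void (* _copy_T)( void *, void * ), void * val ) {
	_copy_T( _retval, val );                // copy argument into caller-allocated return buffer
}
\end{cfa}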
     387
     388In \CFA, the polymorphic runtime cost is spread over each polymorphic call, because more arguments are passed to polymorphic functions;
     389the experiments in Section~\ref{sec:eval} show this overhead is similar to \CC virtual function calls.
     390A design advantage is that, unlike \CC template functions, \CFA polymorphic functions are compatible with C \emph{separate compilation}, preventing compilation and code bloat.
     391
     392Since bare polymorphic types provide a restricted set of available operations, \CFA provides a \newterm{type assertion}~\cite[pp.~37-44]{Alphard} mechanism to provide further type information, where type assertions may be variable or function declarations that depend on a polymorphic type variable.
     393For example, the function @twice@ can be defined using the \CFA syntax for operator overloading.
    384394\begin{cfa}
    385395forall( otype T `| { T ?+?(T, T); }` ) T twice( T x ) { return x `+` x; }  $\C{// ? denotes operands}$
    386396int val = twice( twice( 3.7 ) );  $\C{// val == 14}$
    387397\end{cfa}
    388 which works for any type @T@ with a matching addition operator.
    389 The polymorphism is achieved by creating a wrapper function for calling @+@ with @T@ bound to @double@, then passing this function to the first call of @twice@.
    390 There is now the option of using the same @twice@ and converting the result to @int@ on assignment, or creating another @twice@ with type parameter @T@ bound to @int@ because \CFA uses the return type~\cite{Cormack81,Baker82,Ada} in its type analysis.
    391 The first approach has a late conversion from @double@ to @int@ on the final assignment, while the second has an early conversion to @int@.
    392 \CFA minimizes the number of conversions and their potential to lose information, so it selects the first approach, which corresponds with C-programmer intuition.
     398This works for any type @T@ with a matching addition operator.
     399The polymorphism is achieved by creating a wrapper function for calling @+@ with the @T@ bound to @double@ and then passing this function to the first call of @twice@.
     400There is now the option of using the same @twice@ and converting the result into @int@ on assignment or creating another @twice@ with the type parameter @T@ bound to @int@ because \CFA uses the return type~\cite{Cormack81,Baker82,Ada} in its type analysis.
     401The first approach has a late conversion from @double@ to @int@ on the final assignment, whereas the second has an early conversion to @int@.
     402\CFA minimizes the number of conversions and their potential to lose information;
     403hence, it selects the first approach, which corresponds with C programmer intuition.
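Concretely, the two candidate interpretations of the earlier call can be sketched as follows.
\begin{cfa}
int val = (int)twice( twice( 3.7 ) );   // selected: T is double throughout, late conversion (int)14.8 == 14
int val = twice( (int)twice( 3.7 ) );   // rejected: early conversion (int)7.4 == 7, then int 7 + 7 == 14
\end{cfa}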
    393404
    394405Crucial to the design of a new programming language are the libraries to access thousands of external software features.
    395 Like \CC, \CFA inherits a massive compatible library-base, where other programming languages must rewrite or provide fragile inter-language communication with C.
    396 A simple example is leveraging the existing type-unsafe (@void *@) C @bsearch@ to binary search a sorted float array:
     406Like \CC, \CFA inherits a massive compatible library base, where other programming languages must rewrite or provide fragile interlanguage communication with C.
     407A simple example is leveraging the existing type-unsafe (@void *@) C @bsearch@ to binary search a sorted float array.
    397408\begin{cfa}
    398409void * bsearch( const void * key, const void * base, size_t nmemb, size_t size,
     
    404415double * val = (double *)bsearch( &key, vals, 10, sizeof(vals[0]), comp ); $\C{// search sorted array}$
    405416\end{cfa}
    406 which can be augmented simply with generalized, type-safe, \CFA-overloaded wrappers:
     417This can be augmented simply with generalized, type-safe, \CFA-overloaded wrappers.
    407418\begin{cfa}
    408419forall( otype T | { int ?<?( T, T ); } ) T * bsearch( T key, const T * arr, size_t size ) {
     
    418429\end{cfa}
    419430The nested function @comp@ provides the hidden interface from typed \CFA to untyped (@void *@) C, plus the cast of the result.
    420 Providing a hidden @comp@ function in \CC is awkward as lambdas do not use C calling-conventions and template declarations cannot appear at block scope.
    421 As well, an alternate kind of return is made available: position versus pointer to found element.
    422 \CC's type-system cannot disambiguate between the two versions of @bsearch@ because it does not use the return type in overload resolution, nor can \CC separately compile a template @bsearch@.
     431% FIX
     432Providing a hidden @comp@ function in \CC is awkward as lambdas do not use C calling conventions and template declarations cannot appear in block scope.
     433In addition, an alternate kind of return is made available: position versus pointer to found element.
     434\CC's type system cannot disambiguate between the two versions of @bsearch@ because it does not use the return type in overload resolution, nor can \CC separately compile a template @bsearch@.
    423435
    424436\CFA has replacement libraries condensing hundreds of existing C functions into tens of \CFA overloaded functions, all without rewriting the actual computations (see Section~\ref{sec:libraries}).
     
    430442\end{cfa}
    431443
    432 Call-site inferencing and nested functions provide a localized form of inheritance.
     444Call site inferencing and nested functions provide a localized form of inheritance.
    433445For example, the \CFA @qsort@ only sorts in ascending order using @<@.
    434 However, it is trivial to locally change this behaviour:
     446However, it is trivial to locally change this behavior.
    435447\begin{cfa}
    436448forall( otype T | { int ?<?( T, T ); } ) void qsort( const T * arr, size_t size ) { /* use C qsort */ }
    437449int main() {
    438         int ?<?( double x, double y ) { return x `>` y; } $\C{// locally override behaviour}$
     450        int ?<?( double x, double y ) { return x `>` y; } $\C{// locally override behavior}$
    439451        qsort( vals, 10 );                                                      $\C{// descending sort}$
    440452}
    441453\end{cfa}
     442454The local version of @?<?@ performs @?>?@, overriding the built-in @?<?@, so the local version is passed to @qsort@.
    443 Hence, programmers can easily form local environments, adding and modifying appropriate functions, to maximize reuse of other existing functions and types.
    444 
    445 To reduce duplication, it is possible to distribute a group of @forall@ (and storage-class qualifiers) over functions/types, so each block declaration is prefixed by the group (see example in Appendix~\ref{s:CforallStack}).
     455Therefore, programmers can easily form local environments, adding and modifying appropriate functions, to maximize the reuse of other existing functions and types.
     456
     457To reduce duplication, it is possible to distribute a group of @forall@ (and storage-class qualifiers) over functions/types, such that each block declaration is prefixed by the group (see the example in Appendix~\ref{s:CforallStack}).
    446458\begin{cfa}
    447459forall( otype `T` ) {                                                   $\C{// distribution block, add forall qualifier to declarations}$
     
    454466
    455467
    456 \vspace*{-2pt}
    457468\subsection{Traits}
    458469
    459 \CFA provides \newterm{traits} to name a group of type assertions, where the trait name allows specifying the same set of assertions in multiple locations, preventing repetition mistakes at each function declaration:
    460 
     470\CFA provides \newterm{traits} to name a group of type assertions, where the trait name allows specifying the same set of assertions in multiple locations, preventing repetition mistakes at each function declaration.
    461471\begin{cquote}
    462472\lstDeleteShortInline@%
     
    485495\end{cquote}
    486496
    487 Note, the @sumable@ trait does not include a copy constructor needed for the right side of @?+=?@ and return;
    488 it is provided by @otype@, which is syntactic sugar for the following trait:
     497Note that the @sumable@ trait does not include a copy constructor needed for the right side of @?+=?@ and return;
     498it is provided by @otype@, which is syntactic sugar for the following trait.
    489499\begin{cfa}
    490500trait otype( dtype T | sized(T) ) {  // sized is a pseudo-trait for types with known size and alignment
     
    495505};
    496506\end{cfa}
    497 Given the information provided for an @otype@, variables of polymorphic type can be treated as if they were a complete type: stack-allocatable, default or copy-initialized, assigned, and deleted.
    498 
    499 In summation, the \CFA type-system uses \newterm{nominal typing} for concrete types, matching with the C type-system, and \newterm{structural typing} for polymorphic types.
     507Given the information provided for an @otype@, variables of polymorphic type can be treated as if they were a complete type: stack allocatable, default or copy initialized, assigned, and deleted.
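As a small sketch of what this permits, consider the following polymorphic function (the function and variable names are illustrative).
\begin{cfa}
forall( otype T ) void example( T x ) {
	T y = x;                                // copy initialized
	T z;                                    // default initialized, stack allocated
	z = y;                                  // assigned
}                                               // y and z deleted (destructed)
\end{cfa}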
     508
     509In summation, the \CFA type system uses \newterm{nominal typing} for concrete types, matching with the C type system, and \newterm{structural typing} for polymorphic types.
    500510Hence, trait names play no part in type equivalence;
    501511the names are simply macros for a list of polymorphic assertions, which are expanded at usage sites.
    502 Nevertheless, trait names form a logical subtype-hierarchy with @dtype@ at the top, where traits often contain overlapping assertions, \eg operator @+@.
    503 Traits are used like interfaces in Java or abstract base-classes in \CC, but without the nominal inheritance-relationships.
    504 Instead, each polymorphic function (or generic type) defines the structural type needed for its execution (polymorphic type-key), and this key is fulfilled at each call site from the lexical environment, which is similar to Go~\cite{Go} interfaces.
    505 Hence, new lexical scopes and nested functions are used extensively to create local subtypes, as in the @qsort@ example, without having to manage a nominal-inheritance hierarchy.
     512Nevertheless, trait names form a logical subtype hierarchy with @dtype@ at the top, where traits often contain overlapping assertions, \eg operator @+@.
     513Traits are used like interfaces in Java or abstract base classes in \CC, but without the nominal inheritance relationships.
     514Instead, each polymorphic function (or generic type) defines the structural type needed for its execution (polymorphic type key), and this key is fulfilled at each call site from the lexical environment, which is similar to the Go~\cite{Go} interfaces.
     515Hence, new lexical scopes and nested functions are used extensively to create local subtypes, as in the @qsort@ example, without having to manage a nominal inheritance hierarchy.
    506516% (Nominal inheritance can be approximated with traits using marker variables or functions, as is done in Go.)
    507517
     
    534544
    535545A significant shortcoming of standard C is the lack of reusable type-safe abstractions for generic data structures and algorithms.
    536 Broadly speaking, there are three approaches to implement abstract data-structures in C.
    537 One approach is to write bespoke data-structures for each context in which they are needed.
    538 While this approach is flexible and supports integration with the C type-checker and tooling, it is also tedious and error-prone, especially for more complex data structures.
    539 A second approach is to use @void *@-based polymorphism, \eg the C standard-library functions @bsearch@ and @qsort@, which allow reuse of code with common functionality.
    540 However, basing all polymorphism on @void *@ eliminates the type-checker's ability to ensure that argument types are properly matched, often requiring a number of extra function parameters, pointer indirection, and dynamic allocation that is not otherwise needed.
    541 A third approach to generic code is to use preprocessor macros, which does allow the generated code to be both generic and type-checked, but errors may be difficult to interpret.
     546Broadly speaking, there are three approaches to implement abstract data structures in C.
     547One approach is to write bespoke data structures for each context in which they are needed.
     548While this approach is flexible and supports integration with the C type checker and tooling, it is also tedious and error prone, especially for more complex data structures.
     549A second approach is to use @void *@-based polymorphism, \eg the C standard library functions @bsearch@ and @qsort@, which allow for the reuse of code with common functionality.
     550However, basing all polymorphism on @void *@ eliminates the type checker's ability to ensure that argument types are properly matched, often requiring a number of extra function parameters, pointer indirection, and dynamic allocation that is otherwise not needed.
     551A third approach to generic code is to use preprocessor macros, which does allow the generated code to be both generic and type checked, but errors may be difficult to interpret.
    542552Furthermore, writing and using preprocessor macros is unnatural and inflexible.
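For instance, a macro-based generic pair might be written as the following sketch (the @PAIR@ macro is hypothetical); the token pasting fails outright for type names that are not single identifiers, such as @const char *@.
\begin{cfa}
#define PAIR( R, S ) struct pair_##R##_##S { R first; S second; }
PAIR( int, double ) p = { 3, 3.14 };            // a distinct structure type per instantiation
// PAIR( const char *, int ) fails: the type name cannot be pasted into an identifier
\end{cfa}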
    543553
    544 \CC, Java, and other languages use \newterm{generic types} to produce type-safe abstract data-types.
    545 \CFA generic types integrate efficiently and naturally with the existing polymorphic functions, while retaining backwards compatibility with C and providing separate compilation.
     554\CC, Java, and other languages use \newterm{generic types} to produce type-safe abstract data types.
     555\CFA generic types integrate efficiently and naturally with the existing polymorphic functions, while retaining backward compatibility with C and providing separate compilation.
    546556However, for known concrete parameters, the generic-type definition can be inlined, like \CC templates.
    547557
    548 A generic type can be declared by placing a @forall@ specifier on a @struct@ or @union@ declaration, and instantiated using a parenthesized list of types after the type name:
     558A generic type can be declared by placing a @forall@ specifier on a @struct@ or @union@ declaration and instantiated using a parenthesized list of types after the type name.
    549559\begin{cquote}
    550560\lstDeleteShortInline@%
     
    574584
    575585\CFA classifies generic types as either \newterm{concrete} or \newterm{dynamic}.
    576 Concrete types have a fixed memory layout regardless of type parameters, while dynamic types vary in memory layout depending on their type parameters.
     586Concrete types have a fixed memory layout regardless of type parameters, whereas dynamic types vary in memory layout depending on their type parameters.
    577587A \newterm{dtype-static} type has polymorphic parameters but is still concrete.
    578588Polymorphic pointers are an example of dtype-static types;
    579 given some type variable @T@, @T@ is a polymorphic type, as is @T *@, but @T *@ has a fixed size and can therefore be represented by @void *@ in code generation.
    580 
    581 \CFA generic types also allow checked argument-constraints.
    582 For example, the following declaration of a sorted set-type ensures the set key supports equality and relational comparison:
     589given some type variable @T@, @T@ is a polymorphic type, as is @T *@, but @T *@ has a fixed size and can, therefore, be represented by @void *@ in code generation.
     590
     591\CFA generic types also allow checked argument constraints.
     592For example, the following declaration of a sorted set type ensures the set key supports equality and relational comparison.
    583593\begin{cfa}
    584594forall( otype Key | { _Bool ?==?(Key, Key); _Bool ?<?(Key, Key); } ) struct sorted_set;
     
    586596
    587597
    588 \subsection{Concrete Generic-Types}
    589 
    590 The \CFA translator template-expands concrete generic-types into new structure types, affording maximal inlining.
    591 To enable inter-operation among equivalent instantiations of a generic type, the translator saves the set of instantiations currently in scope and reuses the generated structure declarations where appropriate.
    592 A function declaration that accepts or returns a concrete generic-type produces a declaration for the instantiated structure in the same scope, which all callers may reuse.
    593 For example, the concrete instantiation for @pair( const char *, int )@ is:
     598\subsection{Concrete generic types}
     599
     600The \CFA translator template expands concrete generic types into new structure types, affording maximal inlining.
     601To enable interoperation among equivalent instantiations of a generic type, the translator saves the set of instantiations currently in scope and reuses the generated structure declarations where appropriate.
     602A function declaration that accepts or returns a concrete generic type produces a declaration for the instantiated structure in the same scope, which all callers may reuse.
     603For example, the concrete instantiation for @pair( const char *, int )@ is
    594604\begin{cfa}
    595605struct _pair_conc0 {
     
    598608\end{cfa}
    599609
    600 A concrete generic-type with dtype-static parameters is also expanded to a structure type, but this type is used for all matching instantiations.
    601 In the above example, the @pair( F *, T * )@ parameter to @value@ is such a type; its expansion is below and it is used as the type of the variables @q@ and @r@ as well, with casts for member access where appropriate:
     610A concrete generic type with dtype-static parameters is also expanded to a structure type, but this type is used for all matching instantiations.
     611In the above example, the @pair( F *, T * )@ parameter to @value@ is such a type; its expansion is below, and it is used as the type of the variables @q@ and @r@ as well, with casts for member access where appropriate.
    602612\begin{cfa}
    603613struct _pair_conc1 {
     
    607617
    608618
    609 \subsection{Dynamic Generic-Types}
    610 
    611 Though \CFA implements concrete generic-types efficiently, it also has a fully general system for dynamic generic types.
    612 As mentioned in Section~\ref{sec:poly-fns}, @otype@ function parameters (in fact all @sized@ polymorphic parameters) come with implicit size and alignment parameters provided by the caller.
    613 Dynamic generic-types also have an \newterm{offset array} containing structure-member offsets.
    614 A dynamic generic-@union@ needs no such offset array, as all members are at offset 0, but size and alignment are still necessary.
    615 Access to members of a dynamic structure is provided at runtime via base-displacement addressing with the structure pointer and the member offset (similar to the @offsetof@ macro), moving a compile-time offset calculation to runtime.
     619\subsection{Dynamic generic types}
     620
     621Though \CFA implements concrete generic types efficiently, it also has a fully general system for dynamic generic types.
     622As mentioned in Section~\ref{sec:poly-fns}, @otype@ function parameters (in fact, all @sized@ polymorphic parameters) come with implicit size and alignment parameters provided by the caller.
     623Dynamic generic types also have an \newterm{offset array} containing structure-member offsets.
     624A dynamic generic @union@ needs no such offset array, as all members are at offset 0, but size and alignment are still necessary.
     625Access to members of a dynamic structure is provided at runtime via base displacement addressing
     626% FIX
     627using the structure pointer and the member offset (similar to the @offsetof@ macro), moving a compile-time offset calculation to runtime.
    616628
    617629The offset arrays are statically generated where possible.
    618 If a dynamic generic-type is declared to be passed or returned by value from a polymorphic function, the translator can safely assume the generic type is complete (\ie has a known layout) at any call-site, and the offset array is passed from the caller;
     630If a dynamic generic type is declared to be passed or returned by value from a polymorphic function, the translator can safely assume that the generic type is complete (\ie has a known layout) at any call site, and the offset array is passed from the caller;
    619631if the generic type is concrete at the call site, the elements of this offset array can even be statically generated using the C @offsetof@ macro.
    620 As an example, the body of the second @value@ function is implemented as:
     632As an example, the body of the second @value@ function is implemented as
    621633\begin{cfa}
    622634_assign_T( _retval, p + _offsetof_pair[1] ); $\C{// return *p.second}$
    623635\end{cfa}
    624 @_assign_T@ is passed in as an implicit parameter from @otype T@, and takes two @T *@ (@void *@ in the generated code), a destination and a source; @_retval@ is the pointer to a caller-allocated buffer for the return value, the usual \CFA method to handle dynamically-sized return types.
    625 @_offsetof_pair@ is the offset array passed into @value@; this array is generated at the call site as:
     636\newpage
     637\noindent
      638Here, @_assign_T@ is passed in as an implicit parameter from @otype T@ and takes two @T *@ (@void *@ in the generated code), a destination and a source. @_retval@ is a pointer to a caller-allocated buffer for the return value, the usual \CFA method of handling dynamically sized return types.
     639@_offsetof_pair@ is the offset array passed into @value@;
     640this array is generated at the call site as
    626641\begin{cfa}
    627642size_t _offsetof_pair[] = { offsetof( _pair_conc0, first ), offsetof( _pair_conc0, second ) }
    628643\end{cfa}
    629644
    630 In some cases the offset arrays cannot be statically generated.
    631 For instance, modularity is generally provided in C by including an opaque forward-declaration of a structure and associated accessor and mutator functions in a header file, with the actual implementations in a separately-compiled @.c@ file.
    632 \CFA supports this pattern for generic types, but the caller does not know the actual layout or size of the dynamic generic-type, and only holds it by a pointer.
     645In some cases, the offset arrays cannot be statically generated.
     646For instance, modularity is generally provided in C by including an opaque forward declaration of a structure and associated accessor and mutator functions in a header file, with the actual implementations in a separately compiled @.c@ file.
     647\CFA supports this pattern for generic types, but the caller does not know the actual layout or size of the dynamic generic type and only holds it by a pointer.
    633648The \CFA translator automatically generates \newterm{layout functions} for cases where the size, alignment, and offset array of a generic struct cannot be passed into a function from that function's caller.
    634649These layout functions take as arguments pointers to size and alignment variables and a caller-allocated array of member offsets, as well as the size and alignment of all @sized@ parameters to the generic structure (un@sized@ parameters are forbidden from being used in a context that affects layout).
     
    640655Whether a type is concrete, dtype-static, or dynamic is decided solely on the @forall@'s type parameters.
    641656This design allows opaque forward declarations of generic types, \eg @forall(otype T)@ @struct Box@ -- like in C, all uses of @Box(T)@ can be separately compiled, and callers from other translation units know the proper calling conventions to use.
    642 If the definition of a structure type is included in deciding whether a generic type is dynamic or concrete, some further types may be recognized as dtype-static (\eg @forall(otype T)@ @struct unique_ptr { T * p }@ does not depend on @T@ for its layout, but the existence of an @otype@ parameter means that it \emph{could}.), but preserving separate compilation (and the associated C compatibility) in the existing design is judged to be an appropriate trade-off.
      657If the definition of a structure type is included in deciding whether a generic type is dynamic or concrete, some further types may be recognized as dtype-static (\eg @forall(otype T)@ @struct unique_ptr { T * p }@ does not depend on @T@ for its layout, but the existence of an @otype@ parameter means that it \emph{could});
     658however, preserving separate compilation (and the associated C compatibility) in the existing design is judged to be an appropriate trade-off.
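As an illustration of the layout functions described above, a generated layout function for @pair@ might resemble the following sketch (the name @_layoutof_pair@ and the exact signature are illustrative only).
\begin{cfa}
void _layoutof_pair( size_t * size, size_t * align, size_t * offsets,
		size_t R_size, size_t R_align, size_t S_size, size_t S_align ) {
	offsets[0] = 0;                                         // first member at offset 0
	size_t off = R_size;                                    // pad second member to its alignment
	if ( off % S_align != 0 ) off += S_align - off % S_align;
	offsets[1] = off;
	*align = R_align > S_align ? R_align : S_align;         // strictest member alignment
	size_t sz = off + S_size;                               // pad total size to structure alignment
	if ( sz % *align != 0 ) sz += *align - sz % *align;
	*size = sz;
}
\end{cfa}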
    643659
    644660
     
    653669}
    654670\end{cfa}
    655 Since @pair( T *, T * )@ is a concrete type, there are no implicit parameters passed to @lexcmp@, so the generated code is identical to a function written in standard C using @void *@, yet the \CFA version is type-checked to ensure the members of both pairs and the arguments to the comparison function match in type.
    656 
    657 Another useful pattern enabled by reused dtype-static type instantiations is zero-cost \newterm{tag-structures}.
    658 Sometimes information is only used for type-checking and can be omitted at runtime, \eg:
     671Since @pair( T *, T * )@ is a concrete type, there are no implicit parameters passed to @lexcmp@;
     672hence, the generated code is identical to a function written in standard C using @void *@, yet the \CFA version is type checked to ensure members of both pairs and arguments to the comparison function match in type.
     673
     674Another useful pattern enabled by reused dtype-static type instantiations is zero-cost \newterm{tag structures}.
     675Sometimes, information is only used for type checking and can be omitted at runtime.
    659676\begin{cquote}
    660677\lstDeleteShortInline@%
     
    675692                                                        half_marathon;
    676693scalar(litres) two_pools = pool + pool;
    677 `marathon + pool;`      // ERROR, mismatched types
     694`marathon + pool;` // ERROR, mismatched types
    678695\end{cfa}
    679696\end{tabular}
    680697\lstMakeShortInline@%
    681698\end{cquote}
    682 @scalar@ is a dtype-static type, so all uses have a single structure definition, containing @unsigned long@, and can share the same implementations of common functions like @?+?@.
     699Here, @scalar@ is a dtype-static type;
     700hence, all uses have a single structure definition, containing @unsigned long@, and can share the same implementations of common functions like @?+?@.
    683701These implementations may even be separately compiled, unlike \CC template functions.
    684 However, the \CFA type-checker ensures matching types are used by all calls to @?+?@, preventing nonsensical computations like adding a length to a volume.
     702However, the \CFA type checker ensures matching types are used by all calls to @?+?@, preventing nonsensical computations like adding a length to a volume.
    685703
    686704
     
    688706\label{sec:tuples}
    689707
    690 In many languages, functions can return at most one value;
     708In many languages, functions can return, at most, one value;
    691709however, many operations have multiple outcomes, some exceptional.
    692710Consider C's @div@ and @remquo@ functions, which return the quotient and remainder for a division of integer and float values, respectively.
     
    699717double r = remquo( 13.5, 5.2, &q );                     $\C{// return remainder, alias quotient}$
    700718\end{cfa}
    701 @div@ aggregates the quotient/remainder in a structure, while @remquo@ aliases a parameter to an argument.
     719Here, @div@ aggregates the quotient/remainder in a structure, whereas @remquo@ aliases a parameter to an argument.
    702720Both approaches are awkward.
    703 Alternatively, a programming language can directly support returning multiple values, \eg in \CFA:
     721% FIX
     722Alternatively, a programming language can directly support returning multiple values, \eg \CFA provides the following.
    704723\begin{cfa}
    705724[ int, int ] div( int num, int den );           $\C{// return two integers}$
     
    712731This approach is straightforward to understand and use;
    713732therefore, why do few programming languages support this obvious feature or provide it awkwardly?
    714 To answer, there are complex consequences that cascade through multiple aspects of the language, especially the type-system.
    715 This section show these consequences and how \CFA handles them.
     733To answer, there are complex consequences that cascade through multiple aspects of the language, especially the type system.
     734This section shows these consequences and how \CFA handles them.
    716735
    717736
     718737\subsection{Tuple expressions}
    719738
    720 The addition of multiple-return-value functions (MRVF) are \emph{useless} without a syntax for accepting multiple values at the call-site.
     739The addition of multiple-return-value functions (MRVFs) is \emph{useless} without a syntax for accepting multiple values at the call site.
    721740The simplest mechanism for capturing the return values is variable assignment, allowing the values to be retrieved directly.
    722741As such, \CFA allows assigning multiple values from a function into multiple variables, using a square-bracketed list of lvalue expressions (as above), called a \newterm{tuple}.
    723742
    724 However, functions also use \newterm{composition} (nested calls), with the direct consequence that MRVFs must also support composition to be orthogonal with single-returning-value functions (SRVF), \eg:
     743However, functions also use \newterm{composition} (nested calls), with the direct consequence that MRVFs must also support composition to be orthogonal with single-returning-value functions (SRVFs), \eg, \CFA provides the following.
    725744\begin{cfa}
     726745printf( "%d %d\n", div( 13, 5 ) );                      $\C{// return values separated into arguments}$
    727746\end{cfa}
    728747Here, the values returned by @div@ are composed with the call to @printf@ by flattening the tuple into separate arguments.
    729 However, the \CFA type-system must support significantly more complex composition:
     748However, the \CFA type-system must support significantly more complex composition.
    730749\begin{cfa}
    731750[ int, int ] foo$\(_1\)$( int );                        $\C{// overloaded foo functions}$
     
    734753`bar`( foo( 3 ), foo( 3 ) );
    735754\end{cfa}
    736 The type-resolver only has the tuple return-types to resolve the call to @bar@ as the @foo@ parameters are identical, which involves unifying the possible @foo@ functions with @bar@'s parameter list.
    737 No combination of @foo@s are an exact match with @bar@'s parameters, so the resolver applies C conversions.
     755The type resolver only has the tuple return types to resolve the call to @bar@ as the @foo@ parameters are identical, which involves unifying the possible @foo@ functions with @bar@'s parameter list.
     756No combination of @foo@s is an exact match with @bar@'s parameters;
     757thus, the resolver applies C conversions.
     758% FIX
    738759The minimal cost is @bar( foo@$_1$@( 3 ), foo@$_2$@( 3 ) )@, giving (@int@, {\color{ForestGreen}@int@}, @double@) to (@int@, {\color{ForestGreen}@double@}, @double@) with one {\color{ForestGreen}safe} (widening) conversion from @int@ to @double@ versus ({\color{red}@double@}, {\color{ForestGreen}@int@}, {\color{ForestGreen}@int@}) to ({\color{red}@int@}, {\color{ForestGreen}@double@}, {\color{ForestGreen}@double@}) with one {\color{red}unsafe} (narrowing) conversion from @double@ to @int@ and two safe conversions.
    739760
    740761
    741 \subsection{Tuple Variables}
     762\subsection{Tuple variables}
    742763
    743764An important observation from function composition is that new variable names are not required to initialize parameters from an MRVF.
    744 \CFA also allows declaration of tuple variables that can be initialized from an MRVF, since it can be awkward to declare multiple variables of different types, \eg:
     765\CFA also allows declaration of tuple variables that can be initialized from an MRVF, since it can be awkward to declare multiple variables of different types.
     766\newpage
    745767\begin{cfa}
    746768[ int, int ] qr = div( 13, 5 );                         $\C{// tuple-variable declaration and initialization}$
    747769[ double, double ] qr = div( 13.5, 5.2 );
    748770\end{cfa}
    749 where the tuple variable-name serves the same purpose as the parameter name(s).
     771Here, the tuple variable name serves the same purpose as the parameter name(s).
    750772Tuple variables can be composed of any types, except for array types, since array sizes are generally unknown in C.
    751773
    752 One way to access the tuple-variable components is with assignment or composition:
     774One way to access the tuple variable components is with assignment or composition.
    753775\begin{cfa}
    754776[ q, r ] = qr;                                                          $\C{// access tuple-variable components}$
    755777printf( "%d %d\n", qr );
    756778\end{cfa}
    757 \CFA also supports \newterm{tuple indexing} to access single components of a tuple expression:
     779\CFA also supports \newterm{tuple indexing} to access single components of a tuple expression.
    758780\begin{cfa}
    759781[int, int] * p = &qr;                                           $\C{// tuple pointer}$
     
    766788
    767789
    768 \subsection{Flattening and Restructuring}
     790\subsection{Flattening and restructuring}
    769791
    770792In function call contexts, tuples support implicit flattening and restructuring conversions.
    771793Tuple flattening recursively expands a tuple into the list of its basic components.
    772 Tuple structuring packages a list of expressions into a value of tuple type, \eg:
     794Tuple structuring packages a list of expressions into a value of tuple type.
    773795\begin{cfa}
    774796int f( int, int );
     
    781803h( x, y );                                                                      $\C{// flatten and structure}$
    782804\end{cfa}
    783 In the call to @f@, @x@ is implicitly flattened so the components of @x@ are passed as the two arguments.
     805In the call to @f@, @x@ is implicitly flattened so the components of @x@ are passed as two arguments.
    784806In the call to @g@, the values @y@ and @10@ are structured into a single argument of type @[int, int]@ to match the parameter type of @g@.
    785807Finally, in the call to @h@, @x@ is flattened to yield an argument list of length 3, of which the first component of @x@ is passed as the first parameter of @h@, and the second component of @x@ and @y@ are structured into the second argument of type @[int, int]@.
    786 The flexible structure of tuples permits a simple and expressive function call syntax to work seamlessly with both SRVF and MRVF, and with any number of arguments of arbitrarily complex structure.
    787 
    788 
    789 \subsection{Tuple Assignment}
    790 
      808The flexible structure of tuples permits a simple and expressive function call syntax to work seamlessly with both SRVFs and MRVFs, with any number of arguments of arbitrarily complex structure.
     809
     810
     811\subsection{Tuple assignment}
     812
     813\enlargethispage{-10pt}
    791814An assignment where the left side is a tuple type is called \newterm{tuple assignment}.
    792 There are two kinds of tuple assignment depending on whether the right side of the assignment operator has a tuple type or a non-tuple type, called \newterm{multiple} and \newterm{mass assignment}, respectively.
     815There are two kinds of tuple assignment depending on whether the right side of the assignment operator has a tuple type or a nontuple type, called \newterm{multiple} and \newterm{mass assignment}, respectively.
    793816\begin{cfa}
    794817int x = 10;
     
    800823[y, x] = 3.14;                                                          $\C{// mass assignment}$
    801824\end{cfa}
    802 Both kinds of tuple assignment have parallel semantics, so that each value on the left and right side is evaluated before any assignments occur.
     825Both kinds of tuple assignment have parallel semantics, so that each value on the left and right sides is evaluated before any assignments occur.
    803826As a result, it is possible to swap the values in two variables without explicitly creating any temporary variables or calling a function, \eg, @[x, y] = [y, x]@.
    804827This semantics means mass assignment differs from C cascading assignment (\eg @a = b = c@) in that conversions are applied in each individual assignment, which prevents data loss from the chain of conversions that can happen during a cascading assignment.
    805 For example, @[y, x] = 3.14@ performs the assignments @y = 3.14@ and @x = 3.14@, yielding @y == 3.14@ and @x == 3@;
    806 whereas, C cascading assignment @y = x = 3.14@ performs the assignments @x = 3.14@ and @y = x@, yielding @3@ in @y@ and @x@.
     828For example, @[y, x] = 3.14@ performs the assignments @y = 3.14@ and @x = 3.14@, yielding @y == 3.14@ and @x == 3@, whereas C cascading assignment @y = x = 3.14@ performs the assignments @x = 3.14@ and @y = x@, yielding @3@ in @y@ and @x@.
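Restated in code, using the variables declared above, the contrast with cascading assignment is as follows.
\begin{cfa}
[ y, x ] = 3.14;                                $\C{// mass: y == 3.14, x == 3}$
y = x = 3.14;                                   $\C{// cascading: x == 3, then y == 3.0}$
\end{cfa}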
    807829Finally, tuple assignment is an expression where the result type is the type of the left-hand side of the assignment, just like all other assignment expressions in C.
    808 This example shows mass, multiple, and cascading assignment used in one expression:
     830This example shows mass, multiple, and cascading assignment used in one expression.
    809831\begin{cfa}
    810832[void] f( [int, int] );
     
    813835
    814836
    815 \subsection{Member Access}
    816 
    817 It is also possible to access multiple members from a single expression using a \newterm{member-access}.
    818 The result is a single tuple-valued expression whose type is the tuple of the types of the members, \eg:
     837\subsection{Member access}
     838
     839It is also possible to access multiple members from a single expression using a \newterm{member access}.
     840The result is a single tuple-valued expression whose type is the tuple of the types of the members.
    819841\begin{cfa}
    820842struct S { int x; double y; char * z; } s;
     
     830852[int, int, int] y = x.[2, 0, 2];                        $\C{// duplicate: [y.0, y.1, y.2] = [x.2, x.0, x.2]}$
    831853\end{cfa}
    832 It is also possible for a member access to contain other member accesses, \eg:
     854It is also possible for a member access to contain other member accesses.
    833855\begin{cfa}
    834856struct A { double i; int j; };
     
    897919
    898920Tuples also integrate with \CFA polymorphism as a kind of generic type.
    899 Due to the implicit flattening and structuring conversions involved in argument passing, @otype@ and @dtype@ parameters are restricted to matching only with non-tuple types, \eg:
     921Due to the implicit flattening and structuring conversions involved in argument passing, @otype@ and @dtype@ parameters are restricted to matching only with nontuple types.
    900922\begin{cfa}
    901923forall( otype T, dtype U ) void f( T x, U * y );
    902924f( [5, "hello"] );
    903925\end{cfa}
    904 where @[5, "hello"]@ is flattened, giving argument list @5, "hello"@, and @T@ binds to @int@ and @U@ binds to @const char@.
     926Here, @[5, "hello"]@ is flattened, giving argument list @5, "hello"@, and @T@ binds to @int@ and @U@ binds to @const char@.
    905927Tuples, however, may contain polymorphic components.
    906928For example, a plus operator can be written to sum two triples.
     
    920942g( 5, 10.21 );
    921943\end{cfa}
     944\newpage
     922945Hence, function parameter and return lists are flattened for the purposes of type unification, allowing the example to pass expression resolution.
    923946This relaxation is possible by extending the thunk scheme described by Bilson~\cite{Bilson03}.
     
    930953
    931954
    932 \subsection{Variadic Tuples}
     955\subsection{Variadic tuples}
    933956\label{sec:variadic-tuples}
    934957
    935 To define variadic functions, \CFA adds a new kind of type parameter, @ttype@ (tuple type).
    936 Matching against a @ttype@ parameter consumes all remaining argument components and packages them into a tuple, binding to the resulting tuple of types.
    937 In a given parameter list, there must be at most one @ttype@ parameter that occurs last, which matches normal variadic semantics, with a strong feeling of similarity to \CCeleven variadic templates.
     958To define variadic functions, \CFA adds a new kind of type parameter, \ie @ttype@ (tuple type).
     959Matching against a @ttype@ parameter consumes all the remaining argument components and packages them into a tuple, binding to the resulting tuple of types.
     960In a given parameter list, there must be, at most, one @ttype@ parameter that occurs last, which matches normal variadic semantics, with a strong feeling of similarity to \CCeleven variadic templates.
    938961As such, @ttype@ variables are also called \newterm{argument packs}.
    939962
     
    941964Since nothing is known about a parameter pack by default, assertion parameters are key to doing anything meaningful.
    942965Unlike variadic templates, @ttype@ polymorphic functions can be separately compiled.
    943 For example, a generalized @sum@ function:
     966For example, the following is a generalized @sum@ function.
    944967\begin{cfa}
    945968int sum$\(_0\)$() { return 0; }
     
    950973\end{cfa}
    951974Since @sum@\(_0\) does not accept any arguments, it is not a valid candidate function for the call @sum(10, 20, 30)@.
    952 In order to call @sum@\(_1\), @10@ is matched with @x@, and the argument resolution moves on to the argument pack @rest@, which consumes the remainder of the argument list and @Params@ is bound to @[20, 30]@.
     975In order to call @sum@\(_1\), @10@ is matched with @x@, and the argument resolution moves on to the argument pack @rest@, which consumes the remainder of the argument list, and @Params@ is bound to @[20, 30]@.
    953976The process continues until @Params@ is bound to @[]@, requiring an assertion @int sum()@, which matches @sum@\(_0\) and terminates the recursion.
    954977Effectively, this algorithm traces as @sum(10, 20, 30)@ $\rightarrow$ @10 + sum(20, 30)@ $\rightarrow$ @10 + (20 + sum(30))@ $\rightarrow$ @10 + (20 + (30 + sum()))@ $\rightarrow$ @10 + (20 + (30 + 0))@.
    955978
    956 It is reasonable to take the @sum@ function a step further to enforce a minimum number of arguments:
     979It is reasonable to take the @sum@ function a step further to enforce a minimum number of arguments.
    957980\begin{cfa}
    958981int sum( int x, int y ) { return x + y; }
     
    961984}
    962985\end{cfa}
    963 One more step permits the summation of any sumable type with all arguments of the same type:
     986One more step permits the summation of any sumable type with all arguments of the same type.
    964987\begin{cfa}
    965988trait sumable( otype T ) {
     
    9901013This example showcases a variadic-template-like decomposition of the provided argument list.
    9911014The individual @print@ functions allow printing a single element of a type.
    992 The polymorphic @print@ allows printing any list of types, where as each individual type has a @print@ function.
     1015The polymorphic @print@ allows printing any list of types, where each individual type has a @print@ function.
    9931016The individual print functions can be used to build up more complicated @print@ functions, such as @S@, which cannot be done with @printf@ in C.
    9941017This mechanism is used to seamlessly print tuples in the \CFA I/O library (see Section~\ref{s:IOLibrary}).
    9951018
    9961019Finally, it is possible to use @ttype@ polymorphism to provide arbitrary argument forwarding functions.
    997 For example, it is possible to write @new@ as a library function:
     1020For example, it is possible to write @new@ as a library function.
    9981021\begin{cfa}
    9991022forall( otype R, otype S ) void ?{}( pair(R, S) *, R, S );
     
    10041027\end{cfa}
    10051028The @new@ function provides the combination of type-safe @malloc@ with a \CFA constructor call, making it impossible to forget constructing dynamically allocated objects.
    1006 This function provides the type-safety of @new@ in \CC, without the need to specify the allocated type again, thanks to return-type inference.
     1029This function provides the type safety of @new@ in \CC, without the need to specify the allocated type again, due to return-type inference.
    10071030
    10081031
     
    10101033
    10111034Tuples are implemented in the \CFA translator via a transformation into \newterm{generic types}.
    1012 For each $N$, the first time an $N$-tuple is seen in a scope a generic type with $N$ type parameters is generated, \eg:
     1035For each $N$, the first time an $N$-tuple is seen in a scope, a generic type with $N$ type parameters is generated.
     1036For example, the following
    10131037\begin{cfa}
    10141038[int, int] f() {
     
    10171041}
    10181042\end{cfa}
    1019 is transformed into:
     1043is transformed into
    10201044\begin{cfa}
    10211045forall( dtype T0, dtype T1 | sized(T0) | sized(T1) ) struct _tuple2 {
     
    10831107
    10841108The various kinds of tuple assignment, constructors, and destructors generate GNU C statement expressions.
    1085 A variable is generated to store the value produced by a statement expression, since its members may need to be constructed with a non-trivial constructor and it may need to be referred to multiple time, \eg in a unique expression.
 1109A variable is generated to store the value produced by a statement expression, since its members may need to be constructed with a nontrivial constructor and it may need to be referred to multiple times, \eg in a unique expression.
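For example, a swap via multiple assignment, @[x, y] = [y, x]@, might be translated along the lines of the following sketch (assuming @int@ variables; the temporary names are hypothetical).
\begin{cfa}
({ int _tmp0 = y, _tmp1 = x;            $\C{// evaluate right-hand side into temporaries}$
	x = _tmp0;  y = _tmp1; });              $\C{// assign to left-hand side}$
\end{cfa}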
    10861110The use of statement expressions allows the translator to arbitrarily generate additional temporary variables as needed, but binds the implementation to a non-standard extension of the C language.
    10871111However, there are other places where the \CFA translator makes use of GNU C extensions, such as its use of nested functions, so this restriction is not new.
     
    10911115\section{Control Structures}
    10921116
    1093 \CFA identifies inconsistent, problematic, and missing control structures in C, and extends, modifies, and adds control structures to increase functionality and safety.
    1094 
    1095 
    1096 \subsection{\texorpdfstring{\protect\lstinline{if} Statement}{if Statement}}
    1097 
    1098 The @if@ expression allows declarations, similar to @for@ declaration expression:
     1117\CFA identifies inconsistent, problematic, and missing control structures in C, as well as extends, modifies, and adds control structures to increase functionality and safety.
     1118
     1119
     1120\subsection{\texorpdfstring{\protect\lstinline@if@ statement}{if statement}}
     1121
     1122The @if@ expression allows declarations, similar to the @for@ declaration expression.
    10991123\begin{cfa}
    11001124if ( int x = f() ) ...                                          $\C{// x != 0}$
     
    11031127\end{cfa}
    11041128Unless a relational expression is specified, each variable is compared not equal to 0, which is the standard semantics for the @if@ expression, and the results are combined using the logical @&&@ operator.\footnote{\CC only provides a single declaration always compared not equal to 0.}
    1105 The scope of the declaration(s) is local to the @if@ statement but exist within both the ``then'' and ``else'' clauses.
    1106 
    1107 
    1108 \subsection{\texorpdfstring{\protect\lstinline{switch} Statement}{switch Statement}}
     1129The scope of the declaration(s) is local to the @if@ statement but exists within both the ``then'' and ``else'' clauses.
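For example, the following is a minimal sketch of the scoping, assuming @f@ returns an @int@.
\begin{cfa}
if ( int x = f() ) { ... x ... }                $\C{// x visible in the then clause}$
else { ... x ... }                                      $\C{// x also visible in the else clause}$
// x is not visible here
\end{cfa}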
     1130
     1131
     1132\subsection{\texorpdfstring{\protect\lstinline@switch@ statement}{switch statement}}
    11091133
11101134There are a number of deficiencies with the C @switch@ statement: enumerating @case@ lists, placement of @case@ clauses, scope of the switch body, and fall through between case clauses.
    11111135
    1112 C has no shorthand for specifying a list of case values, whether the list is non-contiguous or contiguous\footnote{C provides this mechanism via fall through.}.
    1113 \CFA provides a shorthand for a non-contiguous list:
     1136C has no shorthand for specifying a list of case values, whether the list is noncontiguous or contiguous\footnote{C provides this mechanism via fall through.}.
     1137\CFA provides a shorthand for a noncontiguous list:
    11141138\begin{cquote}
    11151139\lstDeleteShortInline@%
     
    11261150\lstMakeShortInline@%
    11271151\end{cquote}
    1128 for a contiguous list:\footnote{gcc has the same mechanism but awkward syntax, \lstinline@2 ...42@, as a space is required after a number, otherwise the first period is a decimal point.}
     1152for a contiguous list:\footnote{gcc has the same mechanism but awkward syntax, \lstinline@2 ...42@, as a space is required after a number;
     1153otherwise, the first period is a decimal point.}
    11291154\begin{cquote}
    11301155\lstDeleteShortInline@%
     
    11571182}
    11581183\end{cfa}
    1159 \CFA precludes this form of transfer \emph{into} a control structure because it causes undefined behaviour, especially with respect to missed initialization, and provides very limited functionality.
    1160 
    1161 C allows placement of declaration within the @switch@ body and unreachable code at the start, resulting in undefined behaviour:
 1184\CFA precludes this form of transfer \emph{into} a control structure because it causes undefined behavior, especially with respect to missed initialization, and provides very limited functionality.
     1185
 1186C allows placement of declarations within the @switch@ body and unreachable code at the start, resulting in undefined behavior.
    11621187\begin{cfa}
    11631188switch ( x ) {
     
    11761201
    11771202C @switch@ provides multiple entry points into the statement body, but once an entry point is selected, control continues across \emph{all} @case@ clauses until the end of the @switch@ body, called \newterm{fall through};
    1178 @case@ clauses are made disjoint by the @break@ statement.
     1203@case@ clauses are made disjoint by the @break@
     1204\newpage
     1205\noindent
     1206statement.
    11791207While fall through \emph{is} a useful form of control flow, it does not match well with programmer intuition, resulting in errors from missing @break@ statements.
    1180 For backwards compatibility, \CFA provides a \emph{new} control structure, @choose@, which mimics @switch@, but reverses the meaning of fall through (see Figure~\ref{f:ChooseSwitchStatements}), similar to Go.
     1208For backward compatibility, \CFA provides a \emph{new} control structure, \ie @choose@, which mimics @switch@, but reverses the meaning of fall through (see Figure~\ref{f:ChooseSwitchStatements}), similar to Go.
    11811209
    11821210\begin{figure}
    11831211\centering
     1212\fontsize{9bp}{11bp}\selectfont
    11841213\lstDeleteShortInline@%
    11851214\begin{tabular}{@{}l|@{\hspace{\parindentlnth}}l@{}}
     
    12181247\end{tabular}
    12191248\lstMakeShortInline@%
    1220 \caption{\lstinline|choose| versus \lstinline|switch| Statements}
     1249\caption{\lstinline|choose| versus \lstinline|switch| statements}
    12211250\label{f:ChooseSwitchStatements}
     1251\vspace*{-11pt}
    12221252\end{figure}
    12231253
    1224 Finally, Figure~\ref{f:FallthroughStatement} shows @fallthrough@ may appear in contexts other than terminating a @case@ clause, and have an explicit transfer label allowing separate cases but common final-code for a set of cases.
     1254Finally, Figure~\ref{f:FallthroughStatement} shows @fallthrough@ may appear in contexts other than terminating a @case@ clause and have an explicit transfer label allowing separate cases but common final code for a set of cases.
    12251255The target label must be below the @fallthrough@ and may not be nested in a control structure, \ie @fallthrough@ cannot form a loop, and the target label must be at the same or higher level as the containing @case@ clause and located at the same level as a @case@ clause;
    12261256the target label may be case @default@, but only associated with the current @switch@/@choose@ statement.
     
    12281258\begin{figure}
    12291259\centering
     1260\fontsize{9bp}{11bp}\selectfont
    12301261\lstDeleteShortInline@%
    12311262\begin{tabular}{@{}l|@{\hspace{\parindentlnth}}l@{}}
     
    12561287\end{tabular}
    12571288\lstMakeShortInline@%
    1258 \caption{\lstinline|fallthrough| Statement}
     1289\caption{\lstinline|fallthrough| statement}
    12591290\label{f:FallthroughStatement}
     1291\vspace*{-11pt}
    12601292\end{figure}
    12611293
    12621294
    1263 \subsection{\texorpdfstring{Labelled \protect\lstinline{continue} / \protect\lstinline{break}}{Labelled continue / break}}
     1295\vspace*{-8pt}
     1296\subsection{\texorpdfstring{Labeled \protect\lstinline@continue@ / \protect\lstinline@break@}{Labeled continue / break}}
    12641297
    12651298While C provides @continue@ and @break@ statements for altering control flow, both are restricted to one level of nesting for a particular control structure.
    1266 Unfortunately, this restriction forces programmers to use @goto@ to achieve the equivalent control-flow for more than one level of nesting.
    1267 To prevent having to switch to the @goto@, \CFA extends the @continue@ and @break@ with a target label to support static multi-level exit~\cite{Buhr85}, as in Java.
     1299Unfortunately, this restriction forces programmers to use @goto@ to achieve the equivalent control flow for more than one level of nesting.
     1300To prevent having to switch to the @goto@, \CFA extends @continue@ and @break@ with a target label to support static multilevel exit~\cite{Buhr85}, as in Java.
    12681301For both @continue@ and @break@, the target label must be directly associated with a @for@, @while@ or @do@ statement;
    12691302for @break@, the target label can also be associated with a @switch@, @if@ or compound (@{}@) statement.
    1270 Figure~\ref{f:MultiLevelExit} shows @continue@ and @break@ indicating the specific control structure, and the corresponding C program using only @goto@ and labels.
    1271 The innermost loop has 7 exit points, which cause continuation or termination of one or more of the 7 nested control-structures.
     1303Figure~\ref{f:MultiLevelExit} shows @continue@ and @break@ indicating the specific control structure and the corresponding C program using only @goto@ and labels.
 1304The innermost loop has seven exit points, which cause continuation or termination of one or more of the seven nested control structures.
    12721305
    12731306\begin{figure}
     1307\fontsize{9bp}{11bp}\selectfont
    12741308\lstDeleteShortInline@%
    12751309\begin{tabular}{@{\hspace{\parindentlnth}}l|@{\hspace{\parindentlnth}}l@{\hspace{\parindentlnth}}l@{}}
     
    13361370\end{tabular}
    13371371\lstMakeShortInline@%
    1338 \caption{Multi-level Exit}
     1372\caption{Multilevel exit}
    13391373\label{f:MultiLevelExit}
     1374\vspace*{-5pt}
    13401375\end{figure}
    13411376
    1342 With respect to safety, both labelled @continue@ and @break@ are a @goto@ restricted in the following ways:
    1343 \begin{itemize}
 1377With respect to safety, both labeled @continue@ and @break@ are a @goto@ restricted in the following ways.
     1378\begin{list}{$\bullet$}{\topsep=4pt\itemsep=0pt\parsep=0pt}
    13441379\item
    13451380They cannot create a loop, which means only the looping constructs cause looping.
     
    13471382\item
    13481383They cannot branch into a control structure.
    1349 This restriction prevents missing declarations and/or initializations at the start of a control structure resulting in undefined behaviour.
    1350 \end{itemize}
    1351 The advantage of the labelled @continue@/@break@ is allowing static multi-level exits without having to use the @goto@ statement, and tying control flow to the target control structure rather than an arbitrary point in a program.
    1352 Furthermore, the location of the label at the \emph{beginning} of the target control structure informs the reader (eye candy) that complex control-flow is occurring in the body of the control structure.
 1384This restriction prevents missing declarations and/or initializations at the start of a control structure resulting in undefined behavior.
     1385\end{list}
     1386The advantage of the labeled @continue@/@break@ is allowing static multilevel exits without having to use the @goto@ statement and tying control flow to the target control structure rather than an arbitrary point in a program.
     1387Furthermore, the location of the label at the \emph{beginning} of the target control structure informs the reader (eye candy) that complex control flow is
     1388occurring in the body of the control structure.
    13531389With @goto@, the label is at the end of the control structure, which fails to convey this important clue early enough to the reader.
    1354 Finally, using an explicit target for the transfer instead of an implicit target allows new constructs to be added or removed without affecting existing constructs.
     1390Finally, using an explicit target for the transfer instead of an implicit target allows new constructs to be added or removed without affecting the existing constructs.
    13551391Otherwise, the implicit targets of the current @continue@ and @break@, \ie the closest enclosing loop or @switch@, change as certain constructs are added or removed.
    13561392
    13571393
    1358 \subsection{Exception Handling}
    1359 
    1360 The following framework for \CFA exception-handling is in place, excluding some runtime type-information and virtual functions.
     1394\vspace*{-5pt}
     1395\subsection{Exception handling}
     1396
     1397The following framework for \CFA exception handling is in place, excluding some runtime type information and virtual functions.
    13611398\CFA provides two forms of exception handling: \newterm{fix-up} and \newterm{recovery} (see Figure~\ref{f:CFAExceptionHandling})~\cite{Buhr92b,Buhr00a}.
    1362 Both mechanisms provide dynamic call to a handler using dynamic name-lookup, where fix-up has dynamic return and recovery has static return from the handler.
     1399Both mechanisms provide dynamic call to a handler using dynamic name lookup, where fix-up has dynamic return and recovery has static return from the handler.
    13631400\CFA restricts exception types to those defined by aggregate type @exception@.
    13641401The form of the raise dictates the set of handlers examined during propagation: \newterm{resumption propagation} (@resume@) only examines resumption handlers (@catchResume@); \newterm{terminating propagation} (@throw@) only examines termination handlers (@catch@).
    1365 If @resume@ or @throw@ have no exception type, it is a reresume/rethrow, meaning the currently exception continues propagation.
     1402If @resume@ or @throw@ has no exception type, it is a reresume/rethrow, which means that the current exception continues propagation.
    13661403If there is no current exception, the reresume/rethrow results in a runtime error.
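For example, the following is a minimal rethrow sketch, assuming an exception type @E@.
\begin{cfa}
try {
	...
} catch ( E e ) {
	... throw;                                      $\C{// rethrow, exception E continues propagation}$
}
\end{cfa}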
    13671404
    13681405\begin{figure}
     1406\fontsize{9bp}{11bp}\selectfont
     1407\lstDeleteShortInline@%
    13691408\begin{cquote}
    1370 \lstDeleteShortInline@%
    13711409\begin{tabular}{@{}l|@{\hspace{\parindentlnth}}l@{}}
    13721410\multicolumn{1}{@{}c|@{\hspace{\parindentlnth}}}{\textbf{Resumption}}   & \multicolumn{1}{c@{}}{\textbf{Termination}}   \\
     
    13991437\end{cfa}
    14001438\end{tabular}
    1401 \lstMakeShortInline@%
    14021439\end{cquote}
    1403 \caption{\CFA Exception Handling}
     1440\lstMakeShortInline@%
     1441\caption{\CFA exception handling}
    14041442\label{f:CFAExceptionHandling}
     1443\vspace*{-5pt}
    14051444\end{figure}
    14061445
    1407 The set of exception types in a list of catch clause may include both a resumption and termination handler:
     1446The set of exception types in a list of catch clauses may include both a resumption and a termination handler.
    14081447\begin{cfa}
    14091448try {
     
    14191458The termination handler is available because the resumption propagation did not unwind the stack.
    14201459
    1421 An additional feature is conditional matching in a catch clause:
     1460An additional feature is conditional matching in a catch clause.
    14221461\begin{cfa}
    14231462try {
     
14281467   catch ( IOError err ) { ... }                        $\C{// handle errors from other files}$
    14291468\end{cfa}
    1430 where the throw inserts the failing file-handle into the I/O exception.
    1431 Conditional catch cannot be trivially mimicked by other mechanisms because once an exception is caught, handler clauses in that @try@ statement are no longer eligible..
    1432 
    1433 The resumption raise can specify an alternate stack on which to raise an exception, called a \newterm{nonlocal raise}:
     1469Here, the throw inserts the failing file handle into the I/O exception.
     1470Conditional catch cannot be trivially mimicked by other mechanisms because once an exception is caught, handler clauses in that @try@ statement are no longer eligible.
     1471
     1472The resumption raise can specify an alternate stack on which to raise an exception, called a \newterm{nonlocal raise}.
    14341473\begin{cfa}
    14351474resume( $\emph{exception-type}$, $\emph{alternate-stack}$ )
     
    14391478Nonlocal raise is restricted to resumption to provide the exception handler the greatest flexibility because processing the exception does not unwind its stack, allowing it to continue after the handler returns.
    14401479
    1441 To facilitate nonlocal raise, \CFA provides dynamic enabling and disabling of nonlocal exception-propagation.
    1442 The constructs for controlling propagation of nonlocal exceptions are the @enable@ and the @disable@ blocks:
     1480To facilitate nonlocal raise, \CFA provides dynamic enabling and disabling of nonlocal exception propagation.
     1481The constructs for controlling propagation of nonlocal exceptions are the @enable@ and @disable@ blocks.
    14431482\begin{cquote}
    14441483\lstDeleteShortInline@%
     
    14461485\begin{cfa}
    14471486enable $\emph{exception-type-list}$ {
    1448         // allow non-local raise
     1487        // allow nonlocal raise
    14491488}
    14501489\end{cfa}
     
    14521491\begin{cfa}
    14531492disable $\emph{exception-type-list}$ {
    1454         // disallow non-local raise
     1493        // disallow nonlocal raise
    14551494}
    14561495\end{cfa}
     
    14601499The arguments for @enable@/@disable@ specify the exception types allowed to be propagated or postponed, respectively.
    14611500Specifying no exception type is shorthand for specifying all exception types.
    1462 Both @enable@ and @disable@ blocks can be nested, turning propagation on/off on entry, and on exit, the specified exception types are restored to their prior state.
    1463 Coroutines and tasks start with non-local exceptions disabled, allowing handlers to be put in place, before non-local exceptions are explicitly enabled.
 1501Both @enable@ and @disable@ blocks can be nested, turning propagation on/off on entry;
 1502on exit, the specified exception types are restored to their prior state.
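For example, the following is a sketch of nesting, assuming an exception type @E1@.
\begin{cfa}
enable {                                                $\C{// all exception types allowed}$
	disable E1 {                            $\C{// E1 postponed, other types still allowed}$
		...
	}                                               $\C{// E1 restored to allowed}$
}
\end{cfa}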
     1503Coroutines and tasks start with nonlocal exceptions disabled, allowing handlers to be put in place, before nonlocal exceptions are explicitly enabled.
    14641504\begin{cfa}
    14651505void main( mytask & t ) {                                       $\C{// thread starts here}$
    1466         // non-local exceptions disabled
    1467         try {                                                                   $\C{// establish handles for non-local exceptions}$
    1468                 enable {                                                        $\C{// allow non-local exception delivery}$
     1506        // nonlocal exceptions disabled
     1507        try {                                                                   $\C{// establish handles for nonlocal exceptions}$
     1508                enable {                                                        $\C{// allow nonlocal exception delivery}$
    14691509                        // task body
    14701510                }
     
    14741514\end{cfa}
    14751515
    1476 Finally, \CFA provides a Java like  @finally@ clause after the catch clauses:
     1516Finally, \CFA provides a Java-like  @finally@ clause after the catch clauses.
    14771517\begin{cfa}
    14781518try {
     
    14831523}
    14841524\end{cfa}
    1485 The finally clause is always executed, i.e., if the try block ends normally or if an exception is raised.
 1525The finally clause is always executed, \ie whether the try block ends normally or an exception is raised.
    14861526If an exception is raised and caught, the handler is run before the finally clause.
    14871527Like a destructor (see Section~\ref{s:ConstructorsDestructors}), a finally clause can raise an exception but not if there is an exception being propagated.
    1488 Mimicking the @finally@ clause with mechanisms like RAII is non-trivial when there are multiple types and local accesses.
    1489 
    1490 
    1491 \subsection{\texorpdfstring{\protect\lstinline{with} Statement}{with Statement}}
 1528Mimicking the @finally@ clause with mechanisms like Resource Acquisition Is Initialization (RAII) is nontrivial when there are multiple types and local accesses.
     1529
     1530
     1531\subsection{\texorpdfstring{\protect\lstinline{with} statement}{with statement}}
    14921532\label{s:WithStatement}
    14931533
    1494 Heterogeneous data is often aggregated into a structure/union.
    1495 To reduce syntactic noise, \CFA provides a @with@ statement (see Pascal~\cite[\S~4.F]{Pascal}) to elide aggregate member-qualification by opening a scope containing the member identifiers.
     1534Heterogeneous data are often aggregated into a structure/union.
     1535To reduce syntactic noise, \CFA provides a @with@ statement (see section~4.F in the Pascal User Manual and Report~\cite{Pascal}) to elide aggregate member qualification by opening a scope containing the member identifiers.
    14961536\begin{cquote}
    14971537\vspace*{-\baselineskip}%???
     
    15211561Object-oriented programming languages only provide implicit qualification for the receiver.
    15221562
    1523 In detail, the @with@ statement has the form:
     1563In detail, the @with@ statement has the form
    15241564\begin{cfa}
    15251565$\emph{with-statement}$:
     
    15271567\end{cfa}
    15281568and may appear as the body of a function or nested within a function body.
    1529 Each expression in the expression-list provides a type and object.
     1569Each expression in the expression list provides a type and object.
    15301570The type must be an aggregate type.
    15311571(Enumerations are already opened.)
    1532 The object is the implicit qualifier for the open structure-members.
     1572The object is the implicit qualifier for the open structure members.
    15331573
15341574All expressions in the expression list are open in parallel within the compound statement, unlike Pascal, which nests the openings from left to right.
    1535 The difference between parallel and nesting occurs for members with the same name and type:
     1575The difference between parallel and nesting occurs for members with the same name and type.
    15361576\begin{cfa}
    15371577struct S { int `i`; int j; double m; } s, w;    $\C{// member i has same type in structure types S and T}$
     
    15471587}
    15481588\end{cfa}
    1549 For parallel semantics, both @s.i@ and @t.i@ are visible, so @i@ is ambiguous without qualification;
    1550 for nested semantics, @t.i@ hides @s.i@, so @i@ implies @t.i@.
     1589For parallel semantics, both @s.i@ and @t.i@ are visible and, therefore, @i@ is ambiguous without qualification;
     1590for nested semantics, @t.i@ hides @s.i@ and, therefore, @i@ implies @t.i@.
    15511591\CFA's ability to overload variables means members with the same name but different types are automatically disambiguated, eliminating most qualification when opening multiple aggregates.
    15521592Qualification or a cast is used to disambiguate.
    15531593
    1554 There is an interesting problem between parameters and the function-body @with@, \eg:
     1594There is an interesting problem between parameters and the function body @with@.
    15551595\begin{cfa}
    15561596void ?{}( S & s, int i ) with ( s ) {           $\C{// constructor}$
     
    15581598}
    15591599\end{cfa}
    1560 Here, the assignment @s.i = i@ means @s.i = s.i@, which is meaningless, and there is no mechanism to qualify the parameter @i@, making the assignment impossible using the function-body @with@.
    1561 To solve this problem, parameters are treated like an initialized aggregate:
     1600Here, the assignment @s.i = i@ means @s.i = s.i@, which is meaningless, and there is no mechanism to qualify the parameter @i@, making the assignment impossible using the function body @with@.
     1601To solve this problem, parameters are treated like an initialized aggregate
    15621602\begin{cfa}
    15631603struct Params {
     
    15661606} params;
    15671607\end{cfa}
    1568 and implicitly opened \emph{after} a function-body open, to give them higher priority:
     1608\newpage
     1609and implicitly opened \emph{after} a function body open, to give them higher priority
    15691610\begin{cfa}
    15701611void ?{}( S & s, int `i` ) with ( s ) `{` `with( $\emph{\color{red}params}$ )` {
     
    15721613} `}`
    15731614\end{cfa}
    1574 Finally, a cast may be used to disambiguate among overload variables in a @with@ expression:
     1615Finally, a cast may be used to disambiguate among overload variables in a @with@ expression
    15751616\begin{cfa}
    15761617with ( w ) { ... }                                                      $\C{// ambiguous, same name and no context}$
    15771618with ( (S)w ) { ... }                                           $\C{// unambiguous, cast}$
    15781619\end{cfa}
    1579 and @with@ expressions may be complex expressions with type reference (see Section~\ref{s:References}) to aggregate:
     1620and @with@ expressions may be complex expressions with type reference (see Section~\ref{s:References}) to aggregate
    15801621\begin{cfa}
    15811622struct S { int i, j; } sv;
     
    16011642\CFA attempts to correct and add to C declarations, while ensuring \CFA subjectively ``feels like'' C.
    16021643An important part of this subjective feel is maintaining C's syntax and procedural paradigm, as opposed to functional and object-oriented approaches in other systems languages such as \CC and Rust.
    1603 Maintaining the C approach means that C coding-patterns remain not only useable but idiomatic in \CFA, reducing the mental burden of retraining C programmers and switching between C and \CFA development.
 1644Maintaining the C approach means that C coding patterns remain not only usable but idiomatic in \CFA, reducing the mental burden of retraining C programmers and switching between C and \CFA development.
    16041645Nevertheless, some features from other approaches are undeniably convenient;
    16051646\CFA attempts to adapt these features to the C paradigm.
    16061647
    16071648
    1608 \subsection{Alternative Declaration Syntax}
     1649\subsection{Alternative declaration syntax}
    16091650
    16101651C declaration syntax is notoriously confusing and error prone.
    1611 For example, many C programmers are confused by a declaration as simple as:
     1652For example, many C programmers are confused by a declaration as simple as the following.
    16121653\begin{cquote}
    16131654\lstDeleteShortInline@%
     
    16211662\lstMakeShortInline@%
    16221663\end{cquote}
    1623 Is this an array of 5 pointers to integers or a pointer to an array of 5 integers?
     1664Is this an array of five pointers to integers or a pointer to an array of five integers?
    16241665If there is any doubt, it implies productivity and safety issues even for basic programs.
    16251666Another example of confusion results from the fact that a function name and its parameters are embedded within the return type, mimicking the way the return value is used at the function's call site.
    1626 For example, a function returning a pointer to an array of integers is defined and used in the following way:
     1667For example, a function returning a pointer to an array of integers is defined and used in the following way.
    16271668\begin{cfa}
    16281669int `(*`f`())[`5`]` {...};                                      $\C{// definition}$
     
    16321673While attempting to make the two contexts consistent is a laudable goal, it has not worked out in practice.
    16331674
    1634 \CFA provides its own type, variable and function declarations, using a different syntax~\cite[pp.~856--859]{Buhr94a}.
    1635 The new declarations place qualifiers to the left of the base type, while C declarations place qualifiers to the right.
     1675\newpage
     1676\CFA provides its own type, variable, and function declarations, using a different syntax~\cite[pp.~856--859]{Buhr94a}.
     1677The new declarations place qualifiers to the left of the base type, whereas C declarations place qualifiers to the right.
    16361678The qualifiers have the same meaning but are ordered left to right to specify a variable's type.
    16371679\begin{cquote}
     
    16591701\lstMakeShortInline@%
    16601702\end{cquote}
    1661 The only exception is bit-field specification, which always appear to the right of the base type.
     1703The only exception is bit-field specification, which always appears to the right of the base type.
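For example, a minimal sketch, where the width specification stays on the right of the base type in both syntaxes.
\begin{cfa}
struct Flags { int dirty : 1; int locked : 1; };        $\C{// widths to the right of the base type}$
\end{cfa}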
    16621704% Specifically, the character @*@ is used to indicate a pointer, square brackets @[@\,@]@ are used to represent an array or function return value, and parentheses @()@ are used to indicate a function parameter.
    16631705However, unlike C, \CFA type declaration tokens are distributed across all variables in the declaration list.
    1664 For instance, variables @x@ and @y@ of type pointer to integer are defined in \CFA as follows:
     1706For instance, variables @x@ and @y@ of type pointer to integer are defined in \CFA as
    16651707\begin{cquote}
    16661708\lstDeleteShortInline@%
     
    17251767\end{comment}
    17261768
    1727 All specifiers (@extern@, @static@, \etc) and qualifiers (@const@, @volatile@, \etc) are used in the normal way with the new declarations and also appear left to right, \eg:
     1769All specifiers (@extern@, @static@, \etc) and qualifiers (@const@, @volatile@, \etc) are used in the normal way with the new declarations and also appear left to right.
    17281770\begin{cquote}
    17291771\lstDeleteShortInline@%
    17301772\begin{tabular}{@{}l@{\hspace{2\parindentlnth}}l@{\hspace{2\parindentlnth}}l@{}}
    17311773\multicolumn{1}{@{}c@{\hspace{2\parindentlnth}}}{\textbf{\CFA}} & \multicolumn{1}{c@{\hspace{2\parindentlnth}}}{\textbf{C}}     \\
    1732 \begin{cfa}
     1774\begin{cfa}[basicstyle=\linespread{0.9}\fontsize{9bp}{12bp}\selectfont\sf]
    17331775extern const * const int x;
    17341776static const * [5] const int y;
    17351777\end{cfa}
    17361778&
    1737 \begin{cfa}
     1779\begin{cfa}[basicstyle=\linespread{0.9}\fontsize{9bp}{12bp}\selectfont\sf]
    17381780int extern const * const x;
17391781static const int (* const y)[5];
    17401782\end{cfa}
    17411783&
    1742 \begin{cfa}
     1784\begin{cfa}[basicstyle=\linespread{0.9}\fontsize{9bp}{12bp}\selectfont\sf]
    17431785// external const pointer to const int
    17441786// internal const pointer to array of 5 const int
     
    17481790\end{cquote}
    17491791Specifiers must appear at the start of a \CFA function declaration\footnote{\label{StorageClassSpecifier}
    1750 The placement of a storage-class specifier other than at the beginning of the declaration specifiers in a declaration is an obsolescent feature.~\cite[\S~6.11.5(1)]{C11}}.
     1792The placement of a storage-class specifier other than at the beginning of the declaration specifiers in a declaration is an obsolescent feature (see section~6.11.5(1) in ISO/IEC 9899~\cite{C11}).}.
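For example, the following is a sketch with a hypothetical function @g@, where the storage-class specifier leads the declaration.
\begin{cfa}
static [ int ] g( int x );                              $\C{// storage-class specifier at the start}$
\end{cfa}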
    17511793
    17521794The new declaration syntax can be used in other contexts where types are required, \eg casts and the pseudo-function @sizeof@:
     
    17691811
    17701812The syntax of the new function-prototype declaration follows directly from the new function-definition syntax;
    1771 as well, parameter names are optional, \eg:
     1813also, parameter names are optional.
    17721814\begin{cfa}
    17731815[ int x ] f ( /* void */ );             $\C[2.5in]{// returning int with no parameters}$
     
    17771819[ * int, int ] j ( int );               $\C{// returning pointer to int and int with int parameter}$
    17781820\end{cfa}
    1779 This syntax allows a prototype declaration to be created by cutting and pasting source text from the function-definition header (or vice versa).
    1780 Like C, it is possible to declare multiple function-prototypes in a single declaration, where the return type is distributed across \emph{all} function names in the declaration list, \eg:
     1821This syntax allows a prototype declaration to be created by cutting and pasting the source text from the function-definition header (or vice versa).
     1822Like C, it is possible to declare multiple function prototypes in a single declaration, where the return type is distributed across \emph{all} function names in the declaration list.
    17811823\begin{cquote}
    17821824\lstDeleteShortInline@%
     
    17931835\lstMakeShortInline@%
    17941836\end{cquote}
    1795 where \CFA allows the last function in the list to define its body.
    1796 
    1797 The syntax for pointers to \CFA functions specifies the pointer name on the right, \eg:
     1837Here, \CFA allows the last function in the list to define its body.
     1838
     1839The syntax for pointers to \CFA functions specifies the pointer name on the right.
    17981840\begin{cfa}
    17991841* [ int x ] () fp;                              $\C{// pointer to function returning int with no parameters}$
     
    18021844* [ * int, int ] ( int ) jp;    $\C{// pointer to function returning pointer to int and int with int parameter}\CRT$
    18031845\end{cfa}
    1804 Note, the name of the function pointer is specified last, as for other variable declarations.
    1805 
    1806 Finally, new \CFA declarations may appear together with C declarations in the same program block, but cannot be mixed within a specific declaration.
    1807 Therefore, a programmer has the option of either continuing to use traditional C declarations or take advantage of the new style.
    1808 Clearly, both styles need to be supported for some time due to existing C-style header-files, particularly for UNIX-like systems.
     1846\newpage
     1847\noindent
     1848Note that the name of the function pointer is specified last, as for other variable declarations.
     1849
     1850Finally, new \CFA declarations may appear together with C declarations in the same program block but cannot be mixed within a specific declaration.
     1851Therefore, a programmer has the option of either continuing to use traditional C declarations or taking advantage of the new style.
     1852Clearly, both styles need to be supported for some time due to existing C-style header files, particularly for UNIX-like systems.
    18091853
    18101854
     
    18141858All variables in C have an \newterm{address}, a \newterm{value}, and a \newterm{type};
    18151859at the position in the program's memory denoted by the address, there exists a sequence of bits (the value), with the length and semantic meaning of this bit sequence defined by the type.
    1816 The C type-system does not always track the relationship between a value and its address;
    1817 a value that does not have a corresponding address is called a \newterm{rvalue} (for ``right-hand value''), while a value that does have an address is called a \newterm{lvalue} (for ``left-hand value'').
    1818 For example, in @int x; x = 42;@ the variable expression @x@ on the left-hand-side of the assignment is a lvalue, while the constant expression @42@ on the right-hand-side of the assignment is a rvalue.
    1819 Despite the nomenclature of ``left-hand'' and ``right-hand'', an expression's classification as lvalue or rvalue is entirely dependent on whether it has an address or not; in imperative programming, the address of a value is used for both reading and writing (mutating) a value, and as such, lvalues can be converted to rvalues and read from, but rvalues cannot be mutated because they lack a location to store the updated value.
     1860The C type system does not always track the relationship between a value and its address;
     1861a value that does not have a corresponding address is called an \newterm{rvalue} (for ``right-hand value''), whereas a value that does have an address is called an \newterm{lvalue} (for ``left-hand value'').
     1862For example, in @int x; x = 42;@ the variable expression @x@ on the left-hand side of the assignment is an lvalue, whereas the constant expression @42@ on the right-hand side of the assignment is an rvalue.
     1863Despite the nomenclature of ``left-hand'' and ``right-hand'', an expression's classification as an lvalue or an rvalue is entirely dependent on whether it has an address or not; in imperative programming, the address of a value is used for both reading and writing (mutating) a value, and as such, lvalues can be converted into rvalues and read from, but rvalues cannot be mutated because they lack a location to store the updated value.
    18201864
    18211865Within a lexical scope, lvalue expressions have an \newterm{address interpretation} for writing a value or a \newterm{value interpretation} to read a value.
    1822 For example, in @x = y@, @x@ has an address interpretation, while @y@ has a value interpretation.
     1866For example, in @x = y@, @x@ has an address interpretation, whereas @y@ has a value interpretation.
    18231867While this duality of interpretation is useful, C lacks a direct mechanism to pass lvalues between contexts, instead relying on \newterm{pointer types} to serve a similar purpose.
    18241868In C, for any type @T@ there is a pointer type @T *@, the value of which is the address of a value of type @T@.
    1825 A pointer rvalue can be explicitly \newterm{dereferenced} to the pointed-to lvalue with the dereference operator @*?@, while the rvalue representing the address of a lvalue can be obtained with the address-of operator @&?@.
    1826 
     1869A pointer rvalue can be explicitly \newterm{dereferenced} to the pointed-to lvalue with the dereference operator @*?@, whereas the rvalue representing the address of an lvalue can be obtained with the address-of operator @&?@.
    18271870\begin{cfa}
    18281871int x = 1, y = 2, * p1, * p2, ** p3;
     
    18321875*p2 = ((*p1 + *p2) * (**p3 - *p1)) / (**p3 - 15);
    18331876\end{cfa}
    1834 
    18351877Unfortunately, the dereference and address-of operators introduce a great deal of syntactic noise when dealing with pointed-to values rather than pointers, as well as the potential for subtle bugs because of pointer arithmetic.
    18361878For both brevity and clarity, it is desirable for the compiler to figure out how to elide the dereference operators in a complex expression such as the assignment to @*p2@ above.
    1837 However, since C defines a number of forms of \newterm{pointer arithmetic}, two similar expressions involving pointers to arithmetic types (\eg @*p1 + x@ and @p1 + x@) may each have well-defined but distinct semantics, introducing the possibility that a programmer may write one when they mean the other, and precluding any simple algorithm for elision of dereference operators.
     1879However, since C defines a number of forms of \newterm{pointer arithmetic}, two similar expressions involving pointers to arithmetic types (\eg @*p1 + x@ and @p1 + x@) may each have well-defined but distinct semantics, introducing the possibility that a programmer may write one when they mean the other and precluding any simple algorithm for elision of dereference operators.
    18381880To solve these problems, \CFA introduces reference types @T &@;
    1839 a @T &@ has exactly the same value as a @T *@, but where the @T *@ takes the address interpretation by default, a @T &@ takes the value interpretation by default, as below:
    1840 
     1881a @T &@ has exactly the same value as a @T *@, but where the @T *@ takes the address interpretation by default, a @T &@ takes the value interpretation by default, as below.
    18411882\begin{cfa}
    18421883int x = 1, y = 2, & r1, & r2, && r3;
     
    18461887r2 = ((r1 + r2) * (r3 - r1)) / (r3 - 15);       $\C{// implicit dereferencing}$
    18471888\end{cfa}
    1848 
    18491889Except for auto-dereferencing by the compiler, this reference example is exactly the same as the previous pointer example.
    1850 Hence, a reference behaves like a variable name -- an lvalue expression which is interpreted as a value -- but also has the type system track the address of that value.
    1851 One way to conceptualize a reference is via a rewrite rule, where the compiler inserts a dereference operator before the reference variable for each reference qualifier in the reference variable declaration, so the previous example implicitly acts like:
    1852 
     1890Hence, a reference behaves like a variable name---an lvalue expression that is interpreted as a value---but also has the type system track the address of that value.
     1891One way to conceptualize a reference is via a rewrite rule, where the compiler inserts a dereference operator before the reference variable for each reference qualifier in the reference variable declaration;
     1892thus, the previous example implicitly acts like the following.
    18531893\begin{cfa}
    18541894`*`r2 = ((`*`r1 + `*`r2) * (`**`r3 - `*`r1)) / (`**`r3 - 15);
    18551895\end{cfa}
    1856 
    18571896References in \CFA are similar to those in \CC, with important improvements, which can be seen in the example above.
    18581897Firstly, \CFA does not forbid references to references.
    1859 This provides a much more orthogonal design for library implementors, obviating the need for workarounds such as @std::reference_wrapper@.
     1898This provides a much more orthogonal design for library \mbox{implementors}, obviating the need for workarounds such as @std::reference_wrapper@.
    18601899Secondly, \CFA references are rebindable, whereas \CC references have a fixed address.
    1861 Rebinding allows \CFA references to be default-initialized (\eg to a null pointer\footnote{
    1862 While effort has been made into non-null reference checking in \CC and Java, the exercise seems moot for any non-managed languages (C/\CC), given that it only handles one of many different error situations, \eg using a pointer after its storage is deleted.}) and point to different addresses throughout their lifetime, like pointers.
     1900Rebinding allows \CFA references to be default initialized (\eg to a null pointer\footnote{
     1901While effort has been made into non-null reference checking in \CC and Java, the exercise seems moot for any nonmanaged languages (C/\CC), given that it only handles one of many different error situations, \eg using a pointer after its storage is deleted.}) and point to different addresses throughout their lifetime, like pointers.
    18631902Rebinding is accomplished by extending the existing syntax and semantics of the address-of operator in C.
    18641903
    1865 In C, the address of a lvalue is always a rvalue, as in general that address is not stored anywhere in memory, and does not itself have an address.
    1866 In \CFA, the address of a @T &@ is a lvalue @T *@, as the address of the underlying @T@ is stored in the reference, and can thus be mutated there.
     1904In C, the address of an lvalue is always an rvalue, as, in general, that address is not stored anywhere in memory and does not itself have an address.
     1905In \CFA, the address of a @T &@ is an lvalue @T *@, as the address of the underlying @T@ is stored in the reference and can thus be mutated there.
    18671906The result of this rule is that any reference can be rebound using the existing pointer assignment semantics by assigning a compatible pointer into the address of the reference, \eg @&r1 = &x;@ above.
    18681907This rebinding occurs to an arbitrary depth of reference nesting;
    18691908loosely speaking, nested address-of operators produce a nested lvalue pointer up to the depth of the reference.
    18701909These explicit address-of operators can be thought of as ``cancelling out'' the implicit dereference operators, \eg @(&`*`)r1 = &x@ or @(&(&`*`)`*`)r3 = &(&`*`)r1@ or even @(&`*`)r2 = (&`*`)`*`r3@ for @&r2 = &r3@.
    1871 More precisely:
     1910The precise rules are
    18721911\begin{itemize}
    18731912\item
    1874 if @R@ is an rvalue of type {@T &@$_1 \cdots$@ &@$_r$} where $r \ge 1$ references (@&@ symbols) then @&R@ has type {@T `*`&@$_{\color{red}2} \cdots$@ &@$_{\color{red}r}$}, \\ \ie @T@ pointer with $r-1$ references (@&@ symbols).
    1875        
     1913If @R@ is an rvalue of type @T &@$_1\cdots$ @&@$_r$, where $r \ge 1$ references (@&@ symbols), than @&R@ has type @T `*`&@$_{\color{red}2}\cdots$ @&@$_{\color{red}r}$, \ie @T@ pointer with $r-1$ references (@&@ symbols).
    18761914\item
    1877 if @L@ is an lvalue of type {@T &@$_1 \cdots$@ &@$_l$} where $l \ge 0$ references (@&@ symbols) then @&L@ has type {@T `*`&@$_{\color{red}1} \cdots$@ &@$_{\color{red}l}$}, \\ \ie @T@ pointer with $l$ references (@&@ symbols).
     1915If @L@ is an lvalue of type @T &@$_1\cdots$ @&@$_l$, where $l \ge 0$ references (@&@ symbols), than @&L@ has type @T `*`&@$_{\color{red}1}\cdots$ @&@$_{\color{red}l}$, \ie @T@ pointer with $l$ references (@&@ symbols).
    18781916\end{itemize}
    1879 Since pointers and references share the same internal representation, code using either is equally performant; in fact the \CFA compiler converts references to pointers internally, and the choice between them is made solely on convenience, \eg many pointer or value accesses.
     1917Since pointers and references share the same internal representation, code using either is equally performant;
     1918in fact, the \CFA compiler converts references into pointers internally, and the choice between them is made solely on convenience, \eg many pointer or value accesses.
    18801919
    18811920By analogy to pointers, \CFA references also allow cv-qualifiers such as @const@:
     
    18921931There are three initialization contexts in \CFA: declaration initialization, argument/parameter binding, and return/temporary binding.
    18931932In each of these contexts, the address-of operator on the target lvalue is elided.
    1894 The syntactic motivation is clearest when considering overloaded operator-assignment, \eg @int ?+=?(int &, int)@; given @int x, y@, the expected call syntax is @x += y@, not @&x += y@.
    1895 
    1896 More generally, this initialization of references from lvalues rather than pointers is an instance of a ``lvalue-to-reference'' conversion rather than an elision of the address-of operator;
     1933The syntactic motivation is clearest when considering overloaded operator assignment, \eg @int ?+=?(int &, int)@; given @int x, y@, the expected call syntax is @x += y@, not @&x += y@.
     1934
     1935More generally, this initialization of references from lvalues rather than pointers is an instance of an ``lvalue-to-reference'' conversion rather than an elision of the address-of operator;
    18971936this conversion is used in any context in \CFA where an implicit conversion is allowed.
    1898 Similarly, use of a the value pointed to by a reference in an rvalue context can be thought of as a ``reference-to-rvalue'' conversion, and \CFA also includes a qualifier-adding ``reference-to-reference'' conversion, analogous to the @T *@ to @const T *@ conversion in standard C.
    1899 The final reference conversion included in \CFA is ``rvalue-to-reference'' conversion, implemented by means of an implicit temporary.
     1937Similarly, use of the value pointed to by a reference in an rvalue context can be thought of as a ``reference-to-rvalue'' conversion, and \CFA also includes a qualifier-adding ``reference-to-reference'' conversion, analogous to the @T *@ to @const T *@ conversion in standard C.
     1938The final reference conversion included in \CFA is an ``rvalue-to-reference'' conversion, implemented by means of an implicit temporary.
    19001939When an rvalue is used to initialize a reference, it is instead used to initialize a hidden temporary value with the same lexical scope as the reference, and the reference is initialized to the address of this temporary.
    19011940\begin{cfa}
     
    19051944f( 3, x + y, (S){ 1.0, 7.0 }, (int [3]){ 1, 2, 3 } ); $\C{// pass rvalue to lvalue \(\Rightarrow\) implicit temporary}$
    19061945\end{cfa}
    1907 This allows complex values to be succinctly and efficiently passed to functions, without the syntactic overhead of explicit definition of a temporary variable or the runtime cost of pass-by-value.
    1908 \CC allows a similar binding, but only for @const@ references; the more general semantics of \CFA are an attempt to avoid the \newterm{const poisoning} problem~\cite{Taylor10}, in which addition of a @const@ qualifier to one reference requires a cascading chain of added qualifiers.
    1909 
    1910 
    1911 \subsection{Type Nesting}
    1912 
    1913 Nested types provide a mechanism to organize associated types and refactor a subset of members into a named aggregate (\eg sub-aggregates @name@, @address@, @department@, within aggregate @employe@).
    1914 Java nested types are dynamic (apply to objects), \CC are static (apply to the \lstinline[language=C++]@class@), and C hoists (refactors) nested types into the enclosing scope, meaning there is no need for type qualification.
    1915 Since \CFA in not object-oriented, adopting dynamic scoping does not make sense;
    1916 instead \CFA adopts \CC static nesting, using the member-selection operator ``@.@'' for type qualification, as does Java, rather than the \CC type-selection operator ``@::@'' (see Figure~\ref{f:TypeNestingQualification}).
     1946This allows complex values to be succinctly and efficiently passed to functions, without the syntactic overhead of the explicit definition of a temporary variable or the runtime cost of pass-by-value.
     1947\CC allows a similar binding, but only for @const@ references; the more general semantics of \CFA are an attempt to avoid the \newterm{const poisoning} problem~\cite{Taylor10}, in which the addition of a @const@ qualifier to one reference requires a cascading chain of added qualifiers.
     1948
     1949
     1950\subsection{Type nesting}
     1951
     1952Nested types provide a mechanism to organize associated types and refactor a subset of members into a named aggregate (\eg subaggregates @name@, @address@, @department@, within aggregate @employe@).
     1953Java nested types are dynamic (apply to objects), \CC are static (apply to the \lstinline[language=C++]@class@), and C hoists (refactors) nested types into the enclosing scope, which means there is no need for type qualification.
     1954Since \CFA in not object oriented, adopting dynamic scoping does not make sense;
     1955instead, \CFA adopts \CC static nesting, using the member-selection operator ``@.@'' for type qualification, as does Java, rather than the \CC type-selection operator ``@::@'' (see Figure~\ref{f:TypeNestingQualification}).
     1956In the C left example, types @C@, @U@ and @T@ are implicitly hoisted outside of type @S@ into the containing block scope.
     1957In the \CFA right example, the types are not hoisted and accessible.
     1958
    19171959\begin{figure}
    19181960\centering
     1961\fontsize{9bp}{11bp}\selectfont\sf
    19191962\lstDeleteShortInline@%
    19201963\begin{tabular}{@{}l@{\hspace{3em}}l|l@{}}
     
    19782021\end{tabular}
    19792022\lstMakeShortInline@%
    1980 \caption{Type Nesting / Qualification}
     2023\caption{Type nesting / qualification}
    19812024\label{f:TypeNestingQualification}
     2025\vspace*{-8pt}
    19822026\end{figure}
    1983 In the C left example, types @C@, @U@ and @T@ are implicitly hoisted outside of type @S@ into the containing block scope.
    1984 In the \CFA right example, the types are not hoisted and accessible.
    1985 
    1986 
    1987 \subsection{Constructors and Destructors}
     2027
     2028
     2029\vspace*{-8pt}
     2030\subsection{Constructors and destructors}
    19882031\label{s:ConstructorsDestructors}
    19892032
    1990 One of the strengths (and weaknesses) of C is memory-management control, allowing resource release to be precisely specified versus unknown release with garbage-collected memory-management.
     2033One of the strengths (and weaknesses) of C is memory-management control, allowing resource release to be precisely specified versus unknown release with garbage-collected memory management.
    19912034However, this manual approach is verbose, and it is useful to manage resources other than memory (\eg file handles) using the same mechanism as memory.
    1992 \CC addresses these issues using Resource Aquisition Is Initialization (RAII), implemented by means of \newterm{constructor} and \newterm{destructor} functions;
     2035\CC addresses these issues using RAII, implemented by means of \newterm{constructor} and \newterm{destructor} functions;
    19932036\CFA adopts constructors and destructors (and @finally@) to facilitate RAII.
    1994 While constructors and destructors are a common feature of object-oriented programming-languages, they are an independent capability allowing \CFA to adopt them while retaining a procedural paradigm.
    1995 Specifically, \CFA constructors and destructors are denoted by name and first parameter-type versus name and nesting in an aggregate type.
     2037While constructors and destructors are a common feature of object-oriented programming languages, they are an independent capability allowing \CFA to adopt them while retaining a procedural paradigm.
     2038Specifically, \CFA constructors and destructors are denoted by name and first parameter type versus name and nesting in an aggregate type.
    19962039Constructor calls seamlessly integrate with existing C initialization syntax, providing a simple and familiar syntax to C programmers and allowing constructor calls to be inserted into legacy C code with minimal code changes.
    19972040
     
    20022045The constructor and destructor have return type @void@, and the first parameter is a reference to the object type to be constructed or destructed.
    20032046While the first parameter is informally called the @this@ parameter, as in object-oriented languages, any variable name may be used.
    2004 Both constructors and destructors allow additional parameters after the @this@ parameter for specifying values for initialization/de-initialization\footnote{
    2005 Destruction parameters are useful for specifying storage-management actions, such as de-initialize but not deallocate.}.
    2006 \begin{cfa}
     2047Both constructors and destructors allow additional parameters after the @this@ parameter for specifying values for initialization/deinitialization\footnote{
     2048Destruction parameters are useful for specifying storage-management actions, such as deinitialize but not deallocate.}.
     2049\begin{cfa}[basicstyle=\linespread{0.9}\fontsize{9bp}{11bp}\selectfont\sf]
    20072050struct VLA { int size, * data; };                       $\C{// variable length array of integers}$
    20082051void ?{}( VLA & vla ) with ( vla ) { size = 10;  data = alloc( size ); }  $\C{// default constructor}$
     
    20132056\end{cfa}
    20142057@VLA@ is a \newterm{managed type}\footnote{
    2015 A managed type affects the runtime environment versus a self-contained type.}: a type requiring a non-trivial constructor or destructor, or with a member of a managed type.
     2058A managed type affects the runtime environment versus a self-contained type.}: a type requiring a nontrivial constructor or destructor, or with a member of a managed type.
    20162059A managed type is implicitly constructed at allocation and destructed at deallocation to ensure proper interaction with runtime resources, in this case, the @data@ array in the heap.
    2017 For details of the code-generation placement of implicit constructor and destructor calls among complex executable statements see~\cite[\S~2.2]{Schluntz17}.
    2018 
    2019 \CFA also provides syntax for \newterm{initialization} and \newterm{copy}:
      2060For details of the code-generation placement of implicit constructor and destructor calls among complex executable statements, see section~2.2 in the work of Schluntz~\cite{Schluntz17}.
     2061
     2062\CFA also provides syntax for \newterm{initialization} and \newterm{copy}.
    20202063\begin{cfa}
    20212064void ?{}( VLA & vla, int size, char fill = '\0' ) {  $\C{// initialization}$
     
    20262069}
    20272070\end{cfa}
    2028 (Note, the example is purposely simplified using shallow-copy semantics.)
    2029 An initialization constructor-call has the same syntax as a C initializer, except the initialization values are passed as arguments to a matching constructor (number and type of paremeters).
     2071(Note that the example is purposely simplified using shallow-copy semantics.)
     2072An initialization constructor call has the same syntax as a C initializer, except that the initialization values are passed as arguments to a matching constructor (number and type of parameters).
    20302073\begin{cfa}
    20312074VLA va = `{` 20, 0 `}`,  * arr = alloc()`{` 5, 0 `}`;
    20322075\end{cfa}
    2033 Note, the use of a \newterm{constructor expression} to initialize the storage from the dynamic storage-allocation.
     2076Note the use of a \newterm{constructor expression} to initialize the storage from the dynamic storage allocation.
    20342077Like \CC, the copy constructor has two parameters, the second of which is a value parameter with the same type as the first parameter;
    20352078appropriate care is taken to not recursively call the copy constructor when initializing the second parameter.
     
    20372080\CFA constructors may be explicitly called, like Java, and destructors may be explicitly called, like \CC.
    20382081Explicit calls to constructors double as a \CC-style \emph{placement syntax}, useful for construction of members in user-defined constructors and reuse of existing storage allocations.
    2039 Like the other operators in \CFA, there is a concise syntax for constructor/destructor function calls:
     2082Like the other operators in \CFA, there is a concise syntax for constructor/destructor function calls.
    20402083\begin{cfa}
    20412084{
     
    20532096To provide a uniform type interface for @otype@ polymorphism, the \CFA compiler automatically generates a default constructor, copy constructor, assignment operator, and destructor for all types.
    20542097These default functions can be overridden by user-generated versions.
    2055 For compatibility with the standard behaviour of C, the default constructor and destructor for all basic, pointer, and reference types do nothing, while the copy constructor and assignment operator are bitwise copies;
    2056 if default zero-initialization is desired, the default constructors can be overridden.
     2098For compatibility with the standard behavior of C, the default constructor and destructor for all basic, pointer, and reference types do nothing, whereas the copy constructor and assignment operator are bitwise copies;
     2099if default zero initialization is desired, the default constructors can be overridden.
    20572100For user-generated types, the four functions are also automatically generated.
    20582101@enum@ types are handled the same as their underlying integral type, and unions are also bitwise copied and no-op initialized and destructed.
    20592102For compatibility with C, a copy constructor from the first union member type is also defined.
    2060 For @struct@ types, each of the four functions are implicitly defined to call their corresponding functions on each member of the struct.
    2061 To better simulate the behaviour of C initializers, a set of \newterm{member constructors} is also generated for structures.
    2062 A constructor is generated for each non-empty prefix of a structure's member-list to copy-construct the members passed as parameters and default-construct the remaining members.
     2103For @struct@ types, each of the four functions is implicitly defined to call their corresponding functions on each member of the struct.
     2104To better simulate the behavior of C initializers, a set of \newterm{member constructors} is also generated for structures.
     2105A constructor is generated for each nonempty prefix of a structure's member list to copy-construct the members passed as parameters and default-construct the remaining members.
    20632106To allow users to limit the set of constructors available for a type, when a user declares any constructor or destructor, the corresponding generated function and all member constructors for that type are hidden from expression resolution;
    2064 similarly, the generated default constructor is hidden upon declaration of any constructor.
     2107similarly, the generated default constructor is hidden upon the declaration of any constructor.
    20652108These semantics closely mirror the rule for implicit declaration of constructors in \CC\cite[p.~186]{ANSI98:C++}.
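As an illustration of the generation rule (a sketch; the structure and calls are assumptions):
\begin{cfa}
struct Pair { int x, y; };  $\C{// no user-defined constructors}$
// generated member constructors (conceptually):
//   void ?{}( Pair & p, int x );  // copy-construct x, default-construct y
//   void ?{}( Pair & p, int x, int y );  // copy-construct both members
Pair p1 = { 3 };  $\C{// member constructor for prefix x}$
Pair p2 = { 3, 4 };  $\C{// member constructor for prefix x, y}$
\end{cfa}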
    20662109
    2067 In some circumstance programmers may not wish to have implicit constructor and destructor generation and calls.
    2068 In these cases, \CFA provides the initialization syntax \lstinline|S x `@=` {}|, and the object becomes unmanaged, so implicit constructor and destructor calls are not generated.
      2110In some circumstances, programmers may not wish to have implicit constructor and destructor generation and calls.
     2111In these cases, \CFA provides the initialization syntax \lstinline|S x `@=` {}|, and the object becomes unmanaged;
     2112hence, implicit \mbox{constructor} and destructor calls are not generated.
    20692113Any C initializer can be the right-hand side of an \lstinline|@=| initializer, \eg \lstinline|VLA a @= { 0, 0x0 }|, with the usual C initialization semantics.
    20702114The same syntax can be used in a compound literal, \eg \lstinline|a = (VLA)`@`{ 0, 0x0 }|, to create a C-style literal.
    2071 The point of \lstinline|@=| is to provide a migration path from legacy C code to \CFA, by providing a mechanism to incrementally convert to implicit initialization.
     2115The point of \lstinline|@=| is to provide a migration path from legacy C code to \CFA, by providing a mechanism to incrementally convert into implicit initialization.
    20722116
    20732117
     
    20772121\section{Literals}
    20782122
    2079 C already includes limited polymorphism for literals -- @0@ can be either an integer or a pointer literal, depending on context, while the syntactic forms of literals of the various integer and float types are very similar, differing from each other only in suffix.
    2080 In keeping with the general \CFA approach of adding features while respecting the ``C-style'' of doing things, C's polymorphic constants and typed literal syntax are extended to interoperate with user-defined types, while maintaining a backwards-compatible semantics.
     2123C already includes limited polymorphism for literals---@0@ can be either an integer or a pointer literal, depending on context, whereas the syntactic forms of literals of the various integer and float types are very similar, differing from each other only in suffix.
     2124In keeping with the general \CFA approach of adding features while respecting the ``C style'' of doing things, C's polymorphic constants and typed literal syntax are extended to interoperate with user-defined types, while maintaining a backward-compatible semantics.
    20812125
    20822126A simple example is allowing the underscore, as in Ada, to separate prefixes, digits, and suffixes in all \CFA constants, \eg @0x`_`1.ffff`_`ffff`_`p`_`128`_`l@, where the underscore is also the standard separator in C identifiers.
    2083 \CC uses a single quote as a separator but it is restricted among digits, precluding its use in the literal prefix or suffix, \eg @0x1.ffff@@`'@@ffffp128l@, and causes problems with most IDEs, which must be extended to deal with this alternate use of the single quote.
      2127\CC uses a single quote as a separator, but it is restricted to positions between digits, precluding its use in the literal prefix or suffix, \eg @0x1.ffff@@`'@@ffffp128l@, and causes problems with most integrated development environments (IDEs), which must be extended to deal with this alternate use of the single quote.
    20842128
    20852129
     
    21242168
    21252169In C, @0@ has the special property that it is the only ``false'' value;
    2126 by the standard, any value that compares equal to @0@ is false, while any value that compares unequal to @0@ is true.
    2127 As such, an expression @x@ in any boolean context (such as the condition of an @if@ or @while@ statement, or the arguments to @&&@, @||@, or @?:@\,) can be rewritten as @x != 0@ without changing its semantics.
     2170by the standard, any value that compares equal to @0@ is false, whereas any value that compares unequal to @0@ is true.
     2171As such, an expression @x@ in any Boolean context (such as the condition of an @if@ or @while@ statement, or the arguments to @&&@, @||@, or @?:@\,) can be rewritten as @x != 0@ without changing its semantics.
    21282172Operator overloading in \CFA provides a natural means to implement this truth-value comparison for arbitrary types, but the C type system is not precise enough to distinguish an equality comparison with @0@ from an equality comparison with an arbitrary integer or pointer.
     21292173To provide this precision, \CFA introduces a new type @zero_t@ as the type of literal @0@ (somewhat analogous to @nullptr_t@ and @nullptr@ in \CCeleven);
     
    21312175With this addition, \CFA rewrites @if (x)@ and similar expressions to @if ( (x) != 0 )@ or the appropriate analogue, and any type @T@ is ``truthy'' by defining an operator overload @int ?!=?( T, zero_t )@.
    21322176\CC makes types truthy by adding a conversion to @bool@;
    2133 prior to the addition of explicit cast operators in \CCeleven, this approach had the pitfall of making truthy types transitively convertable to any numeric type;
     2177prior to the addition of explicit cast operators in \CCeleven, this approach had the pitfall of making truthy types transitively convertible into any numeric type;
    21342178\CFA avoids this issue.
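For example, a user-defined type can be made truthy as follows (a sketch; the type and its member are assumptions):
\begin{cfa}
struct Stack { int count; };
int ?!=?( Stack s, zero_t ) { return s.count != 0; }  $\C{// Stack is now truthy}$
Stack s = { 2 };
if ( s ) s.count -= 1;  $\C{// rewritten by \CFA as if ( s != 0 )}$
\end{cfa}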
    21352179
     
    21422186
    21432187
    2144 \subsection{User Literals}
     2188\subsection{User literals}
    21452189
    21462190For readability, it is useful to associate units to scale literals, \eg weight (stone, pound, kilogram) or time (seconds, minutes, hours).
    2147 The left of Figure~\ref{f:UserLiteral} shows the \CFA alternative call-syntax (postfix: literal argument before function name), using the backquote, to convert basic literals into user literals.
     2191The left of Figure~\ref{f:UserLiteral} shows the \CFA alternative call syntax (postfix: literal argument before function name), using the backquote, to convert basic literals into user literals.
    21482192The backquote is a small character, making the unit (function name) predominate.
    2149 For examples, the multi-precision integer-type in Section~\ref{s:MultiPrecisionIntegers} has user literals:
      2193For example, the multiprecision integer type in Section~\ref{s:MultiPrecisionIntegers} has the following user literals.
    21502194{\lstset{language=CFA,moredelim=**[is][\color{red}]{|}{|},deletedelim=**[is][]{`}{`}}
    21512195\begin{cfa}
     
    21532197y = "12345678901234567890123456789"|`mp| + "12345678901234567890123456789"|`mp|;
    21542198\end{cfa}
    2155 Because \CFA uses a standard function, all types and literals are applicable, as well as overloading and conversions, where @?`@ denotes a postfix-function name and @`@ denotes a postfix-function call.
     2199Because \CFA uses a standard function, all types and literals are applicable, as well as overloading and conversions, where @?`@ denotes a postfix-function name and @`@  denotes a postfix-function call.
    21562200}%
    21572201\begin{cquote}
     
    21952239\end{cquote}
    21962240
    2197 The right of Figure~\ref{f:UserLiteral} shows the equivalent \CC version using the underscore for the call-syntax.
     2241The right of Figure~\ref{f:UserLiteral} shows the equivalent \CC version using the underscore for the call syntax.
     21982242However, \CC restricts the types, \eg @unsigned long long int@ and @long double@, to represent integral and floating literals.
     21992243Thereafter, user literals must match these types exactly (no conversions are applied);
     
    22022246\begin{figure}
    22032247\centering
     2248\fontsize{9bp}{11bp}\selectfont
    22042249\lstset{language=CFA,moredelim=**[is][\color{red}]{|}{|},deletedelim=**[is][]{`}{`}}
    22052250\lstDeleteShortInline@%
     
    22572302\end{tabular}
    22582303\lstMakeShortInline@%
    2259 \caption{User Literal}
     2304\caption{User literal}
    22602305\label{f:UserLiteral}
    22612306\end{figure}
     
    22652310\label{sec:libraries}
    22662311
    2267 As stated in Section~\ref{sec:poly-fns}, \CFA inherits a large corpus of library code, where other programming languages must rewrite or provide fragile inter-language communication with C.
     2312As stated in Section~\ref{sec:poly-fns}, \CFA inherits a large corpus of library code, where other programming languages must rewrite or provide fragile interlanguage communication with C.
    22682313\CFA has replacement libraries condensing hundreds of existing C names into tens of \CFA overloaded names, all without rewriting the actual computations.
    2269 In many cases, the interface is an inline wrapper providing overloading during compilation but zero cost at runtime.
      2314In many cases, the interface is an inline wrapper providing overloading during compilation with zero cost at runtime.
     22702315The following sections give a glimpse of the interface reduction for many C libraries.
    22712316In many cases, @signed@/@unsigned@ @char@, @short@, and @_Complex@ functions are available (but not shown) to ensure expression computations remain in a single type, as conversions can distort results.
     
    22752320
    22762321C library @limits.h@ provides lower and upper bound constants for the basic types.
    2277 \CFA name overloading is used to condense these typed constants, \eg:
     2322\CFA name overloading is used to condense these typed constants.
    22782323\begin{cquote}
    22792324\lstDeleteShortInline@%
     
    22942339\lstMakeShortInline@%
    22952340\end{cquote}
    2296 The result is a significant reduction in names to access typed constants, \eg:
     2341The result is a significant reduction in names to access typed constants.
    22972342\begin{cquote}
    22982343\lstDeleteShortInline@%
     
    23202365
    23212366C library @math.h@ provides many mathematical functions.
    2322 \CFA function overloading is used to condense these mathematical functions, \eg:
     2367\CFA function overloading is used to condense these mathematical functions.
    23232368\begin{cquote}
    23242369\lstDeleteShortInline@%
     
    23392384\lstMakeShortInline@%
    23402385\end{cquote}
    2341 The result is a significant reduction in names to access math functions, \eg:
     2386The result is a significant reduction in names to access math functions.
    23422387\begin{cquote}
    23432388\lstDeleteShortInline@%
     
    23582403\lstMakeShortInline@%
    23592404\end{cquote}
    2360 While \Celeven has type-generic math~\cite[\S~7.25]{C11} in @tgmath.h@ to provide a similar mechanism, these macros are limited, matching a function name with a single set of floating type(s).
      2405While \Celeven has type-generic math (see section~7.25 of ISO/IEC 9899~\cite{C11}) in @tgmath.h@ to provide a similar mechanism, these macros are limited, matching a function name with a single set of floating type(s).
    23612406For example, it is impossible to overload @atan@ for both one and two arguments;
    2362 instead the names @atan@ and @atan2@ are required (see Section~\ref{s:NameOverloading}).
    2363 The key observation is that only a restricted set of type-generic macros are provided for a limited set of function names, which do not generalize across the type system, as in \CFA.
     2407instead, the names @atan@ and @atan2@ are required (see Section~\ref{s:NameOverloading}).
     2408The key observation is that only a restricted set of type-generic macros is provided for a limited set of function names, which do not generalize across the type system, as in \CFA.
    23642409
    23652410
     
    23672412
    23682413C library @stdlib.h@ provides many general functions.
    2369 \CFA function overloading is used to condense these utility functions, \eg:
     2414\CFA function overloading is used to condense these utility functions.
    23702415\begin{cquote}
    23712416\lstDeleteShortInline@%
     
    23862431\lstMakeShortInline@%
    23872432\end{cquote}
    2388 The result is a significant reduction in names to access utility functions, \eg:
     2433The result is a significant reduction in names to access the utility functions.
    23892434\begin{cquote}
    23902435\lstDeleteShortInline@%
     
    24052450\lstMakeShortInline@%
    24062451\end{cquote}
    2407 In additon, there are polymorphic functions, like @min@ and @max@, that work on any type with operators @?<?@ or @?>?@.
     2452In addition, there are polymorphic functions, like @min@ and @max@, that work on any type with operator @?<?@ or @?>?@.
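A sketch of how such a polymorphic function can be declared using an explicit trait assertion (the definition is illustrative, not the library source):
\begin{cfa}
forall( otype T | { int ?<?( T, T ); } )
T min( T t1, T t2 ) { return t1 < t2 ? t1 : t2; }  $\C{// any type with <}$
int i = min( 3, 4 );  $\C{// T bound to int}$
double d = min( 3.5, 2.1 );  $\C{// T bound to double}$
\end{cfa}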
    24082453
    24092454The following shows one example where \CFA \emph{extends} an existing standard C interface to reduce complexity and provide safety.
    2410 C/\Celeven provide a number of complex and overlapping storage-management operation to support the following capabilities:
    2411 \begin{description}%[topsep=3pt,itemsep=2pt,parsep=0pt]
     2455C/\Celeven provide a number of complex and overlapping storage-management operations to support the following capabilities.
     2456\begin{list}{}{\itemsep=0pt\parsep=0pt\labelwidth=0pt\leftmargin\parindent\itemindent-\leftmargin\let\makelabel\descriptionlabel}
    24122457\item[fill]
    24132458an allocation with a specified character.
    24142459\item[resize]
    24152460an existing allocation to decrease or increase its size.
    2416 In either case, new storage may or may not be allocated and, if there is a new allocation, as much data from the existing allocation is copied.
      2461In either case, new storage may or may not be allocated, and if there is a new allocation, as much data as possible from the existing allocation is copied.
    24172462For an increase in storage size, new storage after the copied data may be filled.
     2463\newpage
    24182464\item[align]
    24192465an allocation on a specified memory boundary, \eg, an address multiple of 64 or 128 for cache-line purposes.
     
    24212467allocation with a specified number of elements.
    24222468An array may be filled, resized, or aligned.
    2423 \end{description}
    2424 Table~\ref{t:StorageManagementOperations} shows the capabilities provided by C/\Celeven allocation-functions and how all the capabilities can be combined into two \CFA functions.
    2425 \CFA storage-management functions extend the C equivalents by overloading, providing shallow type-safety, and removing the need to specify the base allocation-size.
    2426 Figure~\ref{f:StorageAllocation} contrasts \CFA and C storage-allocation performing the same operations with the same type safety.
     2469\end{list}
     2470Table~\ref{t:StorageManagementOperations} shows the capabilities provided by C/\Celeven allocation functions and how all the capabilities can be combined into two \CFA functions.
     2471\CFA storage-management functions extend the C equivalents by overloading, providing shallow type safety, and removing the need to specify the base allocation size.
     2472Figure~\ref{f:StorageAllocation} contrasts \CFA and C storage allocation performing the same operations with the same type safety.
    24272473
    24282474\begin{table}
    2429 \caption{Storage-Management Operations}
     2475\caption{Storage-management operations}
    24302476\label{t:StorageManagementOperations}
    24312477\centering
    24322478\lstDeleteShortInline@%
    24332479\lstMakeShortInline~%
    2434 \begin{tabular}{@{}r|r|l|l|l|l@{}}
    2435 \multicolumn{1}{c}{}&           & \multicolumn{1}{c|}{fill}     & resize        & align & array \\
    2436 \hline
     2480\begin{tabular}{@{}rrllll@{}}
     2481\multicolumn{1}{c}{}&           & \multicolumn{1}{c}{fill}      & resize        & align & array \\
    24372482C               & ~malloc~                      & no                    & no            & no            & no    \\
    24382483                & ~calloc~                      & yes (0 only)  & no            & no            & yes   \\
     
    24402485                & ~memalign~            & no                    & no            & yes           & no    \\
    24412486                & ~posix_memalign~      & no                    & no            & yes           & no    \\
    2442 \hline
    24432487C11             & ~aligned_alloc~       & no                    & no            & yes           & no    \\
    2444 \hline
    24452488\CFA    & ~alloc~                       & yes/copy              & no/yes        & no            & yes   \\
    24462489                & ~align_alloc~         & yes                   & no            & yes           & yes   \\
     
    24522495\begin{figure}
    24532496\centering
     2497\fontsize{9bp}{11bp}\selectfont
    24542498\begin{cfa}[aboveskip=0pt,xleftmargin=0pt]
    24552499size_t  dim = 10;                                                       $\C{// array dimension}$
     
    24892533\end{tabular}
    24902534\lstMakeShortInline@%
    2491 \caption{\CFA versus C Storage-Allocation}
     2535\caption{\CFA versus C storage allocation}
    24922536\label{f:StorageAllocation}
    24932537\end{figure}
    24942538
    24952539Variadic @new@ (see Section~\ref{sec:variadic-tuples}) cannot support the same overloading because extra parameters are for initialization.
    2496 Hence, there are @new@ and @anew@ functions for single and array variables, and the fill value is the arguments to the constructor, \eg:
      2540Hence, there are @new@ and @anew@ functions for single and array variables, where the fill values are the arguments to the constructor.
    24972541\begin{cfa}
    24982542struct S { int i, j; };
     
    25012545S * as = anew( dim, 2, 3 );                                     $\C{// each array element initialized to 2, 3}$
    25022546\end{cfa}
    2503 Note, \CC can only initialize array elements via the default constructor.
    2504 
    2505 Finally, the \CFA memory-allocator has \newterm{sticky properties} for dynamic storage: fill and alignment are remembered with an object's storage in the heap.
     2547Note that \CC can only initialize array elements via the default constructor.
     2548
     2549Finally, the \CFA memory allocator has \newterm{sticky properties} for dynamic storage: fill and alignment are remembered with an object's storage in the heap.
    25062550When a @realloc@ is performed, the sticky properties are respected, so that new storage is correctly aligned and initialized with the fill character.
    25072551
     
    25102554\label{s:IOLibrary}
    25112555
    2512 The goal of \CFA I/O is to simplify the common cases, while fully supporting polymorphism and user defined types in a consistent way.
     2556The goal of \CFA I/O is to simplify the common cases, while fully supporting polymorphism and user-defined types in a consistent way.
    25132557The approach combines ideas from \CC and Python.
    25142558The \CFA header file for the I/O library is @fstream@.
     
    25392583\lstMakeShortInline@%
    25402584\end{cquote}
    2541 The \CFA form has half the characters of the \CC form, and is similar to Python I/O with respect to implicit separators.
     2585The \CFA form has half the characters of the \CC form and is similar to Python I/O with respect to implicit separators.
    25422586Similar simplification occurs for tuple I/O, which prints all tuple values separated by ``\lstinline[showspaces=true]@, @''.
    25432587\begin{cfa}
     
    25722616\lstMakeShortInline@%
    25732617\end{cquote}
    2574 There is a weak similarity between the \CFA logical-or operator and the Shell pipe-operator for moving data, where data flows in the correct direction for input but the opposite direction for output.
     2618There is a weak similarity between the \CFA logical-or operator and the Shell pipe operator for moving data, where data flow in the correct direction for input but in the opposite direction for output.
    25752619\begin{comment}
    25762620The implicit separator character (space/blank) is a separator not a terminator.
     
    25932637\end{itemize}
    25942638\end{comment}
    2595 There are functions to set and get the separator string, and manipulators to toggle separation on and off in the middle of output.
    2596 
    2597 
    2598 \subsection{Multi-precision Integers}
     2639There are functions to set and get the separator string and manipulators to toggle separation on and off in the middle of output.
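A brief sketch of the implicit-separator style (the declarations and the @endl@ manipulator are assumptions for illustration):
\begin{cfa}
#include <fstream>  $\C{// \CFA I/O library}$
int x = 1, y = 2, z = 3;
sout | x | y | z | endl;  $\C{// implicit separators, prints "1 2 3"}$
\end{cfa}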
     2640
     2641
     2642\subsection{Multiprecision integers}
    25992643\label{s:MultiPrecisionIntegers}
    26002644
    2601 \CFA has an interface to the GMP multi-precision signed-integers~\cite{GMP}, similar to the \CC interface provided by GMP.
    2602 The \CFA interface wraps GMP functions into operator functions to make programming with multi-precision integers identical to using fixed-sized integers.
    2603 The \CFA type name for multi-precision signed-integers is @Int@ and the header file is @gmp@.
    2604 Figure~\ref{f:GMPInterface} shows a multi-precision factorial-program contrasting the GMP interface in \CFA and C.
    2605 
    2606 \begin{figure}
     2645\CFA has an interface to the GNU multiple precision (GMP) signed integers~\cite{GMP}, similar to the \CC interface provided by GMP.
     2646The \CFA interface wraps GMP functions into operator functions to make programming with multiprecision integers identical to using fixed-sized integers.
     2647The \CFA type name for multiprecision signed integers is @Int@ and the header file is @gmp@.
     2648Figure~\ref{f:GMPInterface} shows a multiprecision factorial program contrasting the GMP interface in \CFA and C.
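As a sketch of the \CFA side (assuming the operator wrappers described; the figure gives the full \CFA/C contrast):
\begin{cfa}
#include <gmp>  $\C{// multiprecision Int}$
int main() {
	Int fact = 1;
	for ( unsigned int i = 2; i <= 40; i += 1 ) fact *= i;  $\C{// overloaded *= for Int}$
	sout | fact | endl;  $\C{// overloaded I/O for Int}$
}
\end{cfa}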
     2649
     2650\begin{figure}[b]
    26072651\centering
     2652\fontsize{9bp}{11bp}\selectfont
    26082653\lstDeleteShortInline@%
    26092654\begin{tabular}{@{}l@{\hspace{3\parindentlnth}}l@{}}
     
    26362681\end{tabular}
    26372682\lstMakeShortInline@%
    2638 \caption{GMP Interface \CFA versus C}
     2683\caption{GMP interface \CFA versus C}
    26392684\label{f:GMPInterface}
    26402685\end{figure}
    26412686
    26422687
     2688\vspace{-4pt}
    26432689\section{Polymorphism Evaluation}
    26442690\label{sec:eval}
     
    26492695% Though \CFA provides significant added functionality over C, these features have a low runtime penalty.
    26502696% In fact, it is shown that \CFA's generic programming can enable faster runtime execution than idiomatic @void *@-based C code.
    2651 The experiment is a set of generic-stack micro-benchmarks~\cite{CFAStackEvaluation} in C, \CFA, and \CC (see implementations in Appendix~\ref{sec:BenchmarkStackImplementations}).
     2697The experiment is a set of generic-stack microbenchmarks~\cite{CFAStackEvaluation} in C, \CFA, and \CC (see implementations in Appendix~\ref{sec:BenchmarkStackImplementations}).
    26522698Since all these languages share a subset essentially comprising standard C, maximal-performance benchmarks should show little runtime variance, differing only in length and clarity of source code.
    26532699A more illustrative comparison measures the costs of idiomatic usage of each language's features.
    2654 Figure~\ref{fig:BenchmarkTest} shows the \CFA benchmark tests for a generic stack based on a singly linked-list.
     2700Figure~\ref{fig:BenchmarkTest} shows the \CFA benchmark tests for a generic stack based on a singly linked list.
    26552701The benchmark test is similar for the other languages.
    26562702The experiment uses element types @int@ and @pair(short, char)@, and pushes $N=40M$ elements on a generic stack, copies the stack, clears one of the stacks, and finds the maximum value in the other stack.
    26572703
    26582704\begin{figure}
     2705\fontsize{9bp}{11bp}\selectfont
    26592706\begin{cfa}[xleftmargin=3\parindentlnth,aboveskip=0pt,belowskip=0pt]
    26602707int main() {
     
    26762723}
    26772724\end{cfa}
    2678 \caption{\protect\CFA Benchmark Test}
     2725\caption{\protect\CFA benchmark test}
    26792726\label{fig:BenchmarkTest}
     2727\vspace*{-10pt}
    26802728\end{figure}
    26812729
    2682 The structure of each benchmark implemented is: C with @void *@-based polymorphism, \CFA with parametric polymorphism, \CC with templates, and \CC using only class inheritance for polymorphism, called \CCV.
     2730The structure of each benchmark implemented is C with @void *@-based polymorphism, \CFA with parametric polymorphism, \CC with templates, and \CC using only class inheritance for polymorphism, called \CCV.
    26832731The \CCV variant illustrates an alternative object-oriented idiom where all objects inherit from a base @object@ class, mimicking a Java-like interface;
    2684 hence runtime checks are necessary to safely down-cast objects.
    2685 The most notable difference among the implementations is in memory layout of generic types: \CFA and \CC inline the stack and pair elements into corresponding list and pair nodes, while C and \CCV lack such a capability and instead must store generic objects via pointers to separately-allocated objects.
    2686 Note, the C benchmark uses unchecked casts as C has no runtime mechanism to perform such checks, while \CFA and \CC provide type-safety statically.
     2732hence, runtime checks are necessary to safely downcast objects.
     2733The most notable difference among the implementations is in memory layout of generic types: \CFA and \CC inline the stack and pair elements into corresponding list and pair nodes, whereas C and \CCV lack such capability and, instead, must store generic objects via pointers to separately allocated objects.
     2734Note that the C benchmark uses unchecked casts as C has no runtime mechanism to perform such checks, whereas \CFA and \CC provide type safety statically.
    26872735
    26882736Figure~\ref{fig:eval} and Table~\ref{tab:eval} show the results of running the benchmark in Figure~\ref{fig:BenchmarkTest} and its C, \CC, and \CCV equivalents.
    2689 The graph plots the median of 5 consecutive runs of each program, with an initial warm-up run omitted.
     2737The graph plots the median of five consecutive runs of each program, with an initial warm-up run omitted.
    26902738All code is compiled at \texttt{-O2} by gcc or g++ 6.4.0, with all \CC code compiled as \CCfourteen.
    26912739The benchmarks are run on an Ubuntu 16.04 workstation with 16 GB of RAM and a 6-core AMD FX-6300 CPU with 3.5 GHz maximum clock frequency.
     
    26932741\begin{figure}
    26942742\centering
    2695 \input{timing}
    2696 \caption{Benchmark Timing Results (smaller is better)}
     2743\resizebox{0.7\textwidth}{!}{\input{timing}}
     2744\caption{Benchmark timing results (smaller is better)}
    26972745\label{fig:eval}
     2746\vspace*{-10pt}
    26982747\end{figure}
    26992748
    27002749\begin{table}
     2750\vspace*{-10pt}
    27012751\caption{Properties of benchmark code}
    27022752\label{tab:eval}
    27032753\centering
     2754\vspace*{-4pt}
    27042755\newcommand{\CT}[1]{\multicolumn{1}{c}{#1}}
    2705 \begin{tabular}{rrrrr}
    2706                                                                         & \CT{C}        & \CT{\CFA}     & \CT{\CC}      & \CT{\CCV}             \\ \hline
    2707 maximum memory usage (MB)                       & 10,001        & 2,502         & 2,503         & 11,253                \\
     2756\begin{tabular}{lrrrr}
     2757                                                                        & \CT{C}        & \CT{\CFA}     & \CT{\CC}      & \CT{\CCV}             \\
     2758maximum memory usage (MB)                       & 10\,001       & 2\,502        & 2\,503        & 11\,253               \\
    27082759source code size (lines)                        & 201           & 191           & 125           & 294                   \\
    27092760redundant type annotations (lines)      & 27            & 0                     & 2                     & 16                    \\
    27102761binary size (KB)                                        & 14            & 257           & 14            & 37                    \\
    27112762\end{tabular}
     2763\vspace*{-16pt}
    27122764\end{table}
    27132765
    2714 The C and \CCV variants are generally the slowest with the largest memory footprint, because of their less-efficient memory layout and the pointer-indirection necessary to implement generic types;
     2766\enlargethispage{-10pt}
     2767The C and \CCV variants are generally the slowest with the largest memory footprint, due to their less-efficient memory layout and the pointer indirection necessary to implement generic types;
    27152768this inefficiency is exacerbated by the second level of generic types in the pair benchmarks.
    2716 By contrast, the \CFA and \CC variants run in roughly equivalent time for both the integer and pair because of equivalent storage layout, with the inlined libraries (\ie no separate compilation) and greater maturity of the \CC compiler contributing to its lead.
    2717 \CCV is slower than C largely due to the cost of runtime type-checking of down-casts (implemented with @dynamic_cast@);
     2769By contrast, the \CFA and \CC variants run in roughly equivalent time for both the integer and pair because of the equivalent storage layout, with the inlined libraries (\ie no separate compilation) and greater maturity of the \CC compiler contributing to its lead.
     2770\CCV is slower than C largely due to the cost of runtime type checking of downcasts (implemented with @dynamic_cast@).
    27182771The outlier for \CFA, pop @pair@, results from the complexity of the generated-C polymorphic code.
    27192772The gcc compiler is unable to optimize some dead code and condense nested calls;
     
    27212774Finally, the binary size for \CFA is larger because of static linking with the \CFA libraries.
    27222775
    2723 \CFA is also competitive in terms of source code size, measured as a proxy for programmer effort. The line counts in Table~\ref{tab:eval} include implementations of @pair@ and @stack@ types for all four languages for purposes of direct comparison, though it should be noted that \CFA and \CC have pre-written data structures in their standard libraries that programmers would generally use instead. Use of these standard library types has minimal impact on the performance benchmarks, but shrinks the \CFA and \CC benchmarks to 39 and 42 lines, respectively.
     2776\CFA is also competitive in terms of source code size, measured as a proxy for programmer effort. The line counts in Table~\ref{tab:eval} include implementations of @pair@ and @stack@ types for all four languages for purposes of direct comparison, although it should be noted that \CFA and \CC have prewritten data structures in their standard libraries that programmers would generally use instead. Use of these standard library types has minimal impact on the performance benchmarks, but shrinks the \CFA and \CC benchmarks to 39 and 42 lines, respectively.
    27242777The difference between the \CFA and \CC line counts is primarily declaration duplication to implement separate compilation; a header-only \CFA library would be similar in length to the \CC version.
    2725 On the other hand, C does not have a generic collections-library in its standard distribution, resulting in frequent reimplementation of such collection types by C programmers.
    2726 \CCV does not use the \CC standard template library by construction, and in fact includes the definition of @object@ and wrapper classes for @char@, @short@, and @int@ in its line count, which inflates this count somewhat, as an actual object-oriented language would include these in the standard library;
     2778On the other hand, C does not have a generic collections library in its standard distribution, resulting in frequent reimplementation of such collection types by C programmers.
     2779\CCV does not use the \CC standard template library by construction and, in fact, includes the definition of @object@ and wrapper classes for @char@, @short@, and @int@ in its line count, which inflates this count somewhat, as an actual object-oriented language would include these in the standard library;
    27272780with their omission, the \CCV line count is similar to C.
    27282781We justify the given line count by noting that many object-oriented languages do not allow implementing new interfaces on library types without subclassing or wrapper types, which may be similarly verbose.
    27292782
    2730 Line-count is a fairly rough measure of code complexity;
    2731 another important factor is how much type information the programmer must specify manually, especially where that information is not compiler-checked.
    2732 Such unchecked type information produces a heavier documentation burden and increased potential for runtime bugs, and is much less common in \CFA than C, with its manually specified function pointer arguments and format codes, or \CCV, with its extensive use of un-type-checked downcasts, \eg @object@ to @integer@ when popping a stack.
     2783Line count is a fairly rough measure of code complexity;
     2784another important factor is how much type information the programmer must specify manually, especially where that information is not compiler checked.
     2785Such unchecked type information produces a heavier documentation burden and increased potential for runtime bugs and is much less common in \CFA than C, with its manually specified function pointer arguments and format codes, or \CCV, with its extensive use of un-type-checked downcasts, \eg @object@ to @integer@ when popping a stack.
    27332786To quantify this manual typing, the ``redundant type annotations'' line in Table~\ref{tab:eval} counts the number of lines on which the type of a known variable is respecified, either as a format specifier, explicit downcast, type-specific function, or by name in a @sizeof@, struct literal, or @new@ expression.
    2734 The \CC benchmark uses two redundant type annotations to create a new stack nodes, while the C and \CCV benchmarks have several such annotations spread throughout their code.
      2787The \CC benchmark uses two redundant type annotations to create new stack nodes, whereas the C and \CCV benchmarks have several such annotations spread throughout their code.
    27352788The \CFA benchmark is able to eliminate all redundant type annotations through use of the polymorphic @alloc@ function discussed in Section~\ref{sec:libraries}.
    27362789
    2737 We conjecture these results scale across most generic data-types as the underlying polymorphism implement is constant.
    2738 
    2739 
      2790We conjecture that these results scale across most generic data types as the underlying polymorphism implementation is constant.
     2791
     2792
     2793\vspace*{-8pt}
    27402794\section{Related Work}
    27412795\label{s:RelatedWork}
     
    27532807\CC provides three disjoint polymorphic extensions to C: overloading, inheritance, and templates.
     27542808The overloading is restricted because resolution does not use the return type, inheritance requires learning object-oriented programming and coping with a restricted nominal-inheritance hierarchy, templates cannot be separately compiled, resulting in compilation/code bloat and poor error messages, and determining how these mechanisms interact and which to use is confusing.
    2755 In contrast, \CFA has a single facility for polymorphic code supporting type-safe separate-compilation of polymorphic functions and generic (opaque) types, which uniformly leverage the C procedural paradigm.
     2809In contrast, \CFA has a single facility for polymorphic code supporting type-safe separate compilation of polymorphic functions and generic (opaque) types, which uniformly leverage the C procedural paradigm.
    27562810The key mechanism to support separate compilation is \CFA's \emph{explicit} use of assumed type properties.
    2757 Until \CC concepts~\cite{C++Concepts} are standardized (anticipated for \CCtwenty), \CC provides no way to specify the requirements of a generic function beyond compilation errors during template expansion;
     2811Until \CC concepts~\cite{C++Concepts} are standardized (anticipated for \CCtwenty), \CC provides no way of specifying the requirements of a generic function beyond compilation errors during template expansion;
    27582812furthermore, \CC concepts are restricted to template polymorphism.
    27592813
    27602814Cyclone~\cite{Grossman06} also provides capabilities for polymorphic functions and existential types, similar to \CFA's @forall@ functions and generic types.
    2761 Cyclone existential types can include function pointers in a construct similar to a virtual function-table, but these pointers must be explicitly initialized at some point in the code, a tedious and potentially error-prone process.
     2815Cyclone existential types can include function pointers in a construct similar to a virtual function table, but these pointers must be explicitly initialized at some point in the code, which is a tedious and potentially error-prone process.
    27622816Furthermore, Cyclone's polymorphic functions and types are restricted to abstraction over types with the same layout and calling convention as @void *@, \ie only pointer types and @int@.
    27632817In \CFA terms, all Cyclone polymorphism must be dtype-static.
    27642818While the Cyclone design provides the efficiency benefits discussed in Section~\ref{sec:generic-apps} for dtype-static polymorphism, it is more restrictive than \CFA's general model.
    2765 Smith and Volpano~\cite{Smith98} present Polymorphic C, an ML dialect with polymorphic functions, C-like syntax, and pointer types; it lacks many of C's features, however, most notably structure types, and so is not a practical C replacement.
     2819Smith and Volpano~\cite{Smith98} present Polymorphic C, an ML dialect with polymorphic functions, C-like syntax, and pointer types;
     2820it lacks many of C's features, most notably structure types, and hence, is not a practical C replacement.
    27662821
    27672822Objective-C~\cite{obj-c-book} is an industrially successful extension to C.
    2768 However, Objective-C is a radical departure from C, using an object-oriented model with message-passing.
     2823However, Objective-C is a radical departure from C, using an object-oriented model with message passing.
    27692824Objective-C did not support type-checked generics until recently \cite{xcode7}, historically using less-efficient runtime checking of object types.
    2770 The GObject~\cite{GObject} framework also adds object-oriented programming with runtime type-checking and reference-counting garbage-collection to C;
    2771 these features are more intrusive additions than those provided by \CFA, in addition to the runtime overhead of reference-counting.
    2772 Vala~\cite{Vala} compiles to GObject-based C, adding the burden of learning a separate language syntax to the aforementioned demerits of GObject as a modernization path for existing C code-bases.
    2773 Java~\cite{Java8} included generic types in Java~5, which are type-checked at compilation and type-erased at runtime, similar to \CFA's.
    2774 However, in Java, each object carries its own table of method pointers, while \CFA passes the method pointers separately to maintain a C-compatible layout.
     2825The GObject~\cite{GObject} framework also adds object-oriented programming with runtime type-checking and reference-counting garbage collection to C;
     2826these features are more intrusive additions than those provided by \CFA, in addition to the runtime overhead of reference counting.
     2827Vala~\cite{Vala} compiles to GObject-based C, adding the burden of learning a separate language syntax to the aforementioned demerits of GObject as a modernization path for existing C code bases.
     2828Java~\cite{Java8} included generic types in Java~5, which are type checked at compilation and type erased at runtime, similar to \CFA's.
     2829However, in Java, each object carries its own table of method pointers, whereas \CFA passes the method pointers separately to maintain a C-compatible layout.
    27752830Java is also a garbage-collected, object-oriented language, with the associated resource usage and C-interoperability burdens.
    27762831
    2777 D~\cite{D}, Go, and Rust~\cite{Rust} are modern, compiled languages with abstraction features similar to \CFA traits, \emph{interfaces} in D and Go and \emph{traits} in Rust.
     2832D~\cite{D}, Go, and Rust~\cite{Rust} are modern compiled languages with abstraction features similar to \CFA traits, \emph{interfaces} in D and Go, and \emph{traits} in Rust.
    27782833However, each language represents a significant departure from C in terms of language model, and none has the same level of compatibility with C as \CFA.
    27792834D and Go are garbage-collected languages, imposing the associated runtime overhead.
    27802835The necessity of accounting for data transfer between managed runtimes and the unmanaged C runtime complicates foreign-function interfaces to C.
    27812836Furthermore, while generic types and functions are available in Go, they are limited to a small fixed set provided by the compiler, with no language facility to define more.
    2782 D restricts garbage collection to its own heap by default, while Rust is not garbage-collected, and thus has a lighter-weight runtime more interoperable with C.
     2837D restricts garbage collection to its own heap by default, whereas Rust is not garbage collected and, thus, has a lighter-weight runtime more interoperable with C.
    27832838Rust also possesses much more powerful abstraction capabilities for writing generic code than Go.
    2784 On the other hand, Rust's borrow-checker provides strong safety guarantees but is complex and difficult to learn and imposes a distinctly idiomatic programming style.
     2839On the other hand, Rust's borrow checker provides strong safety guarantees but is complex and difficult to learn and imposes a distinctly idiomatic programming style.
    27852840\CFA, with its more modest safety features, allows direct ports of C code while maintaining the idiomatic style of the original source.
    27862841
    27872842
    2788 \subsection{Tuples/Variadics}
    2789 
     2843\vspace*{-18pt}
     2844\subsection{Tuples/variadics}
     2845
     2846\vspace*{-5pt}
    27902847Many programming languages have some form of tuple construct and/or variadic functions, \eg SETL, C, KW-C, \CC, D, Go, Java, ML, and Scala.
    27912848SETL~\cite{SETL} is a high-level mathematical programming language, with tuples being one of the primary data types.
    27922849Tuples in SETL allow subscripting, dynamic expansion, and multiple assignment.
    2793 C provides variadic functions through @va_list@ objects, but the programmer is responsible for managing the number of arguments and their types, so the mechanism is type unsafe.
     2850C provides variadic functions through @va_list@ objects, but the programmer is responsible for managing the number of arguments and their types;
     2851thus, the mechanism is type unsafe.
    27942852KW-C~\cite{Buhr94a}, a predecessor of \CFA, introduced tuples to C as an extension of the C syntax, taking much of its inspiration from SETL.
    27952853The main contributions of that work were adding MRVF, tuple mass and multiple assignment, and record-member access.
    2796 \CCeleven introduced @std::tuple@ as a library variadic template structure.
     2854\CCeleven introduced @std::tuple@ as a library variadic-template structure.
    27972855Tuples are a generalization of @std::pair@, in that they allow for arbitrary length, fixed-size aggregation of heterogeneous values.
    27982856Operations include @std::get<N>@ to extract values, @std::tie@ to create a tuple of references used for assignment, and lexicographic comparisons.
    2799 \CCseventeen proposes \emph{structured bindings}~\cite{Sutter15} to eliminate pre-declaring variables and use of @std::tie@ for binding the results.
    2800 This extension requires the use of @auto@ to infer the types of the new variables, so complicated expressions with a non-obvious type must be documented with some other mechanism.
     2857\CCseventeen proposes \emph{structured bindings}~\cite{Sutter15} to eliminate predeclaring variables and the use of @std::tie@ for binding the results.
     2858This extension requires the use of @auto@ to infer the types of the new variables; hence, complicated expressions with a nonobvious type must be documented with some other mechanism.
     28012859Furthermore, structured bindings are not a full replacement for @std::tie@, as they always declare new variables.
    28022860Like \CC, D provides tuples through a library variadic-template structure.
    28032861Go does not have tuples but supports MRVF.
    2804 Java's variadic functions appear similar to C's but are type-safe using homogeneous arrays, which are less useful than \CFA's heterogeneously-typed variadic functions.
     2862Java's variadic functions appear similar to C's but are type safe using homogeneous arrays, which are less useful than \CFA's heterogeneously typed variadic functions.
    28052863Tuples are a fundamental abstraction in most functional programming languages, such as Standard ML~\cite{sml}, Haskell, and Scala~\cite{Scala}, which decompose tuples using pattern matching.
    28062864
    28072865
     2866\vspace*{-18pt}
     28082867\subsection{C extensions}
    28092868
    2810 \CC is the best known C-based language, and is similar to \CFA in that both are extensions to C with source and runtime backwards compatibility.
    2811 Specific difference between \CFA and \CC have been identified in prior sections, with a final observation that \CFA has equal or fewer tokens to express the same notion in many cases.
     2869\vspace*{-5pt}
     2870\CC is the best known C-based language and is similar to \CFA in that both are extensions to C with source and runtime backward compatibility.
     2871Specific differences between \CFA and \CC have been identified in prior sections, with a final observation that \CFA has equal or fewer tokens to express the same notion in many cases.
    28122872The key difference in design philosophies is that \CFA is easier for C programmers to understand by maintaining a procedural paradigm and avoiding complex interactions among extensions.
    28132873\CC, on the other hand, has multiple overlapping features (such as the three forms of polymorphism), many of which have complex interactions with its object-oriented design.
    2814 As a result, \CC has a steep learning curve for even experienced C programmers, especially when attempting to maintain performance equivalent to C legacy-code.
    2815 
    2816 There are several other C extension-languages with less usage and even more dramatic changes than \CC.
    2817 Objective-C and Cyclone are two other extensions to C with different design goals than \CFA, as discussed above.
     2874As a result, \CC has a steep learning curve for even experienced C programmers, especially when attempting to maintain performance equivalent to C legacy code.
     2875
     2876There are several other C extension languages with less usage and even more dramatic changes than \CC.
     2877\mbox{Objective-C} and Cyclone are two other extensions to C with different design goals than \CFA, as discussed above.
    28182878Other languages extend C with more focused features.
    28192879$\mu$\CC~\cite{uC++book}, CUDA~\cite{Nickolls08}, ispc~\cite{Pharr12}, and Sierra~\cite{Leissa14} add concurrent or data-parallel primitives to C or \CC;
    2820 data-parallel features have not yet been added to \CFA, but are easily incorporated within its design, while concurrency primitives similar to those in $\mu$\CC have already been added~\cite{Delisle18}.
    2821 Finally, CCured~\cite{Necula02} and Ironclad \CC~\cite{DeLozier13} attempt to provide a more memory-safe C by annotating pointer types with garbage collection information; type-checked polymorphism in \CFA covers several of C's memory-safety issues, but more aggressive approaches such as annotating all pointer types with their nullability or requiring runtime garbage collection are contradictory to \CFA's backwards compatibility goals.
     2880data-parallel features have not yet been added to \CFA, but are easily incorporated within its design, whereas concurrency primitives similar to those in $\mu$\CC have already been added~\cite{Delisle18}.
2881Finally, CCured~\cite{Necula02} and Ironclad \CC~\cite{DeLozier13} attempt to provide a more memory-safe C by annotating pointer types with garbage collection information; type-checked polymorphism in \CFA covers several of C's memory-safety issues, but more aggressive approaches, such as annotating all pointer types with their nullability or requiring runtime garbage collection, contradict \CFA's backward-compatibility goals.
    28222882
    28232883
    28242884\section{Conclusion and Future Work}
    28252885
    2826 The goal of \CFA is to provide an evolutionary pathway for large C development-environments to be more productive and safer, while respecting the talent and skill of C programmers.
    2827 While other programming languages purport to be a better C, they are in fact new and interesting languages in their own right, but not C extensions.
    2828 The purpose of this paper is to introduce \CFA, and showcase language features that illustrate the \CFA type-system and approaches taken to achieve the goal of evolutionary C extension.
    2829 The contributions are a powerful type-system using parametric polymorphism and overloading, generic types, tuples, advanced control structures, and extended declarations, which all have complex interactions.
     2886The goal of \CFA is to provide an evolutionary pathway for large C development environments to be more productive and safer, while respecting the talent and skill of C programmers.
     2887While other programming languages purport to be a better C, they are, in fact, new and interesting languages in their own right, but not C extensions.
2888The purpose of this paper is to introduce \CFA and showcase language features that illustrate the \CFA type system and approaches taken to achieve the goal of evolutionary C extension.
     2889The contributions are a powerful type system using parametric polymorphism and overloading, generic types, tuples, advanced control structures, and extended declarations, which all have complex interactions.
    28302890The work is a challenging design, engineering, and implementation exercise.
28312891On the surface, the project may appear to be a rehash of similar mechanisms in \CC.
28322892However, every \CFA feature is different from its \CC counterpart, often with extended functionality, better integration with C and its programmers, and always supporting separate compilation.
    2833 All of these new features are being used by the \CFA development-team to build the \CFA runtime-system.
     2893All of these new features are being used by the \CFA development team to build the \CFA runtime system.
    28342894Finally, we demonstrate that \CFA performance for some idiomatic cases is better than C and close to \CC, showing the design is practically applicable.
    28352895
    2836 While all examples in the paper compile and run, a public beta-release of \CFA will take 6--8 months to reduce compilation time, provide better debugging, and add a few more libraries.
    2837 There is also new work on a number of \CFA features, including arrays with size, runtime type-information, virtual functions, user-defined conversions, and modules.
    2838 While \CFA polymorphic functions use dynamic virtual-dispatch with low runtime overhead (see Section~\ref{sec:eval}), it is not as low as \CC template-inlining.
    2839 Hence it may be beneficial to provide a mechanism for performance-sensitive code.
    2840 Two promising approaches are an @inline@ annotation at polymorphic function call sites to create a template-specialization of the function (provided the code is visible) or placing an @inline@ annotation on polymorphic function-definitions to instantiate a specialized version for some set of types (\CC template specialization).
    2841 These approaches are not mutually exclusive and allow performance optimizations to be applied only when necessary, without suffering global code-bloat.
    2842 In general, we believe separate compilation, producing smaller code, works well with loaded hardware-caches, which may offset the benefit of larger inlined-code.
     2896While all examples in the paper compile and run, there are ongoing efforts to reduce compilation time, provide better debugging, and add more libraries;
     2897when this work is complete in early 2019, a public beta release will be available at \url{https://github.com/cforall/cforall}.
     2898There is also new work on a number of \CFA features, including arrays with size, runtime type information, virtual functions, user-defined conversions, and modules.
     2899While \CFA polymorphic functions use dynamic virtual dispatch with low runtime overhead (see Section~\ref{sec:eval}), it is not as low as \CC template inlining.
     2900Hence, it may be beneficial to provide a mechanism for performance-sensitive code.
     2901Two promising approaches are an @inline@ annotation at polymorphic function call sites to create a template specialization of the function (provided the code is visible) or placing an @inline@ annotation on polymorphic function definitions to instantiate a specialized version for some set of types (\CC template specialization).
     2902 These approaches are not mutually exclusive and allow performance optimizations to be applied only when necessary, without suffering global code bloat.
     2903In general, we believe separate compilation, producing smaller code, works well with loaded hardware caches, which may offset the benefit of larger inlined code.
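To illustrate this tradeoff concretely, the following C++ sketch (illustrative names, not \CFA's actual mechanism) contrasts the two compilation strategies: a single compiled body receiving the operation as a function pointer corresponds to dynamic dispatch with small code, whereas a template instantiates a specialized, inlinable body per operation.
\begin{lstlisting}[language=C++]
#include <cstdio>
// one shared body; the operation arrives as a function pointer (indirect call, small code)
int fold( const int * xs, int n, int acc, int (*op)( int, int ) ) {
	for ( int i = 0; i < n; i += 1 ) acc = op( acc, xs[i] );
	return acc;
}
// one specialized body per operation type; the call can be inlined (larger code)
template<typename Op> int fold( const int * xs, int n, int acc, Op op ) {
	for ( int i = 0; i < n; i += 1 ) acc = op( acc, xs[i] );
	return acc;
}
int add( int x, int y ) { return x + y; }
int main() {
	int xs[] = { 1, 2, 3, 4 };
	printf( "%d\n", fold( xs, 4, 0, add ) );	// non-template overload: indirect call
	printf( "%d\n", fold( xs, 4, 0, []( int x, int y ) { return x + y; } ) );	// template: inlinable
}
\end{lstlisting}
A per-call-site or per-definition @inline@ annotation would, in effect, let a \CFA programmer opt into the second strategy only where performance matters.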
    28432904
    28442905
    28452906\section{Acknowledgments}
    28462907
    2847 The authors would like to recognize the design assistance of Glen Ditchfield, Richard Bilson, Thierry Delisle, Andrew Beach and Brice Dobry on the features described in this paper, and thank Magnus Madsen for feedback on the writing.
    2848 Funding for this project has been provided by Huawei Ltd.\ (\url{http://www.huawei.com}), and Aaron Moss and Peter Buhr are partially funded by the Natural Sciences and Engineering Research Council of Canada.
     2908The authors would like to recognize the design assistance of Glen Ditchfield, Richard Bilson, Thierry Delisle, Andrew Beach, and Brice Dobry on the features described in this paper and thank Magnus Madsen for feedback on the writing.
     2909Funding for this project was provided by Huawei Ltd (\url{http://www.huawei.com}), and Aaron Moss and Peter Buhr were partially funded by the Natural Sciences and Engineering Research Council of Canada.
    28492910
    28502911{%
    28512912\fontsize{9bp}{12bp}\selectfont%
     2913\vspace*{-3pt}
    28522914\bibliography{pl}
    28532915}%
     
    29282990
    29292991
     2992\enlargethispage{1000pt}
    29302993\subsection{\CFA}
    29312994\label{s:CforallStack}
     
    29943057
    29953058
     3059\newpage
    29963060\subsection{\CC}
    29973061