\documentclass[twoside,11pt]{article} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % Latex packages used in the document. \usepackage{fullpage,times,comment} \usepackage{epic,eepic} \usepackage{upquote} % switch curled `'" to straight \usepackage{calc} \usepackage{varioref} % extended references \usepackage[labelformat=simple,aboveskip=0pt,farskip=0pt]{subfig} \renewcommand{\thesubfigure}{\alph{subfigure})} \usepackage[flushmargin]{footmisc} % support label/reference in footnote \usepackage{latexsym} % \Box glyph \usepackage{mathptmx} % better math font with "times" \usepackage[toc]{appendix} % article does not have appendix \usepackage[usenames]{color} \input{common} % common CFA document macros \usepackage[dvips,plainpages=false,pdfpagelabels,pdfpagemode=UseNone,colorlinks=true,linkcolor=blue,citecolor=blue,urlcolor=blue,pagebackref=true,breaklinks=true]{hyperref} \usepackage{breakurl} \urlstyle{sf} % reduce spacing \setlist[itemize]{topsep=5pt,parsep=0pt}% global \setlist[enumerate]{topsep=5pt,parsep=0pt}% global \usepackage[pagewise]{lineno} \renewcommand{\linenumberfont}{\scriptsize\sffamily} % Default underscore is too low and wide. Cannot use lstlisting "literate" as replacing underscore % removes it as a variable-name character so keywords in variables are highlighted. MUST APPEAR % AFTER HYPERREF. 
\renewcommand{\textunderscore}{\leavevmode\makebox[1.2ex][c]{\rule{1ex}{0.075ex}}} \newcommand{\NOTE}{\textbf{NOTE}} \newcommand{\TODO}[1]{{\color{Purple}#1}} \setlength{\topmargin}{-0.45in} % move running title into header \setlength{\headsep}{0.25in} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \CFAStyle % CFA code-style for all languages \lstset{ language=C++,moredelim=**[is][\color{red}]{@}{@} % make C++ the default language }% lstset \lstnewenvironment{C++}[1][] % use C++ style {\lstset{language=C++,moredelim=**[is][\color{red}]{@}{@}}\lstset{#1}}{} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \setcounter{secnumdepth}{3} % number subsubsections \setcounter{tocdepth}{3} % subsubsections in table of contents \makeindex %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \title{\Huge \lstinline|cfa-cc| Developer's Reference }% title \author{\LARGE Fangren Yu }% author \date{ September, 2020 }% date %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{document} \pagestyle{headings} % changed after setting pagestyle \renewcommand{\sectionmark}[1]{\markboth{\thesection\quad #1}{\thesection\quad #1}} \renewcommand{\subsectionmark}[1]{\markboth{\thesubsection\quad #1}{\thesubsection\quad #1}} \pagenumbering{roman} %\linenumbers % comment out to turn off line numbering \maketitle \clearpage \pdfbookmark[1]{Contents}{section} \tableofcontents \clearpage \thispagestyle{plain} \pagenumbering{arabic} \section{Overview} @cfa-cc@ is the reference compiler for the \CFA programming language, which is a non-object-oriented extension to C. \CFA attempts to introduce productive modern programming language features to C while maintaining as much backward-compatibility as possible, so that most existing C programs can seamlessly work with \CFA. Since the \CFA project dates back to the early 2000s, and only restarted in the past few years, there is a significant amount of legacy code in the current compiler codebase with little documentation. 
The lack of documentation makes it difficult to develop new features from the current implementation and to diagnose problems.
Currently, the \CFA team also faces poor compiler performance.
For the development of a new programming language, writing standard libraries is an important component.
The slow compiler causes builds of the library files to take tens of minutes, making iterative development and testing almost impossible.
There is an ongoing effort to rewrite the core data-structure of the compiler to overcome the performance issue, but many bugs have appeared during this work, and the lack of documentation is hampering debugging.
This developer's reference manual begins the documentation and should be continuously im\-proved until it eventually covers the entire compiler codebase.
For now, the focus is mainly on the parts being rewritten, and also on the primary performance bottleneck, namely the resolution algorithm.
Its aim is to provide new project developers with guidance in understanding the codebase, and to clarify the purpose and behaviour of certain functions that are not mentioned in the previous \CFA research papers~\cite{Bilson03,Ditchfield92,Moss19}.

\section{Compiler Framework}
\CFA source code is first transformed into an abstract syntax tree (AST) by the parser before being analyzed by the compiler.

\subsection{AST Representation}
There are 4 major categories of AST nodes used by the compiler, along with some derived structures.

\subsubsection{Declaration Nodes}
A declaration node represents one of:
\begin{itemize}
\item
type declaration: @struct@, @union@, @typedef@ or type parameter (see \VRef[Appendix]{s:KindsTypeParameters})
\item
variable declaration
\item
function declaration
\end{itemize}
Declarations are introduced by standard C declarations, with the usual scoping rules.
In addition, declarations can also be qualified by the \lstinline[language=CFA]@forall@ clause (which is the origin of \CFA's name):
\begin{cfa}
forall ( <$\emph{TypeParameterList}$> | <$\emph{AssertionList}$> ) $\emph{declaration}$
\end{cfa}
Type parameters in \CFA are similar to \CC template type parameters.
The \CFA declaration
\begin{cfa}
forall (dtype T) ...
\end{cfa}
behaves similarly to the \CC template declaration
\begin{C++}
template< typename T > ...
\end{C++}

Assertions are a distinctive feature of \CFA, similar to \emph{interfaces} in D and Go, and \emph{traits} in Rust.
Unlike \CC templates, where arbitrary functions and operators can be used in a template definition, operations on parameterized types in a \CFA parametric function must be declared in assertions.
Consider the following \CC template:
\begin{C++}
template< typename T > T foo( T t ) { return t + t * t; }
\end{C++}
where there are no explicit requirements on the type @T@.
Therefore, the \CC compiler must deduce what operators are required during textual (macro) expansion of the template at each usage.
As a result, templates cannot be separately compiled.
\CFA assertions specify restrictions on type parameters:
\begin{cfa}
forall( dtype T | @{ T ?+?( T, T ); T ?*?( T, T ) }@ )
int foo ( T t ) { return t + t * t; }
\end{cfa}
Assertions are written using the usual \CFA function declaration syntax.
Only types with operators ``@+@'' and ``@*@'' work with this function, and the function prototype is sufficient to allow separate compilation.

Type parameters and assertions are used in the following compiler data-structures.

\subsubsection{Type Nodes}
Type nodes represent the type of an object or expression.
Named types reference the corresponding type declarations.
The type of a function is its function pointer type (same as standard C).
With the addition of type parameters, named types may contain a list of parameter values (actual parameter types).
\subsubsection{Statement Nodes}
Statement nodes represent the executable statements in the program, including basic expression statements, control-flow statements and blocks.
Local declarations (within a block statement) are represented as declaration statements.

\subsubsection{Expression Nodes}
Some expressions are represented differently before and after the resolution stage:
\begin{itemize}
\item
Name expressions: @NameExpr@ pre-resolution, @VariableExpr@ post-resolution
\item
Member expressions: @UntypedMemberExpr@ pre-resolution, @MemberExpr@ post-resolution
\item
\begin{sloppypar}
Function call expressions (including overloadable operators): @UntypedExpr@ pre-resolution, @ApplicationExpr@ post-resolution
\end{sloppypar}
\end{itemize}
The pre-resolution representation contains only the symbols.
Post-resolution links them to the actual variable and function declarations.

\subsection{Compilation Passes}
Compilation steps are implemented as passes, which follow a general structural-recursion pattern on the syntax tree.
The basic workflow of compilation passes follows preorder and postorder traversal on the AST data-structure, implemented with the visitor pattern, and can be loosely described with the following pseudocode:
\begin{C++}
Pass::visit( node_t node ) {
	previsit( node );
	if ( visit_children )
		for each child of node:
			child.accept( this );
	postvisit( node );
}
\end{C++}
Operations in @previsit@ happen in preorder (top to bottom) and operations in @postvisit@ happen in postorder (bottom to top).
The precise order of recursive operations on child nodes can be found in @Common/PassVisitor.impl.h@ (old) and @AST/Pass.impl.hpp@ (new).
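The traversal scheme above can be modelled with a small stand-alone sketch; the @Node@ and @PrintPass@ classes here are hypothetical simplifications, not the compiler's actual types:
\begin{C++}
// Stand-alone sketch only: Node and PrintPass are hypothetical,
// not cfa-cc classes.
#include <string>
#include <vector>

struct Node {
	std::string name;
	std::vector<Node *> children;
};

struct PrintPass {
	bool visit_children = true;
	std::string trace;                                      // records traversal order
	void previsit( Node * n ) { trace += "+" + n->name; }   // preorder operation
	void postvisit( Node * n ) { trace += "-" + n->name; }  // postorder operation
	void visit( Node * n ) {
		previsit( n );
		if ( visit_children )
			for ( Node * c : n->children ) visit( c );
		postvisit( n );
	}
};
\end{C++}
Visiting a root node with children A and B produces the trace @+root+A-A+B-B-root@: each node is entered before its children and exited after them.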
Implementations of compilation passes follow certain conventions:
\begin{itemize}
\item
Passes \textbf{should not} directly override the visit method (Non-virtual Interface principle); if a pass desires different recursion behaviour, it should set @visit_children@ to false and perform recursive calls manually within the @previsit@ or @postvisit@ procedures.
To enable this option, inherit from the @WithShortCircuiting@ mixin.
\item
@previsit@ may mutate the node but \textbf{must not} change the node type or return @nullptr@.
\item
@postvisit@ may mutate the node, reconstruct it to a different node type, or delete it by returning @nullptr@.
\item
If the @previsit@ or @postvisit@ method is not defined for a node type, the step is skipped.
If the return type is declared as @void@, the original node is returned by default.
These behaviours are controlled by template specialization rules; see @Common/PassVisitor.proto.h@ (old) and @AST/Pass.proto.hpp@ (new) for details.
\end{itemize}
Other useful mixin classes for compilation passes include:
\begin{itemize}
\item
@WithGuards@ allows saving and restoring variable values automatically upon entering/exiting the current node.
\item
@WithVisitorRef@ creates a wrapped entity for the current pass (the actual argument passed to recursive calls internally) for explicit recursion, usually used together with @WithShortCircuiting@.
\item
@WithSymbolTable@ gives a managed symbol table with built-in scoping-rule handling (\eg on entering and exiting a block statement)
\end{itemize}
\NOTE: If a pass extends the functionality of another existing pass, due to \CC overloading resolution rules, it \textbf{must} explicitly introduce the inherited @previsit@ and @postvisit@ procedures to its own scope, otherwise they are not picked up by template resolution:
\begin{C++}
class Pass2 : public Pass1 {
	@using Pass1::previsit;@
	@using Pass1::postvisit;@
	// new procedures
};
\end{C++}

\subsection{Data Structure Change (new-ast)}
It has been observed that excessive copying of syntax tree structures accounts for a majority of computation cost and significantly slows down the compiler.
In the previous implementation of the syntax tree, every internal node has a unique parent; therefore all copies are required to duplicate the entire subtree.
A new, experimental re-implementation of the syntax tree (source under directory @AST/@, hereafter referred to as ``new-ast'') attempts to overcome this issue with a functional approach that allows sharing of common sub-structures and only makes copies when necessary.

The core of new-ast is a customized implementation of smart pointers, similar to @std::shared_ptr@ and @std::weak_ptr@ in the \CC standard library.
Reference counting is used to detect sharing and to allow certain optimizations.
For a purely functional (immutable) data-structure, all mutations are modelled by shallow copies along the path of mutation.
With the reference-counting optimization, unique nodes are allowed to be mutated in place.
This, however, may introduce some complications and bugs; a few issues are discussed near the end of this section.

\subsubsection{Source: \lstinline{AST/Node.hpp}}
Class @ast::Node@ is the base class of all new-ast node classes, which implements the reference-counting mechanism.
Two different counters are recorded: the ``strong'' reference count is the number of nodes semantically owning it; the ``weak'' reference count is the number of nodes holding a mere reference that only need to observe changes.
Class @ast::ptr_base@ is the smart pointer implementation and also takes care of resource management.
Direct access through the smart pointer is read-only.
A mutable access should be obtained by calling @shallowCopy@ or @mutate@ as described below.

Currently, the weak pointers are only used to reference declaration nodes from a named type, or a variable expression.
Since declaration nodes are intended to denote unique entities in the program, weak pointers always point to unique (unshared) nodes.
This property may change in the future, and weak references to shared nodes may introduce some problems; see the @mutate@ function below.

All node classes should always use smart pointers in structure definitions rather than raw pointers.

Function
\begin{C++}
void ast::Node::increment(ref_type ref)
\end{C++}
increments this node's strong or weak reference count.

Function
\begin{C++}
void ast::Node::decrement(ref_type ref, bool do_delete = true)
\end{C++}
decrements this node's strong or weak reference count.
If the strong reference count reaches zero, the node is deleted.
\NOTE: Setting @do_delete@ to false may result in a detached node.
Subsequent code should manually delete the node or assign it to a strong pointer to prevent a memory leak.

Reference counting functions are internally called by @ast::ptr_base@.

Function
\begin{C++}
template< typename node_t >
node_t * shallowCopy(const node_t * node)
\end{C++}
returns a mutable, shallow copy of node: all child pointers point to the same child nodes.

Function
\begin{C++}
template< typename node_t >
node_t * mutate(const node_t * node)
\end{C++}
returns a mutable pointer to the same node, if the node is unique (strong reference count is 1); otherwise, it returns @shallowCopy(node)@.
It is an error to mutate a shared node that is weak-referenced.
Currently this does not happen.
A problem may appear once weak pointers to shared nodes (\eg expression nodes) are used; special care is needed.

\NOTE: This naive uniqueness check may not be sufficient in some cases.
A discussion of the issue is presented at the end of this section.

Functions
\begin{C++}
template< typename node_t, typename parent_t, typename field_t, typename assn_t >
const node_t * mutate_field(const node_t * node, field_t parent_t::* field, assn_t && val)
\end{C++}
\begin{C++}
template< typename node_t, typename parent_t, typename coll_t, typename ind_t, typename field_t >
const node_t * mutate_field_index(const node_t * node, coll_t parent_t::* field, ind_t i, field_t && val)
\end{C++}
are helpers for mutating a field on a node through a pointer-to-member (creating a shallow copy when necessary).

\subsubsection{Issue: Undetected Sharing}
The @mutate@ behaviour described above has a problem: deeper shared nodes may be mistakenly considered as unique.
\VRef[Figure]{f:DeepNodeSharing} shows how the problem could arise:
\begin{figure}
\centering
\input{DeepNodeSharing}
\caption{Deep sharing of nodes}
\label{f:DeepNodeSharing}
\end{figure}
Given the tree rooted at P1, which is logically the chain P1-A-B, and P2 is irrelevant, assume @mutate(B)@ is called.
The algorithm considers B as unique since it is only directly owned by A.
However, the other tree P2-A-B indirectly shares the node B and is therefore wrongly mutated.
To partly address this problem, if the mutation is called higher up the tree, a chain mutation helper can be used.

\subsubsection{Source: \lstinline{AST/Chain.hpp}}
Function
\begin{C++}
template< typename node_t, Node::ref_type ref_t >
auto chain_mutate(ptr_base< node_t, ref_t > & base)
\end{C++}
returns a chain mutator handle that takes pointer-to-member to go down the tree, while creating shallow copies as necessary; see @struct _chain_mutator@ in the source code for details.
For example, in the above diagram, if mutation of B is wanted while at P1, the call using @chain_mutate@ looks like the following:
\begin{C++}
chain_mutate(P1.a)(&A.b) = new_value_of_b;
\end{C++}
\NOTE: If some node in the chain mutate is shared (and therefore shallow copied), every node further down is also copied, thus correctly executing the functional mutation algorithm.
This example code creates copies of both A and B and performs the mutation on the new nodes, so that the other tree P2-A-B is untouched.

However, if a pass traverses down to node B and performs a mutation, for example, in @postvisit(B)@, information on sharing higher up is lost.
Since the new-ast structure is only in experimental use with the resolver algorithm, which mostly rebuilds the tree bottom-up, this issue does not currently arise.
It must be addressed in the future when other compilation passes, many of which contain procedural mutations, are migrated to new-ast, where it might cause accidental mutations to other logically independent trees (\eg a common sub-expression) and become a bug.

\section{Compiler Algorithm Documentation}
This compiler algorithm documentation covers most of the resolver, the data structures used in variable and expression resolution, and a few directly related passes.
Later passes involving code generation are not included yet; documentation for those will be done later.

\subsection{Symbol Table}
\NOTE: For historical reasons, the symbol-table data-structure is called @indexer@ in the old implementation.
Hereafter, the name is changed to @SymbolTable@.

The symbol table stores a mapping from names to declarations, implements a similar name-space separation rule, and provides the same scoping rules as standard C.\footnote{ISO/IEC 9899:1999, Sections 6.2.1 and 6.2.3.}
The difference in the name-space rule is that @typedef@ aliases are no longer considered ordinary identifiers.
In addition to C tag-types (@struct@, @union@, @enum@), \CFA introduces another tag type, @trait@, which is a named collection of assertions.

\subsubsection{Source: \lstinline{AST/SymbolTable.hpp}}
Function
\begin{C++}
SymbolTable::addId(const DeclWithType * decl)
\end{C++}
adds an identifier declaration to the symbol table and provides name mangling, since \CFA allows overloading of variables and functions.
The mangling scheme is closely based on the Itanium \CC ABI,\footnote{\url{https://itanium-cxx-abi.github.io/cxx-abi/abi.html}, Section 5.1} with adaptations to \CFA-specific features, mainly assertions and variables overloaded by type.
Naming conflicts are handled by mangled names; lookup by name returns a list of declarations with the same identifier name.

Functions
\begin{C++}
SymbolTable::addStruct(const StructDecl * decl)
SymbolTable::addUnion(const UnionDecl * decl)
SymbolTable::addEnum(const EnumDecl * decl)
SymbolTable::addTrait(const TraitDecl * decl)
\end{C++}
add a tag-type declaration to the symbol table.

Function
\begin{C++}
SymbolTable::addType(const NamedTypeDecl * decl)
\end{C++}
adds a @typedef@ alias to the symbol table.

\textbf{C Incompatibility Note}: Since \CFA allows using @struct@, @union@ and @enum@ type-names without a prefix keyword, as in \CC, @typedef@ names and tag-type names cannot be disambiguated by syntax rules.
Currently, the compiler puts them in the same category and disallows collisions.
The following program is valid C but invalid \CFA (and \CC):
\begin{C++}
struct A {};
typedef int A;	// gcc: ok, cfa: Cannot redefine typedef A
struct A sa;	// C disambiguates via struct prefix
A ia;
\end{C++}
In practice, such usage is extremely rare; hence, this change (as in \CC) has minimal impact on existing C programs.
The declaration
\begin{C++}
struct A {};
typedef struct A A;	// A is an alias for struct A
A a;
struct A b;
\end{C++}
is not an error because the alias name is identical to the original.
Finally, the following program is allowed in \CFA:
\begin{C++}
typedef int A;
void A();	// name mangled
// gcc: A redeclared as different kind of symbol, cfa: ok
\end{C++}
because the function name is mangled.

\subsection{Type Environment and Unification}
The following core ideas underlie the parametric type-resolution algorithm.
A type environment organizes type parameters into \textbf{equivalence classes} and maps them to actual types.
Unification is the algorithm that takes two (possibly parametric) types and parameter mappings, and attempts to produce a common type by matching information in the type environments.

The unification algorithm is recursive in nature and runs in two different modes internally:
\begin{itemize}
\item
Exact unification mode requires equivalent parameters to match perfectly.
\item
Inexact unification mode allows equivalent parameters to be converted to a common type.
\end{itemize}
For a pair of matching parameters (actually, their equivalence classes), if either side is open (not yet bound to a concrete type), they are combined.

Within the inexact mode, types are allowed to differ on their cv-qualifiers (\eg @const@, @volatile@, \etc); additionally, if a type appears neither in a parameter list nor as the base type of a pointer, it may also be widened (\ie safely converted).
As \CFA currently does not implement subclassing as in object-oriented languages, widening conversions occur only on primitive types, \eg conversion from @int@ to @long int@.

The need for two unification modes comes from the fact that parametric types are considered compatible only if all parameters are exactly the same (not just compatible).
Pointer types also behave similarly; in fact, they may be viewed as a primitive kind of parametric type.
@int *@ and @long *@ are different types, just like @vector(int)@ and @vector(long)@ are, for the parametric types @*(T)@ / @vector(T)@, respectively.
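The interaction of the two modes can be illustrated with a small stand-alone sketch; the type representation below is a hypothetical, greatly simplified model (the compiler's actual implementation lives in @ResolvExpr/Unify.cc@), where a type is a base name plus a pointer depth, and widening is permitted only at the top level, never under a pointer:
\begin{C++}
// Hypothetical toy model, not the compiler's data structures.
#include <string>

struct Ty {
	std::string base;	// base type name, e.g. "int"
	int ptrs;			// pointer depth, e.g. 1 for int *
};

// widening (safe conversion) on primitive types only, e.g. int -> long
static bool widens( const std::string & from, const std::string & to ) {
	return from == "int" && to == "long";
}

// Inexact unification at the top level; anything under a pointer must
// match exactly, mirroring the rule that int * and long * are distinct.
static bool unifyInexact( const Ty & a, const Ty & b ) {
	if ( a.ptrs != b.ptrs ) return false;
	if ( a.ptrs > 0 ) return a.base == b.base;	// exact mode under pointers
	return a.base == b.base
		|| widens( a.base, b.base ) || widens( b.base, a.base );
}
\end{C++}
Under this model, @int@ unifies with @long@ through widening, but @int *@ does not unify with @long *@.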
The resolver uses the following @public@ functions:\footnote{
Actual code also tracks assertions on type parameters; those extra arguments are omitted here for conciseness.}

\subsubsection{Source: \lstinline{ResolvExpr/Unify.cc}}
Function
\begin{C++}
bool unify(const Type * type1, const Type * type2, TypeEnvironment & env,
		OpenVarSet & openVars, const SymbolTable & symtab, Type *& commonType)
\end{C++}
returns a boolean indicating whether unification succeeds after attempting to unify @type1@ and @type2@ within the current type environment.
If unification succeeds, @env@ is modified by combining the equivalence classes of matching parameters in @type1@ and @type2@, and their common type is written to @commonType@.
If unification fails, nothing changes.

Functions
\begin{C++}
bool typesCompatible(const Type * type1, const Type * type2,
		const SymbolTable & symtab, const TypeEnvironment & env)
bool typesCompatibleIgnoreQualifiers(const Type * type1, const Type * type2,
		const SymbolTable & symtab, const TypeEnvironment & env)
\end{C++}
return a boolean indicating whether types @type1@ and @type2@ can possibly be the same type.
The second version ignores the outermost cv-qualifiers if present.\footnote{
In \lstinline@const int * const@, only the second \lstinline@const@ is ignored.}
These functions have no side effects.
\NOTE: No attempt is made to widen the types (exact unification is used), although the function names may suggest otherwise, \eg @typesCompatible(int, long)@ returns false.

\subsection{Expression Resolution}
The design of the current version of the expression resolver is outlined in the Ph.D.\ thesis by Aaron Moss~\cite{Moss19}.
A summary of the resolver algorithm for each expression type is presented below.

All overloadable operators are modelled as function calls.
For a function call, interpretations of the function and arguments are found recursively.
Then the following steps produce a filtered list of valid interpretations:
\begin{enumerate}
\item
From all possible combinations of interpretations of the function and arguments, those where argument types may be converted to function parameter types are considered valid.
\item
Valid interpretations with the minimum sum of argument costs are kept.
\item
\label{p:argcost}
Argument costs are then discarded; the actual cost for the function call expression is the sum of conversion costs from the argument types to parameter types.
\item
\label{p:returntype}
For each return type, the interpretations with satisfiable assertions are then sorted by actual cost computed in step~\ref{p:argcost}.
If, for a given type, the minimum-cost interpretation is not unique, that return type is ambiguous.
If the minimum-cost interpretation is unique but contains an ambiguous argument, it is also ambiguous.
\end{enumerate}
Therefore, for each return type, the resolver produces:
\begin{itemize}
\item
no alternatives
\item
a single valid alternative
\item
an ambiguous alternative
\end{itemize}
\NOTE: An ambiguous alternative may be discarded at the parent expression because a different return type matches the parent expression better.

The \emph{non}-overloadable expressions in \CFA are: cast expressions, address-of (unary @&@) expressions, short-circuiting logical expressions (@&&@, @||@) and the ternary conditional expression (@?:@).

For a cast expression, the argument interpretations whose types are convertible to the target type are kept.
Then the result is selected by lowest argument cost, and further by lowest conversion cost to the target type.
If the lowest cost is still not unique, or an ambiguous argument interpretation is selected, the cast expression is ambiguous.
In an expression statement, the top-level expression is implicitly cast to @void@.

For an address-of expression, only lvalue results are kept and the minimum cost is selected.
For the logical expressions @&&@ and @||@, arguments are implicitly cast to @bool@, and the rules for cast expressions above apply.
For the ternary conditional expression, the condition is implicitly cast to @bool@, and the branch expressions must have compatible types.
Each pair of compatible branch expression types produces a possible interpretation, and the cost is defined as the sum of the expression costs plus the sum of conversion costs to the common type.

\subsection{Conversion and Application Cost}
Some parts of the cost system were unclear in the previous documentation (the Moss thesis~\cite{Moss19}, Section 4.1.2); clarifications are presented in this section.
\begin{enumerate}
\item
Conversion to a type denoted by a type parameter may incur additional cost if the match is not exact.
For example, if a function is declared to accept @(T, T)@ and receives @(int, long)@, @T@ is deduced to be @long@ and an additional widening conversion cost is added for @int@ to @T@.
\item
The specialization level of a function is the sum of the least depth of an appearance of a type parameter (counting pointers, references and parameterized types), plus the number of assertions.
A higher specialization level is favoured if argument conversion costs are equal.
\item
Coercion of pointer types is only allowed in explicit cast expressions; the only allowed implicit pointer casts are adding qualifiers to the base type and casting to @void *@, and these count as safe conversions.
Note that an implicit cast from @void *@ to other pointer types is no longer valid, unlike in standard C.
\end{enumerate}

\subsection{Assertion Satisfaction}
The resolver tries to satisfy assertions on expressions only when needed: either while selecting from multiple alternatives of the same result type for a function call (step~\ref{p:returntype} of resolving function calls) or upon reaching the top level of an expression statement.
Unsatisfiable alternatives are discarded.
Satisfiable alternatives receive \textbf{implicit parameters}: in \CFA, parametric functions may be separately compiled, as opposed to \CC templates, which are only compiled at instantiation.
Given the parametric function-definition:
\begin{C++}
forall (otype T | {void foo(T);}) void bar (T t) { foo(t); }
\end{C++}
the function @bar@ does not know which @foo@ to call when compiled without knowing the call site, so it requests a function pointer to be passed as an extra argument.
At the call site, implicit parameters are automatically inserted by the compiler.
Implementation of implicit parameters is discussed in \VRef[Appendix]{s:ImplementationParametricFunctions}.

\section{Tests}
\subsection{Test Suites}
Automatic test suites are located under the @tests/@ directory.
A test case consists of an input \CFA source file (suffix @.cfa@), and an expected output file located in the @tests/.expect/@ directory, with the same file name ending with suffix @.txt@.
For example, the test @tests/tuple/tupleCast.cfa@ has the following files:
\begin{C++}
tests/
  tuple/
    .expect/
      tupleCast.txt
    tupleCast.cfa
\end{C++}
If compilation fails, the error output is compared to the expect file.
If the compilation succeeds but does not generate an executable, the compilation output is compared to the expect file.
If the compilation succeeds and generates an executable, the executable is run and its output is compared to the expect file.

To run the tests, execute the test script @test.py@ under the @tests/@ directory, with a list of test names to be run, or @--all@ (or @make all-tests@) to run all tests.
The test script reports each test's success or failure, compilation time and program run time.
To see all options available for @test.py@, use the @--help@ option.

\subsection{Performance Reports}
To turn on performance reports, pass the @-XCFA -S@ flag to the compiler.
Three kinds of performance reports are available:
\begin{enumerate}
\item
Time, reports time spent in each compilation step
\item
Heap, reports number of dynamic memory allocations, total bytes allocated, and maximum heap memory usage
\item
Counters, for certain predefined statistics; counters can be registered anywhere in the compiler as static objects, and the interface can be found at @Common/Stats/Counter.h@.
\end{enumerate}
It is suggested to run performance tests with optimization (@g++@ flag @-O3@).

\appendix
\section{Appendix}

\subsection{Kinds of Type Parameters}
\label{s:KindsTypeParameters}
A type parameter in a @forall@ clause has one of 3 kinds:
\begin{enumerate}[listparindent=0pt]
\item
@dtype@: any data type (built-in or user defined) that is not a concrete type.
A non-concrete type is an incomplete type, such as an opaque type or a pointer/reference with an implicit (pointer) size and implicitly generated reference and dereference operations.
\item
@otype@: any data type (built-in or user defined) that is a concrete type.
A concrete type is a complete type, \ie a type that can be used to create a variable, which also implicitly asserts the existence of default and copy constructors, assignment, and a destructor\footnote{\CFA implements the same automatic resource management (RAII) semantics as \CC.}.
% \item
% @ftype@: any function type.
%
% @ftype@ provides two purposes:
% \begin{itemize}
% \item
% Differentiate function pointer from data pointer because (in theory) some systems have different sizes for these pointers.
% \item
% Disallow a function pointer to match an overloaded data pointer, since variables and functions can have the same names.
% \end{itemize}
\item
@ttype@: tuple (variadic) type.
Restricted to the type of the last parameter in a function, it provides a type-safe way to implement variadic functions.
Note, however, that it has certain restrictions, as described in the implementation section below.
\end{enumerate}

\subsection{GNU C Nested Functions}
\CFA is designed to be mostly compatible with GNU C, an extension to the ISO C99 and C11 standards.
The \CFA compiler also implements some language features using GCC extensions, most notably nested functions.

In ISO C, function definitions are not allowed to be nested.
GCC allows nested functions with full lexical scoping.
The following example is taken from the GCC documentation\footnote{\url{https://gcc.gnu.org/onlinedocs/gcc/Nested-Functions.html}}:
\begin{C++}
void bar( int * array, int offset, int size ) {
	int access( int * array, int index ) {
		return array[index + offset];
	}
	int i;
	/* ... */
	for ( i = 0; i < size; i++ )
		/* ... */ access (array, i) /* ... */
}
\end{C++}
GCC nested functions behave identically to \CC lambda functions with default by-reference capture (stack-allocated, with lifetime ending upon exiting the declared block), while also being passable as arguments with standard function-pointer types.

\subsection{Implementation of Parametric Functions}
\label{s:ImplementationParametricFunctions}
\CFA implements parametric functions using the implicit parameter approach: required assertions are passed to the callee by function pointers; the size of a parametric type must also be known if the type is referenced directly (\ie not as a pointer).
The implementation is similar to the one in Scala\footnote{\url{https://www.scala-lang.org/files/archive/spec/2.13/07-implicits.html}}, with some notable differences in resolution:
\begin{enumerate}
\item
All types, variables, and functions are candidates for implicit parameters
\item
The parameter (assertion) name must match the actual declarations.
\end{enumerate}
For example, the \CFA function declaration
\begin{cfa}
forall( otype T | { int foo( T, int ); } )
int bar(T);
\end{cfa}
after implicit parameter expansion, has the actual signature\footnote{\textbf{otype} also requires the type to have constructor and destructor, which are the first two function pointers preceding the one for \textbf{foo}.}
\begin{C++}
int bar( T, size_t, void (*)(T&), void (*)(T&), int (*)(T, int) );
\end{C++}

The implicit parameter approach has an apparent issue: when the satisfying declaration is also parametric, it may require its own implicit parameters too.
That also causes the supplied implicit parameter to have a different \textbf{actual} type than the \textbf{nominal} type, so it cannot be passed directly.
Therefore, a wrapper with matching actual type must be created, and this is where GCC nested functions are used internally by the compiler.

Consider the following program:
\begin{cfa}
int assertion(int);
forall( otype T | { int assertion(T); } )
void foo(T);
forall( otype T | { void foo(T); } )
void bar(T t) { foo(t); }
\end{cfa}
The \CFA compiler translates the program to the non-parametric form\footnote{In the final code output, \lstinline@T@ needs to be replaced by an opaque type, and arguments must be accessed by a frame-pointer offset table, due to the unknown sizes.
The code presented here is simplified for better understanding.}
\begin{C++}
// ctor, dtor and size arguments are omitted
void foo(T, int (*)(T));
void bar(T t, void (*foo)(T)) { foo(t); }
\end{C++}
However, when @bar(1)@ is called, @foo@ cannot be directly provided as an argument:
\begin{C++}
bar(1, foo); // WRONG: foo has different actual type
\end{C++}
and an additional step is required:
\begin{C++}
{
	void _foo_wrapper(int t) { foo( t, assertion ); }
	bar( 1, _foo_wrapper );
}
\end{C++}
Nested assertions and implicit parameter creation may continue indefinitely.
This issue is a limitation of the implicit-parameter implementation.
In particular, polymorphic variadic recursion must be structural (\ie the number of arguments decreases in every possible recursive call), otherwise code generation gets into an infinite loop.
The \CFA compiler sets a limit on assertion depth and reports an error if assertion resolution does not terminate within the limit (as for \lstinline[language=C++]@templates@ in \CC).

\addcontentsline{toc}{section}{\refname}
\bibliographystyle{plain}
\bibliography{pl}

\end{document}

% Local Variables: %
% tab-width: 4 %
% fill-column: 100 %
% compile-command: "make" %
% End: %