Index: doc/theses/fangren_yu_MMath/intro.tex
===================================================================
--- doc/theses/fangren_yu_MMath/intro.tex	(revision 262a8640f3e6326f4901f564082c318c28cc6943)
+++ doc/theses/fangren_yu_MMath/intro.tex	(revision 2980ccb81bba22ad3dcfcfd13e3857a6f2909ee3)
@@ -12,5 +12,5 @@
 Therefore, one of the key goals in \CFA is to push the boundary on overloading, and hence, overload resolution.
 As well, \CFA follows the current trend of replacing nominal inheritance with traits.
-Together, the resulting \CFA type-system has a number of unique features making it different from all other programming languages.
+Together, the resulting \CFA type-system has a number of unique features making it different from other programming languages with expressive, static type-systems.
 
 
@@ -23,5 +23,5 @@
 All computers have multiple types because computer architects optimize the hardware around a few basic types with well defined (mathematical) operations: boolean, integral, floating-point, and occasionally strings.
 A programming language and its compiler present ways to declare types that ultimately map into the ones provided by the underlying hardware.
-These language types are thrust upon programmers, with their syntactic and semantic rules and restrictions.
+These language types are \emph{thrust} upon programmers, with their syntactic and semantic rules and restrictions.
 These rules are used to transform a language expression to a hardware expression.
 Modern programming-languages allow user-defined types and generalize across multiple types using polymorphism.
@@ -34,5 +34,5 @@
 Virtually all programming languages overload the arithmetic operators across the basic computational types using the number and type of parameters and returns.
 Like \CC, \CFA also allows these operators to be overloaded with user-defined types.
-The syntax for operator names uses the @'?'@ character to denote a parameter, \eg unary left and right operators: @?++@ and @++?@, and binary operators @?+?@ and @?<=?@.
+The syntax for operator names uses the @'?'@ character to denote a parameter, \eg left and right unary operators: @?++@ and @++?@, and binary operators @?+?@ and @?<=?@.
 Here, a user-defined type is extended with an addition operation with the same syntax as builtin types.
 \begin{cfa}
@@ -44,5 +44,5 @@
 \end{cfa}
 The type system examines each call site and selects the best matching overloaded function based on the number and types of arguments.
-If there are mixed-mode operands, @2 + 3.5@, the type system, like in C/\CC, attempts (safe) conversions, converting the argument type(s) to the parameter type(s).
+If there are mixed-mode operands, @2 + 3.5@, the type system attempts (safe) conversions, like in C/\CC, converting the argument type(s) to the parameter type(s).
 Conversions are necessary because the hardware rarely supports mixed-mode operations, so both operands must be the same type.
 Note, without implicit conversions, programmers must write an exponential number of functions covering all possible exact-match cases among all possible types.
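+For example, a sketch of the safe conversion for the mixed-mode case:
+\begin{cfa}
+double ?+?( double, double );	$\C{// builtin double addition}$
+double d = 2 @+@ 3.5;			$\C{// int 2 safely converted to 2.0, d == 5.5}$
+\end{cfa}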
@@ -71,5 +71,5 @@
 double d = f( 3 );		$\C{// select (2)}\CRT$
 \end{cfa}
-Alternatively, if the type system looks at the return type, there is an exact match for each call, which matches with programmer intuition and expectation.
+Alternatively, if the type system looks at the return type, there is an exact match for each call, which again matches with programmer intuition and expectation.
 This capability can be taken to the extreme, where there are no function parameters.
 \begin{cfa}
@@ -80,5 +80,5 @@
 \end{cfa}
 Again, there is an exact match for each call.
-If there is no exact match, a set of minimal conversions can be added to find a best match, as for operator overloading.
+If there is no exact match, a set of minimal, safe conversions can be added to find a best match, as for operator overloading.
 
 
@@ -99,6 +99,6 @@
 \end{cfa}
 It is interesting that shadow overloading is considered a normal programming-language feature with only slight software-engineering problems.
-However, variable overloading within a scope is considered extremely dangerous, without any evidence to corroborate this claim.
-Similarly, function overloading occurs silently within the global scope in \CC from @#include@ files all the time without problems.
+However, variable overloading within a scope is often considered extremely dangerous, without any evidence to corroborate this claim.
+In contrast, function overloading in \CC occurs silently within the global scope from @#include@ files all the time without problems.
 
 In \CFA, the type system simply treats overloaded variables as an overloaded function returning a value with no parameters.
@@ -114,5 +114,13 @@
 Hence, the name @MAX@ can replace all the C type-specific names, \eg @INT_MAX@, @LONG_MAX@, @DBL_MAX@, \etc.
 The result is a significant reduction in names to access typed constants.
-% Paraphrasing Shakespeare, ``The \emph{name} is the thing.''.
+
+As an aside, C has separate namespaces for types and variables, allowing overloading between the namespaces, with @struct@ qualification to disambiguate.
+\begin{cfa}
+void S() {
+	struct @S@ { int S; };
+	@struct S@ S;
+	void S( @struct S@ S ) { S.S = 1; };
+}
+\end{cfa}
 
 
@@ -120,10 +128,11 @@
 
 \CFA is unique in providing restricted constant overloading for the values @0@ and @1@, which have special status in C, \eg the value @0@ is both an integer and a pointer literal, so its meaning depends on context.
-In addition, several operations are defined in terms of values @0@ and @1@, \eg:
-\begin{cfa}
-if ( x ) ++x			$\C{// if ( x != 0 ) x += 1;}$
-\end{cfa}
-Every @if@ and iteration statement in C compares the condition with @0@, and every increment and decrement operator is semantically equivalent to adding or subtracting the value @1@ and storing the result.
-These two constants are given types @zero_t@ and @one_t@ in \CFA, which allows overloading various operations for new types that seamlessly connect to all special @0@ and @1@ contexts.
+In addition, several operations are defined in terms of values @0@ and @1@.
+For example, every @if@ and iteration statement in C compares the condition with @0@, and every increment and decrement operator is semantically equivalent to adding or subtracting the value @1@ and storing the result.
+\begin{cfa}
+if ( x ) ++x;     =>   if ( x @!= 0@ ) x @+= 1@;
+for ( ; x; --x )  =>   for ( ; x @!= 0@; x @-= 1@ )
+\end{cfa}
+To generalize this feature, both constants are given types @zero_t@ and @one_t@ in \CFA, which allows overloading various operations for new types that seamlessly work with the special @0@ and @1@ contexts.
 The types @zero_t@ and @one_t@ have special builtin implicit conversions to the various integral types, and a conversion to pointer types for @0@, which allows standard C code involving @0@ and @1@ to work.
 \begin{cfa}
@@ -133,13 +142,13 @@
 S ?=?( S & dst, zero_t ) { dst.[i,j] = 0; return dst; } $\C{// assignment}$
 S ?=?( S & dst, one_t ) { dst.[i,j] = 1; return dst; }
-S ?+=?( S & s, one_t ) { s.[i,j] += 1; return s; } $\C{// increment/decrement}$
+S ?+=?( S & s, one_t ) { s.[i,j] += 1; return s; } $\C{// increment/decrement each field}$
 S ?-=?( S & s, one_t ) { s.[i,j] -= 1; return s; }
 int ?!=?( S s, zero_t ) { return s.i != 0 && s.j != 0; } $\C{// comparison}$
-S s = @0@;
-s = @0@;
+S s = @0@;			$\C{// initialization}$
+s = @0@;			$\C{// assignments}$
 s = @1@;
-if ( @s@ ) @++s@;	$\C{// unary ++/-\,- come from +=/-=}$
-\end{cfa}
-Hence, type @S@ is first-class with respect to the basic types, working with all existing implicit C mechanisms.
+if ( @s@ ) @++s@;	$\C{// special, unary ++/-\,- come implicitly from +=/-=}$
+\end{cfa}
+Here, type @S@ is first-class with respect to the basic types, working with all existing implicit C mechanisms.
 
 
@@ -180,5 +189,5 @@
 \end{cfa}
 In both overloads of @f@, the type system works from the constant initializations inwards and/or outwards to determine the types of all variables and functions.
-Note, like template meta-programming, there could be a new function generated for the second @f@ depending on the types of the arguments, assuming these types are meaningful in the body of @f@.
+Note, like template metaprogramming, there could be a new function generated for the second @f@ depending on the types of the arguments, assuming these types are meaningful in the body of @f@.
 Inferring type constraints by analysing the body of @f@ is possible, and these constraints must be satisfied at each call site by the argument types;
 in this case, parametric polymorphism can allow separate compilation.
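+For example, if the body of @f@ contains @x + x@, the inferable constraint is:
+\begin{cfa}
+forall( T | { T ?+?( T, T ); } )	$\C{// constraint inferred from the body}$
+T f( T x ) { return x @+@ x; }
+\end{cfa}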
@@ -200,5 +209,5 @@
 \begin{cfa}
 
-auto x = 3.0 * 4;
+auto x = 3.0 * i;
 int y;
 auto z = y;
@@ -223,5 +232,5 @@
 This issue is exaggerated with \CC templates, where type names are 100s of characters long, resulting in unreadable error messages.
 \item
-Ensuring the type of secondary variables, always match a primary variable.
+Ensuring the types of secondary variables match the primary variable(s).
 \begin{cfa}
 int x; $\C{// primary variable}$
@@ -284,6 +293,6 @@
 There are full-time Java consultants, who are hired to find memory-management problems in large Java programs.}
 The entire area of Computer-Science data-structures is obsessed with time and space, and that obsession should continue into regular programming.
-Understanding space and time issues are an essential part of the programming craft.
-Given @typedef@ and @typeof@ in \CFA, and the strong need to use the left-hand type in resolution, implicit type-inferencing is unsupported.
+Understanding space and time issues is an essential part of the programming craft.
+Given @typedef@ and @typeof@ in \CFA, and the strong desire to use the left-hand type in resolution, implicit type-inferencing is unsupported.
 Should a significant need arise, this feature can be revisited.
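+For example, a sketch of using @typeof@ to keep secondary variables consistent with a primary variable:
+\begin{cfa}
+int x;				$\C{// primary variable}$
+typeof( x ) y, z;	$\C{// secondary variables follow any change to x's type}$
+\end{cfa}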
 
@@ -291,97 +300,84 @@
 \section{Polymorphism}
 
-\CFA provides polymorphic functions and types, where the polymorphic function can be the constraints types using assertions based on traits.
-
-\subsection{\texorpdfstring{\protect\lstinline{forall} functions}{forall functions}}
-\label{sec:poly-fns}
-
-The signature feature of \CFA is parametric-polymorphic functions~\cite{forceone:impl,Cormack90,Duggan96} with functions generalized using a @forall@ clause (giving the language its name).
+\CFA provides polymorphic functions and types, where a polymorphic function can constrain types using assertions based on traits.
+
+
+\subsection{Polymorphic Function}
+
+The signature feature of the \CFA type-system is parametric-polymorphic functions~\cite{forceone:impl,Cormack90,Duggan96}, generalized using a @forall@ clause (giving the language its name).
 \begin{cfa}
 @forall( T )@ T identity( T val ) { return val; }
 int forty_two = identity( 42 );		$\C{// T is bound to int, forty\_two == 42}$
 \end{cfa}
-This @identity@ function can be applied to any complete \newterm{object type} (or @otype@).
-The type variable @T@ is transformed into a set of additional implicit parameters encoding sufficient information about @T@ to create and return a variable of that type.
-The \CFA implementation passes the size and alignment of the type represented by an @otype@ parameter, as well as an assignment operator, constructor, copy constructor, and destructor.
-If this extra information is not needed, for instance, for a pointer, the type parameter can be declared as a \newterm{data type} (or @dtype@).
-
-In \CFA, the polymorphic runtime cost is spread over each polymorphic call, because more arguments are passed to polymorphic functions;
-the experiments in Section~\ref{sec:eval} show this overhead is similar to \CC virtual function calls.
-A design advantage is that, unlike \CC template functions, \CFA polymorphic functions are compatible with C \emph{separate compilation}, preventing compilation and code bloat.
-
-Since bare polymorphic types provide a restricted set of available operations, \CFA provides a \newterm{type assertion}~\cite[pp.~37-44]{Alphard} mechanism to provide further type information, where type assertions may be variable or function declarations that depend on a polymorphic type variable.
-For example, the function @twice@ can be defined using the \CFA syntax for operator overloading.
-\begin{cfa}
-forall( T @| { T ?+?(T, T); }@ ) T twice( T x ) { return x @+@ x; }  $\C{// ? denotes operands}$
-int val = twice( twice( 3.7 ) );  $\C{// val == 14}$
-\end{cfa}
-This works for any type @T@ with a matching addition operator.
-The polymorphism is achieved by creating a wrapper function for calling @+@ with the @T@ bound to @double@ and then passing this function to the first call of @twice@.
-There is now the option of using the same @twice@ and converting the result into @int@ on assignment or creating another @twice@ with the type parameter @T@ bound to @int@ because \CFA uses the return type~\cite{Cormack81,Baker82,Ada} in its type analysis.
-The first approach has a late conversion from @double@ to @int@ on the final assignment, whereas the second has an early conversion to @int@.
-\CFA minimizes the number of conversions and their potential to lose information;
-hence, it selects the first approach, which corresponds with C programmer intuition.
-
-Crucial to the design of a new programming language are the libraries to access thousands of external software features.
-Like \CC, \CFA inherits a massive compatible library base, where other programming languages must rewrite or provide fragile interlanguage communication with C.
-A simple example is leveraging the existing type-unsafe (@void *@) C @bsearch@ to binary search a sorted float array.
-\begin{cfa}
-void * bsearch( const void * key, const void * base, size_t nmemb, size_t size,
-				int (* compar)( const void *, const void * ));
-int comp( const void * t1, const void * t2 ) {
-	 return *(double *)t1 < *(double *)t2 ? -1 : *(double *)t2 < *(double *)t1 ? 1 : 0;
-}
-double key = 5.0, vals[10] = { /* 10 sorted float values */ };
-double * val = (double *)bsearch( &key, vals, 10, sizeof(vals[0]), comp ); $\C{// search sorted array}$
-\end{cfa}
-This can be augmented simply with generalized, type-safe, \CFA-overloaded wrappers.
-\begin{cfa}
-forall( T | { int ?<?( T, T ); } ) T * bsearch( T key, const T * arr, size_t size ) {
-	int comp( const void * t1, const void * t2 ) { /* as above with double changed to T */ }
-	return (T *)bsearch( &key, arr, size, sizeof(T), comp );
-}
-forall( T | { int ?<?( T, T ); } ) unsigned int bsearch( T key, const T * arr, size_t size ) {
-	T * result = bsearch( key, arr, size );	$\C{// call first version}$
-	return result ? result - arr : size; $\C{// pointer subtraction includes sizeof(T)}$
-}
-double * val = bsearch( 5.0, vals, 10 ); $\C{// selection based on return type}$
-int posn = bsearch( 5.0, vals, 10 );
-\end{cfa}
-The nested function @comp@ provides the hidden interface from typed \CFA to untyped (@void *@) C, plus the cast of the result.
-% FIX
-Providing a hidden @comp@ function in \CC is awkward as lambdas do not use C calling conventions and template declarations cannot appear in block scope.
-In addition, an alternate kind of return is made available: position versus pointer to found element.
-\CC's type system cannot disambiguate between the two versions of @bsearch@ because it does not use the return type in overload resolution, nor can \CC separately compile a template @bsearch@.
-
-\CFA has replacement libraries condensing hundreds of existing C functions into tens of \CFA overloaded functions, all without rewriting the actual computations (see Section~\ref{sec:libraries}).
-For example, it is possible to write a type-safe \CFA wrapper @malloc@ based on the C @malloc@, where the return type supplies the type/size of the allocation, which is impossible in most type systems.
-\begin{cfa}
-forall( T & | sized(T) ) T * malloc( void ) { return (T *)malloc( sizeof(T) ); }
+This @identity@ function can be applied to an \newterm{object type}, \ie a type with a known size and alignment, which is sufficient to stack allocate, default or copy initialize, assign, and delete.
+The \CFA implementation passes the size and alignment for each type parameter, as well as any implicit/explicit constructor, copy constructor, assignment operator, and destructor.
+For an incomplete \newterm{data type}, \eg pointer/reference types, this information is not needed.
+\begin{cfa}
+forall( T * ) T * identity( T * val ) { return val; }
+int i, * ip = identity( &i );
+\end{cfa}
+Unlike \CC template functions, \CFA polymorphic functions are compatible with C \emph{separate compilation}, preventing compilation and code bloat.
+
+To constrain polymorphic types, \CFA uses \newterm{type assertions}~\cite[pp.~37-44]{Alphard} to provide further type information, where type assertions may be variable or function declarations that depend on a polymorphic type variable.
+For example, the function @twice@ works for any type @T@ with a matching addition operator.
+\begin{cfa}
+forall( T @| { T ?+?(T, T); }@ ) T twice( T x ) { return x @+@ x; }
+int val = twice( twice( 3 ) );  $\C{// val == 12}$
+\end{cfa}
+Parametric polymorphism and assertions also occur in the existing type-unsafe (@void *@) C @qsort@ for sorting an array.
+\begin{cfa}
+void qsort( void * base, size_t nmemb, size_t size, int (*cmp)( const void *, const void * ) );
+\end{cfa}
+Here, the polymorphism is type-erasure, and the parametric assertion is the comparison routine, which is explicitly passed.
+\begin{cfa}
+enum { N = 5 };
+double val[N] = { 5.1, 4.1, 3.1, 2.1, 1.1 };
+int cmp( const void * v1, const void * v2 ) { $\C{// compare two doubles}$
+	return *(double *)v1 < *(double *)v2 ? -1 : *(double *)v2 < *(double *)v1 ? 1 : 0;
+}
+qsort( val, N, sizeof( double ), cmp );
+\end{cfa}
+The equivalent type-safe version in \CFA is a wrapper over the C version.
+\begin{cfa}
+forall( ET | { int @?<?@( ET, ET ); } ) $\C{// type must have < operator}$
+void qsort( ET * vals, size_t dim ) {
+	int cmp( const void * t1, const void * t2 ) { $\C{// nested function}$
+		return *(ET *)t1 @<@ *(ET *)t2 ? -1 : *(ET *)t2 @<@ *(ET *)t1 ? 1 : 0;
+	}
+	qsort( vals, dim, sizeof(ET), cmp ); $\C{// call C version}$
+}
+qsort( val, N );  $\C{// deduce type double, and pass builtin < for double}$
+\end{cfa}
+The nested function @cmp@ is implicitly built and provides the interface from typed \CFA to untyped (@void *@) C.
+Providing a hidden @cmp@ function in \CC is awkward as lambdas do not use C calling conventions and template declarations cannot appear in block scope.
+% In addition, an alternate kind of return is made available: position versus pointer to found element.
+% \CC's type system cannot disambiguate between the two versions of @bsearch@ because it does not use the return type in overload resolution, nor can \CC separately compile a template @bsearch@.
+Call-site inferencing and nested functions provide a localized form of inheritance.
+For example, the \CFA @qsort@ can be made to sort in descending order by locally changing the behaviour of @<@.
+\begin{cfa}
+{
+	int ?<?( double x, double y ) { return x @>@ y; } $\C{// locally override behaviour}$
+	qsort( val, N );							$\C{// descending sort}$
+}
+\end{cfa}
+The local version of @?<?@ overrides the built-in @?<?@, so it is the one passed to @qsort@;
+because it performs @?>?@, @qsort@ sorts in descending order.
+Hence, any number of assertion functions can be overridden locally to maximize the reuse of existing functions and types, without the construction of a named inheritance hierarchy.
+A final example is a type-safe wrapper for C @malloc@, where the return type supplies the type/size of the allocation, which is impossible in most type systems.
+\begin{cfa}
+static inline forall( T & | sized(T) )
+T * malloc( void ) {
+	if ( _Alignof(T) <= __BIGGEST_ALIGNMENT__ ) return (T *)malloc( sizeof(T) ); // C allocation
+	else return (T *)memalign( _Alignof(T), sizeof(T) );
+}
 // select type and size from left-hand side
-int * ip = malloc();  double * dp = malloc();  struct S {...} * sp = malloc();
-\end{cfa}
-
-Call site inferencing and nested functions provide a localized form of inheritance.
-For example, the \CFA @qsort@ only sorts in ascending order using @<@.
-However, it is trivial to locally change this behavior.
-\begin{cfa}
-forall( T | { int ?<?( T, T ); } ) void qsort( const T * arr, size_t size ) { /* use C qsort */ }
-int main() {
-	int ?<?( double x, double y ) { return x @>@ y; } $\C{// locally override behavior}$
-	qsort( vals, 10 );							$\C{// descending sort}$
-}
-\end{cfa}
-The local version of @?<?@ performs @?>?@ overriding the built-in @?<?@ so it is passed to @qsort@.
-Therefore, programmers can easily form local environments, adding and modifying appropriate functions, to maximize the reuse of other existing functions and types.
-
-To reduce duplication, it is possible to distribute a group of @forall@ (and storage-class qualifiers) over functions/types, such that each block declaration is prefixed by the group (see the example in Appendix~\ref{s:CforallStack}).
-\begin{cfa}
-forall( @T@ ) {							$\C{// distribution block, add forall qualifier to declarations}$
-	struct stack { stack_node(@T@) * head; };	$\C{// generic type}$
-	inline {									$\C{// nested distribution block, add forall/inline to declarations}$
-		void push( stack(@T@) & s, @T@ value ) ...	$\C{// generic operations}$
-	}
-}
-\end{cfa}
+int * ip = malloc();  double * dp = malloc();  $@$[aligned(64)] struct S {...} * sp = malloc();
+\end{cfa}
+The @sized@ assertion passes the size and alignment, as a data type has no implicit assertions.
+Both assertions are used in @malloc@ via @sizeof@ and @_Alignof@.
+
+These mechanisms are used to construct type-safe wrapper-libraries condensing hundreds of existing C functions into tens of \CFA overloaded functions.
+Hence, existing C legacy code is leveraged as much as possible;
+other programming languages, even \CC, must build supporting libraries from scratch.
 
 
@@ -390,10 +386,9 @@
 \CFA provides \newterm{traits} to name a group of type assertions, where the trait name allows specifying the same set of assertions in multiple locations, preventing repetition mistakes at each function declaration.
 \begin{cquote}
-\lstDeleteShortInline@%
-\begin{tabular}{@{}l@{\hspace{\parindentlnth}}|@{\hspace{\parindentlnth}}l@{}}
+\begin{tabular}{@{}l|@{\hspace{10pt}}l@{}}
 \begin{cfa}
 trait @sumable@( T ) {
 	void @?{}@( T &, zero_t ); // 0 literal constructor
-	T ?+?( T, T );			 // assortment of additions
+	T ?+?( T, T );		 // assortment of additions
 	T @?+=?@( T &, T );
 	T ++?( T & );
@@ -412,11 +407,10 @@
 \end{cfa}
 \end{tabular}
-\lstMakeShortInline@%
 \end{cquote}
-
-Note that the @sumable@ trait does not include a copy constructor needed for the right side of @?+=?@ and return;
-it is provided by @otype@, which is syntactic sugar for the following trait.
-\begin{cfa}
-trait otype( T & | sized(T) ) {  // sized is a pseudo-trait for types with known size and alignment
+Traits are simply flattened at the use point, as if written in full by the programmer, where traits often contain overlapping assertions, \eg operator @+@.
+Hence, trait names play no part in type equivalence.
+Note, the type @T@ is an object type and hence has the implicit internal trait @otype@.
+\begin{cfa}
+trait otype( T & | sized(T) ) {
 	void ?{}( T & );						$\C{// default constructor}$
 	void ?{}( T &, T );						$\C{// copy constructor}$
@@ -425,38 +419,5 @@
 };
 \end{cfa}
-Given the information provided for an @otype@, variables of polymorphic type can be treated as if they were a complete type: stack allocatable, default or copy initialized, assigned, and deleted.
-
-In summation, the \CFA type system uses \newterm{nominal typing} for concrete types, matching with the C type system, and \newterm{structural typing} for polymorphic types.
-Hence, trait names play no part in type equivalence;
-the names are simply macros for a list of polymorphic assertions, which are expanded at usage sites.
-Nevertheless, trait names form a logical subtype hierarchy with @dtype@ at the top, where traits often contain overlapping assertions, \eg operator @+@.
-Traits are used like interfaces in Java or abstract base classes in \CC, but without the nominal inheritance relationships.
-Instead, each polymorphic function (or generic type) defines the structural type needed for its execution (polymorphic type key), and this key is fulfilled at each call site from the lexical environment, which is similar to the Go~\cite{Go} interfaces.
-Hence, new lexical scopes and nested functions are used extensively to create local subtypes, as in the @qsort@ example, without having to manage a nominal inheritance hierarchy.
-% (Nominal inheritance can be approximated with traits using marker variables or functions, as is done in Go.)
-
-% Nominal inheritance can be simulated with traits using marker variables or functions:
-% \begin{cfa}
-% trait nominal(T) {
-%     T is_nominal;
-% };
-% int is_nominal;								$\C{// int now satisfies the nominal trait}$
-% \end{cfa}
-%
-% Traits, however, are significantly more powerful than nominal-inheritance interfaces; most notably, traits may be used to declare a relationship \emph{among} multiple types, a property that may be difficult or impossible to represent in nominal-inheritance type systems:
-% \begin{cfa}
-% trait pointer_like(Ptr, El) {
-%     lvalue El *?(Ptr);						$\C{// Ptr can be dereferenced into a modifiable value of type El}$
-% }
-% struct list {
-%     int value;
-%     list * next;								$\C{// may omit "struct" on type names as in \CC}$
-% };
-% typedef list * list_iterator;
-%
-% lvalue int *?( list_iterator it ) { return it->value; }
-% \end{cfa}
-% In the example above, @(list_iterator, int)@ satisfies @pointer_like@ by the user-defined dereference function, and @(list_iterator, list)@ also satisfies @pointer_like@ by the built-in dereference operator for pointers. Given a declaration @list_iterator it@, @*it@ can be either an @int@ or a @list@, with the meaning disambiguated by context (\eg @int x = *it;@ interprets @*it@ as an @int@, while @(*it).value = 42;@ interprets @*it@ as a @list@).
-% While a nominal-inheritance system with associated types could model one of those two relationships by making @El@ an associated type of @Ptr@ in the @pointer_like@ implementation, few such systems could model both relationships simultaneously.
+These implicit routines provide the copy constructor needed by the @sumable@ operator @?+=?@ for its right-hand operand and return value.
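+For example, a sketch showing two differently named traits with the same assertions are interchangeable:
+\begin{cfa}
+trait addable( T ) { T ?+?( T, T ); };
+trait summable( T ) { T ?+?( T, T ); };	$\C{// same assertions as addable}$
+forall( T | addable( T ) ) T add2( T a, T b ) { return a @+@ b; }
+forall( T | summable( T ) ) T sum2( T a, T b ) { return add2( a, b ); }	$\C{// summable satisfies addable}$
+\end{cfa}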
 
 
@@ -465,12 +426,17 @@
 A significant shortcoming of standard C is the lack of reusable type-safe abstractions for generic data structures and algorithms.
 Broadly speaking, there are three approaches to implement abstract data structures in C.
-One approach is to write bespoke data structures for each context in which they are needed.
-While this approach is flexible and supports integration with the C type checker and tooling, it is also tedious and error prone, especially for more complex data structures.
-A second approach is to use @void *@-based polymorphism, \eg the C standard library functions @bsearch@ and @qsort@, which allow for the reuse of code with common functionality.
-However, basing all polymorphism on @void *@ eliminates the type checker's ability to ensure that argument types are properly matched, often requiring a number of extra function parameters, pointer indirection, and dynamic allocation that is otherwise not needed.
-A third approach to generic code is to use preprocessor macros, which does allow the generated code to be both generic and type checked, but errors may be difficult to interpret.
-Furthermore, writing and using preprocessor macros is unnatural and inflexible.
-
-\CC, Java, and other languages use \newterm{generic types} to produce type-safe abstract data types.
+\begin{enumerate}[leftmargin=*]
+\item
+Write bespoke data structures for each context in which they are needed.
+While this approach is flexible and supports integration with the C type checker and tooling, it is tedious and error prone, especially for more complex data structures.
+\item
+Use @void *@-based polymorphism, \eg the C standard library functions @bsearch@ and @qsort@, which allow for the reuse of code with common functionality.
+However, this approach eliminates the type checker's ability to ensure argument types are properly matched, often requiring a number of extra function parameters, pointer indirection, and dynamic allocation that is otherwise unnecessary.
+\item
+Use preprocessor macros, similar to \CC @templates@, to generate code that is both generic and type checked, but errors may be difficult to interpret.
+Furthermore, writing and using preprocessor macros is difficult and inflexible.
+\end{enumerate}
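+For example, a sketch of the macro approach (3) for a generic pair:
+\begin{cfa}
+#define PAIR( F, S ) struct { F first; S second; }
+PAIR( const char *, int ) p = { "magic", 42 };	$\C{// type checked after expansion}$
+\end{cfa}
+However, each expansion creates a distinct anonymous structure-type, so two such pairs are incompatible with each other.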
+
+\CC, Java, and other languages use \newterm{generic types} to produce type-safe abstract data-types.
 \CFA generic types integrate efficiently and naturally with the existing polymorphic functions, while retaining backward compatibility with C and providing separate compilation.
 However, for known concrete parameters, the generic-type definition can be inlined, like \CC templates.
@@ -478,107 +444,65 @@
 A generic type can be declared by placing a @forall@ specifier on a @struct@ or @union@ declaration and instantiated using a parenthesized list of types after the type name.
 \begin{cquote}
-\lstDeleteShortInline@%
-\begin{tabular}{@{}l|@{\hspace{\parindentlnth}}l@{}}
-\begin{cfa}
-@forall( R, S )@ struct pair {
-	R first;	S second;
+\begin{tabular}{@{}l|@{\hspace{10pt}}l@{}}
+\begin{cfa}
+@forall( F, S )@ struct pair {
+	F first;	S second;
 };
-@forall( T )@ // dynamic
-T value( pair(const char *, T) p ) { return p.second; }
-@forall( dtype F, T )@ // dtype-static (concrete)
-T value( pair(F *, T * ) p) { return *p.second; }
+@forall( F, S )@ // object
+S second( pair( F, S ) p ) { return p.second; }
+@forall( F *, S * )@ // sized
+S * second( pair( F *, S * ) p ) { return p.second; }
 \end{cfa}
 &
 \begin{cfa}
-pair(const char *, int) p = {"magic", 42}; // concrete
-int i = value( p );
-pair(void *, int *) q = { 0, &p.second }; // concrete
-i = value( q );
+pair( double, int ) dpr = { 3.5, 42 };
+int i = second( dpr );
+pair( void *, int * ) vipr = { 0p, &i };
+int * ip = second( vipr );
 double d = 1.0;
-pair(double *, double *) r = { &d, &d }; // concrete
-d = value( r );
+pair( int *, double * ) idpr = { &i, &d };
+double * dp = second( idpr );
 \end{cfa}
 \end{tabular}
-\lstMakeShortInline@%
 \end{cquote}
-
-\CFA classifies generic types as either \newterm{concrete} or \newterm{dynamic}.
-Concrete types have a fixed memory layout regardless of type parameters, whereas dynamic types vary in memory layout depending on their type parameters.
-A \newterm{dtype-static} type has polymorphic parameters but is still concrete.
-Polymorphic pointers are an example of dtype-static types;
-given some type variable @T@, @T@ is a polymorphic type, as is @T *@, but @T *@ has a fixed size and can, therefore, be represented by @void *@ in code generation.
-
-\CFA generic types also allow checked argument constraints.
-For example, the following declaration of a sorted set type ensures the set key supports equality and relational comparison.
-\begin{cfa}
-forall( Key | { _Bool ?==?(Key, Key); _Bool ?<?(Key, Key); } ) struct sorted_set;
-\end{cfa}
-
-
-\subsection{Concrete generic types}
-
-The \CFA translator template expands concrete generic types into new structure types, affording maximal inlining.
-To enable interoperation among equivalent instantiations of a generic type, the translator saves the set of instantiations currently in scope and reuses the generated structure declarations where appropriate.
-A function declaration that accepts or returns a concrete generic type produces a declaration for the instantiated structure in the same scope, which all callers may reuse.
-For example, the concrete instantiation for @pair( const char *, int )@ is
-\begin{cfa}
-struct _pair_conc0 {
-	const char * first;  int second;
-};
-\end{cfa}
-
-A concrete generic type with dtype-static parameters is also expanded to a structure type, but this type is used for all matching instantiations.
-In the above example, the @pair( F *, T * )@ parameter to @value@ is such a type; its expansion is below, and it is used as the type of the variables @q@ and @r@ as well, with casts for member access where appropriate.
-\begin{cfa}
-struct _pair_conc1 {
-	void * first, * second;
-};
-\end{cfa}
-
-
-\subsection{Dynamic generic types}
-
-Though \CFA implements concrete generic types efficiently, it also has a fully general system for dynamic generic types.
-As mentioned in Section~\ref{sec:poly-fns}, @otype@ function parameters (in fact, all @sized@ polymorphic parameters) come with implicit size and alignment parameters provided by the caller.
-Dynamic generic types also have an \newterm{offset array} containing structure-member offsets.
-A dynamic generic @union@ needs no such offset array, as all members are at offset 0, but size and alignment are still necessary.
-Access to members of a dynamic structure is provided at runtime via base displacement addressing
-% FIX
-using the structure pointer and the member offset (similar to the @offsetof@ macro), moving a compile-time offset calculation to runtime.
-
-The offset arrays are statically generated where possible.
-If a dynamic generic type is declared to be passed or returned by value from a polymorphic function, the translator can safely assume that the generic type is complete (\ie has a known layout) at any call site, and the offset array is passed from the caller;
-if the generic type is concrete at the call site, the elements of this offset array can even be statically generated using the C @offsetof@ macro.
-As an example, the body of the second @value@ function is implemented as
-\begin{cfa}
-_assign_T( _retval, p + _offsetof_pair[1] ); $\C{// return *p.second}$
-\end{cfa}
-\newpage
-\noindent
-Here, @_assign_T@ is passed in as an implicit parameter from @T@, and takes two @T *@ (@void *@ in the generated code), a destination and a source, and @_retval@ is the pointer to a caller-allocated buffer for the return value, the usual \CFA method to handle dynamically sized return types.
-@_offsetof_pair@ is the offset array passed into @value@;
-this array is generated at the call site as
-\begin{cfa}
-size_t _offsetof_pair[] = { offsetof( _pair_conc0, first ), offsetof( _pair_conc0, second ) }
-\end{cfa}
-
-In some cases, the offset arrays cannot be statically generated.
-For instance, modularity is generally provided in C by including an opaque forward declaration of a structure and associated accessor and mutator functions in a header file, with the actual implementations in a separately compiled @.c@ file.
-\CFA supports this pattern for generic types, but the caller does not know the actual layout or size of the dynamic generic type and only holds it by a pointer.
-The \CFA translator automatically generates \newterm{layout functions} for cases where the size, alignment, and offset array of a generic struct cannot be passed into a function from that function's caller.
-These layout functions take as arguments pointers to size and alignment variables and a caller-allocated array of member offsets, as well as the size and alignment of all @sized@ parameters to the generic structure (un@sized@ parameters are forbidden from being used in a context that affects layout).
-Results of these layout functions are cached so that they are only computed once per type per function. %, as in the example below for @pair@.
-Layout functions also allow generic types to be used in a function definition without reflecting them in the function signature.
-For instance, a function that strips duplicate values from an unsorted @vector(T)@ likely has a pointer to the vector as its only explicit parameter, but uses some sort of @set(T)@ internally to test for duplicate values.
-This function could acquire the layout for @set(T)@ by calling its layout function with the layout of @T@ implicitly passed into the function.
-
-Whether a type is concrete, dtype-static, or dynamic is decided solely on the @forall@'s type parameters.
-This design allows opaque forward declarations of generic types, \eg @forall(T)@ @struct Box@ -- like in C, all uses of @Box(T)@ can be separately compiled, and callers from other translation units know the proper calling conventions to use.
-If the definition of a structure type is included in deciding whether a generic type is dynamic or concrete, some further types may be recognized as dtype-static (\eg @forall(T)@ @struct unique_ptr { T * p }@ does not depend on @T@ for its layout, but the existence of an @otype@ parameter means that it \emph{could}.);
-however, preserving separate compilation (and the associated C compatibility) in the existing design is judged to be an appropriate trade-off.
+\CFA generic types are either \newterm{fixed} or \newterm{dynamic} sized.
+Fixed-size types have a fixed memory layout regardless of type parameters, whereas dynamic types vary in memory layout depending on their type parameters.
+For example, the type variable @T *@ is fixed size and is represented by @void *@ in code generation;
+whereas the type variable @T@ is dynamic, with its size set at the point of instantiation.
+The difference between fixed and dynamic is the complexity and cost of field access.
+For fixed types, field offsets are computed (known) at compile time and embedded as displacements in instructions.
+For dynamic types, field offsets are computed at the call site (at compile time when the instantiation is concrete there), stored in an array of offset values, passed as a polymorphic parameter, and added to the structure address for each field dereference within a polymorphic routine.
+See~\cite[\S~3.2]{Moss19} for complete implementation details.
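+As a sketch of the mechanism (the generated names are illustrative), a caller with a concrete instantiation such as @pair( double, int )@ can build the offset array statically with the C @offsetof@ macro over the generated structure, and the polymorphic routine adds the appropriate offset to the structure address for each field access:
+\begin{cfa}
+size_t _offsetof_pair[] = { offsetof( _pair_conc0, first ), offsetof( _pair_conc0, second ) };
+// within the polymorphic routine, p.second becomes *(p + _offsetof_pair[1])
+\end{cfa}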
+
+Currently, \CFA generic types allow assertions.
+For example, the following declaration of a sorted-set type ensures the set key supports equality and relational comparison.
+\begin{cfa}
+forall( Elem, @Key@ | { _Bool ?==?( Key, Key ); _Bool ?<?( Key, Key ); } )
+struct Sorted_Set { Elem elem; @Key@ key; ... };
+\end{cfa}
+However, the operations that insert/remove elements from the set should not appear as part of the generic type's assertions.
+\begin{cfa}
+forall( @Elem@ | /* any assertions on element type */ ) {
+	void insert( Sorted_Set set, @Elem@ elem ) { ... }
+	bool remove( Sorted_Set set, @Elem@ elem ) { ... } // false => element not present
+	... // more set operations
+} // distribution
+\end{cfa}
+(Note, the @forall@ clause can be distributed across multiple functions.)
+For software-engineering reasons, the set assertions would be refactored into a trait to allow alternative implementations, like a Java \lstinline[language=java]{interface}.
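+A possible refactoring is the following sketch (the trait name is illustrative):
+\begin{cfa}
+trait comparable( Key ) {
+	_Bool ?==?( Key, Key );
+	_Bool ?<?( Key, Key );
+};
+forall( Elem, Key | comparable( Key ) )
+struct Sorted_Set { Elem elem; Key key; ... };
+\end{cfa}
+Alternative key implementations then only need to satisfy @comparable@, without any inheritance relationship.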
+
+In summary, the \CFA type system inherits \newterm{nominal typing} for concrete types from C, and adds \newterm{structural typing} for polymorphic types.
+Traits are used like interfaces in Java or abstract base-classes in \CC, but without the nominal inheritance relationships.
+Instead, each polymorphic function or generic type defines the structural type needed for its execution, which is fulfilled at each call site from the lexical environment, like Go~\cite{Go} or Rust~\cite{Rust} interfaces.
+Hence, new lexical scopes and nested functions are used extensively to create local subtypes, as in the @qsort@ example, without having to manage a nominal inheritance hierarchy.
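+As a sketch of this idiom (the signature is illustrative), a nested definition of @?<?@ in a local scope changes the ordering used by a polymorphic sort for the duration of that scope:
+\begin{cfa}
+forall( T | { int ?<?( T, T ); } )
+void qsort( T * arr, size_t size );
+{
+	int ?<?( double x, double y ) { return x > y; }  $\C{// locally reverse ordering}$
+	qsort( vals, 10 );  $\C{// sort descending}$
+}
+\end{cfa}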
 
 
 \section{Contributions}
 
+\begin{enumerate}
+\item
+\item
+\item
+\end{enumerate}
 
 
