Index: doc/theses/fangren_yu_MMath/content1.tex
===================================================================
--- doc/theses/fangren_yu_MMath/content1.tex	(revision b723b630250eb742c4fe13ca63afabe6f0ad2e3c)
+++ doc/theses/fangren_yu_MMath/content1.tex	(revision c82dad4e91592787d98be7b164b82f68e16bb70d)
@@ -15,12 +15,13 @@
 Alternatively, use a reference when its primary purpose is to alias a value, \eg a function parameter that does not copy the argument (performance reason).
 Here, manipulating the value is the primary operation, while changing the pointer address is the secondary operation.
-Succinctly, if the address often changes, use a pointer;
-if the value often changes, use a reference.
-Note, \CC made the reference address immutable starting a \emph{belief} that immutability is a fundamental aspect of a reference's pointer, resulting in a semantic asymmetry between the pointer and reference.
-\CFA adopts a uniform policy between pointers and references where mutability is a settable property at the point of declaration.
+Succinctly, if the address changes often, use a pointer;
+if the value changes often, use a reference.
+Note, \CC made its reference address immutable, stemming from a \emph{belief} that immutability is a fundamental aspect of a reference's pointer.
+The result is asymmetric semantics between the pointer and reference.
+\CFA adopts a uniform policy between pointers and references where mutability is a separate property set at the point of declaration.
 
 The following examples shows how pointers and references are treated uniformly in \CFA.
 \begin{cfa}[numbers=left,numberblanklines=false]
-int x = 1, y = 2, z = 3;
+int x = 1, y = 2, z = 3;$\label{p:refexamples}$
 int * p1 = &x, ** p2 = &p1,  *** p3 = &p2,	$\C{// pointers to x}$
 	@&@ r1 = x,  @&&@ r2 = r1,   @&&&@ r3 = r2;	$\C{// references to x}$
@@ -34,5 +35,5 @@
 \end{cfa}
 Like pointers, reference can be cascaded, \ie a reference to a reference, \eg @&& r2@.\footnote{
-\CC uses \lstinline{&&} for rvalue reference a feature for move semantics and handling the \lstinline{const} Hell problem.}
+\CC uses \lstinline{&&} for rvalue reference, a feature for move semantics and handling the \lstinline{const} Hell problem.}
 Usage of a reference variable automatically performs the same number of dereferences as the number of references in its declaration, \eg @r3@ becomes @***r3@.
 Finally, to reassign a reference's address needs a mechanism to stop the auto-referencing, which is accomplished by using a single reference to cancel all the auto-dereferencing, \eg @&r3 = &y@ resets @r3@'s address to point to @y@.
@@ -46,7 +47,9 @@
     \item For an expression $e$ with type $T\ \&_1...\&_n$, \ie $T$ followed by $n$ references, where $T$ is not a reference type, the expression $\&T$ (address of $T$) has type $T *$ followed by $n - 1$ references.
     \item For an expression $e$ with type $T *\&_1...\&_n$, \ie $T *$  followed by $n$ references, the expression $* T$ (dereference $T$) has type $T$ followed by $n + 1$ references.
-	This is the reverse of the previous rule, such that address-of and dereference operators are perfect inverses.
-    \item When matching parameter and argument types in a function call context, the number of references on the argument type is stripped off to match the number of references on the parameter type.\footnote{
-	\CFA handles the \lstinline{const} Hell problem by allowing rvalue expressions to be converted to reference values by implicitly creating a temporary variable, with some restrictions.}
+	This rule is the reverse of the previous rule, such that address-of and dereference operators are perfect inverses.
+    \item When matching argument and parameter types at a function call, the number of references on the argument type is stripped off to match the number of references on the parameter type.\footnote{
+	\CFA handles the \lstinline{const} Hell problem by allowing rvalue expressions to be converted to reference values by implicitly creating a temporary variable, with some restrictions.
+	As well, there is a warning that the output nature of the reference is lost.
+	Hence, a single function handles \lstinline{const} and non-\lstinline{const} as constness is handled at the call site.}
 	In an assignment context, the left-hand-side operand-type is always reduced to a single reference.
 \end{enumerate}
@@ -90,12 +93,12 @@
 \end{cfa}
 
-In the initial \CFA reference design, the goal was to make the reference type a \emph{real} data type \vs a restricted \CC reference, which is mostly used for choosing the argument-passing method, by-value or by-reference.
+In the initial \CFA reference design, the goal was to make the reference type a \emph{real} data type \vs a restricted \CC reference, which is mostly used for choosing the argument-passing method, \ie by-value or by-reference.
 However, there is an inherent ambiguity for auto-dereferencing: every argument expression involving a reference variable can potentially mean passing the reference's value or address.
 Without any restrictions, this ambiguity limits the behaviour of reference types in \CFA polymorphic functions, where a type @T@ can bind to a reference or non-reference type.
 This ambiguity prevents the type system treating reference types the same way as other types in many cases even if type variables could be bound to reference types.
-The reason is that \CFA uses a common \emph{object} trait (constructor, destructor and assignment operators) to handle passing dynamic concrete type arguments into polymorphic functions, and the reference types are handled differently in these contexts so they do not satisfy this common interface.
+The reason is that \CFA uses a common \emph{object trait}\label{p:objecttrait} (constructor, destructor and assignment operators) to handle passing dynamic concrete type arguments into polymorphic functions, and the reference types are handled differently in these contexts so they do not satisfy this common interface.
 
 Moreover, there is also some discrepancy in how the reference types are treated in initialization and assignment expressions.
-For example, in line 3 of the previous example code:
+For example, in line 3 of the previous example code \see{\VPageref{p:refexamples}}:
 \begin{cfa}
 int @&@ r1 = x,  @&&@ r2 = r1,   @&&&@ r3 = r2;	$\C{// references to x}$
@@ -105,5 +108,5 @@
 For lines 6 and 9 of the previous example code:
 \begin{cfa}
- r1 =  3;	r2 = 3;		r3 = 3;				$\C{// change x: implicit dereference *r1, **r2, ***r3}$
+ r1 =  3;	r2 = 3;   r3 = 3;				$\C{// change x: implicit dereference *r1, **r2, ***r3}$
 @&@r3 = @&@y; @&&@r3 = @&&@r4;				$\C{// change r1, r2}$
 \end{cfa}
@@ -113,5 +116,5 @@
 Finally, there is an annoying issue (although purely syntactic) for setting a mutable reference to a specific address like null, @int & r1 = *0p@, which looks like dereferencing a null pointer.
 Here, the expression is rewritten as @int & r1 = &(*0p)@, like the variable dereference of @x@ above.
-However, the implicit @&@ needs to be cancelled for an address, which is done with the @*@, i.e., @&*@ cancel each other, giving @0p@.
+However, the implicit @&@ needs to be cancelled for an address, which is done with the @*@, \ie @&*@ cancel each other, giving @0p@.
 Therefore, the dereferencing operation does not actually happen and the expression is translated into directly initializing the reference variable with the address.
 Note, the same explicit reference is used in \CC to set a reference variable to null.
@@ -123,19 +126,19 @@
 When generic types were introduced to \CFA~\cite{Moss19}, some thought was given to allow reference types as type arguments.
 \begin{cfa}
-forall( T )
-struct vector { T t; };
+forall( T ) struct vector { T t; }; $\C{// generic type}$
 vector( int @&@ ) vec; $\C{// vector of references to ints}$
 \end{cfa}
-While it is possible to write a reference type as the argument to a generic type, it is disallowed in assertion checking, if the generic type requires the object trait for the type argument (a fairly common use case).
+While it is possible to write a reference type as the argument to a generic type, it is disallowed in assertion checking if the generic type requires the object trait \see{\VPageref{p:objecttrait}} for the type argument (a fairly common use case).
 Even if the object trait can be made optional, the current type system often misbehaves by adding undesirable auto-dereference on the referenced-to value rather than the reference variable itself, as intended.
 Some tweaks are necessary to accommodate reference types in polymorphic contexts and it is unclear what can or cannot be achieved.
-Currently, there are contexts where \CFA programmer must use pointer types, giving up the benefits of auto-dereference operations and better syntax from reference types.
+Currently, there are contexts where a \CFA programmer must use pointer types, giving up the benefits of auto-dereference operations and better syntax with reference types.
 
 
 \section{Tuple Types}
 
-The addition of tuple types to \CFA can be traced back to the original design by David Till in \mbox{K-W C}~\cite{Till89,Buhr94a}, a predecessor project of \CFA.
+The addition of tuples to \CFA can be traced back to the original design by David Till in \mbox{K-W C}~\cite{Till89,Buhr94a}, a predecessor project of \CFA.
 The primary purpose of tuples is to eliminate output parameters or creating an aggregate type to return multiple values from a function, called a multiple-value-returning (MVR) function.
-The following examples shows the two techniques for a function returning three values.
+Traditionally, returning multiple values is accomplished via (in/)output parameters or by packing the results into a structure.
+The following examples show these two techniques for a function returning three values.
 \begin{cquote}
 \begin{tabular}{@{}l@{\hspace{20pt}}l@{}}
@@ -155,18 +158,18 @@
 \end{tabular}
 \end{cquote}
-where K-W C allows direct return of multiple values.
+K-W C allows direct return of multiple values into a tuple.
 \begin{cfa}
 @[int, int, int]@ foo( int p2, int p3 );
 @[x, y, z]@ = foo( y, z );  // return 3 values into a tuple
 \end{cfa}
-Along with simplifying returning multiple values, tuples can be extended to simplify a number of other common context that normally require multiple statements and/or additional declarations, all of which reduces coding time and errors.
+Along with making returning multiple values a first-class feature, tuples were extended to simplify a number of other common contexts that normally require multiple statements and/or additional declarations, all of which reduce coding time and errors.
 \begin{cfa}
 [x, y, z] = 3; $\C[2in]{// x = 3; y = 3; z = 3, where types are different}$
 [x, y] = [y, x]; $\C{// int tmp = x; x = y; y = tmp;}$
 void bar( int, int, int );
-@bar@( foo( 3, 4 ) ); $\C{// int t0, t1, t2; [t0, t1, t2] = foo( 3, 4 ); bar( t0, t1, t2 );}$
+bar( foo( 3, 4 ) ); $\C{// int t0, t1, t2; [t0, t1, t2] = foo( 3, 4 ); bar( t0, t1, t2 );}$
 x = foo( 3, 4 )@.1@; $\C{//  int t0, t1, t2; [t0, t1, t2] = foo( 3, 4 ); x = t1;}\CRT$
 \end{cfa}
-For the call to @bar@, the three results from @foo@ are \newterm{flattened} into individual arguments.
+For the call to @bar@, the three results (tuple value) from @foo@ are \newterm{flattened} into individual arguments.
 Flattening is how tuples interact with parameter and subscript lists, and with other tuples, \eg:
 \begin{cfa}
@@ -174,11 +177,13 @@
 [ x, y, z, a, b, c] = [2, 3, 4, foo( 3, 4) ]  $\C{// flattened, where foo results are t0, t1, t2}$
 \end{cfa}
-
-Note, the \CFA type-system supports complex composition of tuples and C type conversions using a costing scheme giving lower cost to widening conversions that do not truncate a value.
+Note, in most cases, a tuple is just compile-time syntactic-sugar for a number of individual assignment statements and possibly temporary variables.
+Only when returning a tuple from a function is there the notion of a tuple value.
+
+Overloading in the \CFA type-system must support complex composition of tuples and C type conversions using a costing scheme giving lower cost to widening conversions that do not truncate a value.
 \begin{cfa}
 [ int, int ] foo$\(_1\)$( int );			$\C{// overloaded foo functions}$
 [ double ] foo$\(_2\)$( int );
 void bar( int, double, double );
-bar( foo( 3 ), foo( 3 ) );
+bar( @foo@( 3 ), @foo@( 3 ) );
 \end{cfa}
 The type resolver only has the tuple return types to resolve the call to @bar@ as the @foo@ parameters are identical, which involves unifying the flattened @foo@ return values with @bar@'s parameter list.
@@ -188,8 +193,86 @@
 The programming language Go provides a similar but simplier tuple mechanism, as it does not have overloaded functions.
 
-The K-W C tuples are merely syntactic sugar, as there is no mechanism to define a variable with tuple type.
-For the tuple-returning implementation, an implicit @struct@ type is created for the returning tuple and the values are extracted by field access operations.
-For the tuple-assignment implementation, the left-hand tuple expression is expanded into assignments of each component, creating temporary variables to avoid unexpected side effects.
-For example, a structure is returned from @foo@ and its fields are individually assigned to the left-hand variables, @x@, @y@, @z@.
+K-W C also supported tuple variables, but with a strong distinction between tuples and tuple values/variables.
+\begin{quote}
+Note that tuple variables are not themselves tuples.
+Tuple variables reference contiguous areas of storage, in which tuple values are stored;
+tuple variables and tuple values are entities which appear during program execution.
+Tuples, on the other hand, are compile-time constructs;
+they are lists of expressions, whose values may not be stored contiguously.~\cite[p.~30]{Till89}
+\end{quote}
+Fundamentally, a tuple value/variable is just a structure (contiguous areas) with an alternate (tuple) interface.
+A tuple value/variable is assignable (like structures), its fields can be accessed using position rather than name qualification, and it can interact with regular tuples.
+\begin{cfa}
+[ int, int, int ] t1, t2;
+t1 = t2;  			$\C{// tuple assignment}$
+t1@.1@ = t2@.0@;  	$\C{// position qualification}$
+int x, y, z;
+t1 = [ x, y, z ];  	$\C{// interact with regular tuples}$
+[ x, y, z ] = t1;
+bar( t2 );			$\C{// bar defined above}$
+\end{cfa}
+\VRef[Figure]{f:Nesting} shows the difference in nesting of structures and tuples.
+The left \CC nested-structure is named so it is not flattened.
+The middle C/\CC nested-structure is unnamed and flattened, causing an error because @i@ and @j@ are duplicate names.
+The right \CFA nested tuple cannot be named and is flattened.
+C allows named nested-structures, but they have issues \see{\VRef{s:inlineSubstructure}}.
+Note, it is common in C to have an unnamed @union@ so its fields do not require qualification.
+
+\begin{figure}
+\setlength{\tabcolsep}{15pt}
+\begin{tabular}{@{}ll@{\hspace{90pt}}l@{}}
+\multicolumn{1}{c}{\CC} & \multicolumn{1}{c}{C/\CC} & \multicolumn{1}{c}{tuple} \\
+\begin{cfa}
+struct S {
+	struct @T@ { // not flattened
+		int i, j;
+	};
+	int i, j;
+};
+\end{cfa}
+&
+\begin{cfa}
+struct S2 {
+	struct ${\color{red}/* unnamed */}$ { // flatten
+		int i, j;
+	};
+	int i, j;
+};
+\end{cfa}
+&
+\begin{cfa}
+[
+	[ // flatten
+		1, 2
+	],
+	1, 2
+]
+\end{cfa}
+\end{tabular}
+\caption{Nesting}
+\label{f:Nesting}
+\end{figure}
+
+The primary issues for tuples in the \CFA type system are polymorphism and conversions.
+Specifically, does it make sense to have a generic (polymorphic) tuple type, as is possible for a structure?
+\begin{cfa}
+forall( T, S ) [ T, S ] GT; // polymorphic tuple type
+GT( int, char ) @gt@;
+GT( int, double ) @gt@;
+@gt@ = [ 3, 'a' ];  // select correct gt
+@gt@ = [ 3, 3.5 ];
+\end{cfa}
+and what is the cost model for C conversions across multiple values?
+\begin{cfa}
+gt = [ 'a', 3L ];  // select correct gt
+\end{cfa}
+
+
+\section{Tuple Implementation}
+
+As noted, traditional languages manipulate multiple values by in/out parameters and/or structures.
+K-W C adopted the structure for tuple values or variables, and as needed, the fields are extracted by field access operations.
+As well, for the tuple-assignment implementation, the left-hand tuple expression is expanded into assignments of each component, creating temporary variables to avoid unexpected side effects.
+For example, the tuple value returned from @foo@ is a structure, and its fields are individually assigned to a left-hand tuple, @x@, @y@, @z@, or copied directly into a corresponding tuple variable.
 
 In the second implementation of \CFA tuples by Rodolfo Gabriel Esteves~\cite{Esteves04}, a different strategy is taken to handle MVR functions.
@@ -203,13 +286,13 @@
 [x, y] = gives_two();
 \end{cfa}
-The K-W C implementation translates the program to:
+The Till K-W C implementation translates the program to:
 \begin{cfa}
 struct _tuple2 { int _0; int _1; }
-struct _tuple2 gives_two();
+struct _tuple2 gives_two() { ... struct _tuple2 ret = { r1, r2 }; return ret; }
 int x, y;
 struct _tuple2 _tmp = gives_two();
 x = _tmp._0; y = _tmp._1;
 \end{cfa}
-While the Rodolfo implementation translates it to:
+while the Rodolfo implementation translates it to:
 \begin{cfa}
 void gives_two( int * r1, int * r2 ) { ... *r1 = ...; *r2 = ...; return; }
@@ -218,5 +301,5 @@
 \end{cfa}
 and inside the body of the function @gives_two@, the return statement is rewritten as assignments into the passed-in argument addresses.
-This implementation looks more concise, and in the case of returning values having nontrivial types (\eg aggregates), this implementation saves unnecessary copying.
+This implementation looks more concise, and in the case of returning values having nontrivial types, \eg aggregates, this implementation saves unnecessary copying.
 For example,
 \begin{cfa}
@@ -234,7 +317,8 @@
 The downside is indirection within @gives_two@ to access values, unless values get hoisted into registers for some period of time, which is common.
 
-Interestingly, in the third implementation of \CFA tuples by Robert Schluntz~\cite[\S~3]{Schluntz17}, the tuple type reverts back to structure based, where it remains in the current version of the cfa-cc translator.
-The reason for the reversion was to make tuples become first-class types in \CFA, \ie allow tuple variables.
+Interestingly, in the third implementation of \CFA tuples by Robert Schluntz~\cite[\S~3]{Schluntz17}, the MVR functions revert to being structure-based, which remains in the current version of \CFA.
+The reason for the reversion was to have a uniform approach for tuple values/variables, making tuples first-class types in \CFA, \ie allowing tuples with corresponding tuple variables.
 This extension was possible, because in parallel with Schluntz's work, generic types were being added independently by Moss~\cite{Moss19}, and the tuple variables leveraged the same implementation techniques as the generic variables.
+\PAB{I'm not sure about the connection here. Do you have an example of what you mean?}
 
 However, after experience gained building the \CFA runtime system, making tuple-types first-class seems to add little benefit.
@@ -277,9 +361,9 @@
 
 Finally, a type-safe variadic argument signature was added by Robert Schluntz~\cite[\S~4.1.2]{Schluntz17} using @forall@ and a new tuple parameter-type, denoted by the keyword @ttype @ in Schluntz's implementation, but changed to the ellipsis syntax similar to \CC's template parameter pack.
-For C variadics, the number and types of the arguments must be conveyed in some way, \eg @printf@ uses a format string indicating the number and types of the arguments.
+For C variadics, \eg @va_list@, the number and types of the arguments must be conveyed in some way, \eg @printf@ uses a format string indicating the number and types of the arguments.
 \VRef[Figure]{f:CVariadicMaxFunction} shows an $N$ argument @maxd@ function using the C untyped @va_list@ interface.
 In the example, the first argument is the number of following arguments, and the following arguments are assumed to be @double@;
 looping is used to traverse the argument pack from left to right.
-The @va_list@ interface is walking up (by address) the stack looking at the arguments pushed by the caller.
+The @va_list@ interface is walking up the stack (by address) looking at the arguments pushed by the caller.
 (Magic knowledge is needed for arguments pushed using registers.)
 
@@ -415,6 +499,7 @@
 
 \section{\lstinline{inline} Substructure}
-
-C allows an anonymous aggregate type (@struct@ or @union@) to be embedded (nested) within another one, \eg a tagged union.
+\label{s:inlineSubstructure}
+
+As mentioned \see{\VRef[Figure]{f:Nesting}}, C allows an anonymous aggregate type (@struct@ or @union@) to be embedded (nested) within another one, \eg a tagged union.
 \begin{cfa}
 struct S {
@@ -448,5 +533,5 @@
 
 As an aside, C nested \emph{named} aggregates behave in a (mysterious) way because the nesting is allowed but there is no ability to use qualification to access an inner type, like the \CC type operator `@::@'.
-In fact, all named nested aggregates are hoisted to global scope, regardless of the nesting depth.
+\emph{In fact, all named nested aggregates are hoisted to global scope, regardless of the nesting depth.}
 \begin{cquote}
 \begin{tabular}{@{}l@{\hspace{35pt}}l@{}}
@@ -498,18 +583,5 @@
 struct S s;  @s::T@.i;  @s::U@.k;
 \end{cfa}
-
-As an aside, \CFA also provides a backwards compatible \CC nested-type.
-\begin{cfa}
-struct S {
-	@auto@ struct T {
-		int i, j;
-	};
-	@auto@ struct U {
-		int k, l;
-	};
-};
-\end{cfa}
-The keyword @auto@ denotes a local (scoped) declaration, and here, it implies a local (scoped) type, using dot as the type qualifier, \eg @S.T t@.
-Alternatively, \CFA could adopt the \CC non-compatible change for nested types, since it may have already forced certain coding changes in C libraries that must be parsed by \CC.
+\CFA chose to adopt the \CC non-compatible change for nested types, since \CC's change has already forced certain coding changes in C libraries that must be parsed by \CC.
 
 % https://gcc.gnu.org/onlinedocs/gcc/Unnamed-Fields.html
@@ -531,5 +603,5 @@
 } s;
 \end{cfa}
-Note, the position of the substructure is normally unimportant.
+Note, the position of the substructure is normally unimportant, unless there is some form of memory or @union@ overlay.
 Like the anonymous nested types, the aggregate field names are hoisted into @struct S@, so there is direct access, \eg @s.x@ and @s.i@.
 However, like the implicit C hoisting of nested structures, the field names must be unique and the type names are now at a different scope level, unlike type nesting in \CC.
Index: doc/theses/fangren_yu_MMath/intro.tex
===================================================================
--- doc/theses/fangren_yu_MMath/intro.tex	(revision b723b630250eb742c4fe13ca63afabe6f0ad2e3c)
+++ doc/theses/fangren_yu_MMath/intro.tex	(revision c82dad4e91592787d98be7b164b82f68e16bb70d)
@@ -34,5 +34,5 @@
 Virtually all programming languages overload the arithmetic operators across the basic computational types using the number and type of parameters and returns.
 Like \CC, \CFA also allows these operators to be overloaded with user-defined types.
-The syntax for operator names uses the @'?'@ character to denote a function parameter, \eg prefix, postfix, and infix increment operators: @++?@, @?++@, and @?+?@.
+The syntax for operator names uses the @'?'@ character to denote a parameter, \eg unary operators @?++@ and @++?@, and binary operator @?+?@.
 Here, a user-defined type is extended with an addition operation with the same syntax as builtin types.
 \begin{cfa}
@@ -41,8 +41,8 @@
 S s1, s2;
 s1 = s1 @+@ s2;		 	$\C[1.75in]{// infix call}$
-s1 = @?+?@( s1, s2 );	$\C{// direct call using operator name}\CRT$
-\end{cfa}
-The type system examines each call size and selects the best matching overloaded function based on the number and types of the arguments.
-If there are mixed-mode operands, @2 + 3.5@, the \CFA type system, like C/\CC, attempts (safe) conversions, converting one or more of the argument type(s) to the parameter type(s).
+s1 = @?+?@( s1, s2 );	$\C{// direct call}\CRT$
+\end{cfa}
+The type system examines each call site and selects the best matching overloaded function based on the number and types of arguments.
+If there are mixed-mode operands, @2 + 3.5@, the type system, like in C/\CC, attempts (safe) conversions, converting the argument type(s) to the parameter type(s).
 Conversions are necessary because the hardware rarely supports mix-mode operations, so both operands must be the same type.
 Note, without implicit conversions, programmers must write an exponential number of functions covering all possible exact-match cases among all possible types.
@@ -113,4 +113,19 @@
 \section{Type Inferencing}
 
+Every variable has a type, but the association between the two can occur in different ways:
+at the point where the variable comes into existence (declaration) and/or on each assignment to the variable.
+\begin{cfa}
+double x;				$\C{// type only}$
+float y = 3.1D;			$\C{// type and initialization}$
+auto z = y;				$\C{// initialization only}$
+z = "abc";				$\C{// assignment}$
+\end{cfa}
+For type-and-initialization, the specified and initialization types may not agree.
+Similarly, for assignment, the current variable and expression types may not agree.
+For type-only, the programmer specifies the initial type, which remains fixed for the variable's lifetime in statically typed languages.
+In the other cases, the compiler may select the type by melding programmer and context information.
+When the compiler participates in type selection, it is called \newterm{type inferencing}.
+Note, type inferencing is different from type conversion: type inferencing \emph{discovers} a variable's type before setting its value, whereas conversion has two typed values and performs a (possibly lossy) action to convert one value to the type of the other variable.
+
 One of the first and powerful type-inferencing system is Hindley--Milner~\cite{Damas82}.
 Here, the type resolver starts with the types of the program constants used for initialization and these constant types flow throughout the program, setting all variable and expression types.
@@ -132,5 +147,5 @@
 \end{cfa}
 In both overloads of @f@, the type system works from the constant initializations inwards and/or outwards to determine the types of all variables and functions.
-Note, like template meta-programming, there can be a new function generated for the second @f@ depending on the types of the arguments, assuming these types are meaningful in the body of the @f@.
+Note, like template meta-programming, there could be a new function generated for the second @f@ depending on the types of the arguments, assuming these types are meaningful in the body of the @f@.
 Inferring type constraints, by analysing the body of @f@ is possible, and these constraints must be satisfied at each call site by the argument types;
 in this case, parametric polymorphism can allow separate compilation.
@@ -170,6 +185,6 @@
 Not determining or writing long generic types, \eg, given deeply nested generic types.
 \begin{cfa}
-typedef T1(int).T2(float).T3(char).T ST;  $\C{// \CFA nested type declaration}$
-ST x, y, x;
+typedef T1(int).T2(float).T3(char).T @ST@;  $\C{// \CFA nested type declaration}$
+@ST@ x, y, x;
 \end{cfa}
 This issue is exaggerated with \CC templates, where type names are 100s of characters long, resulting in unreadable error messages.
@@ -193,30 +208,13 @@
 \section{Type-Inferencing Issues}
 
-Each kind of type-inferencing system has its own set of issues that flow onto the programmer in the form of restrictions and/or confusions.
-\begin{enumerate}[leftmargin=*]
-\item
-There can be large blocks of code where all declarations are @auto@.
-As a result, understanding and changing the code becomes almost impossible.
-Types provide important clues as to the behaviour of the code, and correspondingly to correctly change or add new code.
-In these cases, a programmer is forced to re-engineer types, which is fragile, or rely on a fancy IDE that can re-engineer types.
-\item
-The problem of unknown types becomes acute when the concrete type must be used, \eg, given:
-\begin{cfa}
-auto x = @...@
-\end{cfa}
-and the need to write a routine to compute using @x@
-\begin{cfa}
-void rtn( @...@ parm );
-rtn( x );
-\end{cfa}
-A programmer must re-engineer the type of @x@'s initialization expression, reconstructing the possibly long generic type-name.
-In this situation, having the type name or its short alias is essential.
-\item
-There is the conundrum in type inferencing of when to \emph{brand} a type.
+Each kind of type-inferencing system has its own set of issues that flow onto the programmer in the form of conveniences, restrictions, or confusions.
+
+A convenience is having the compiler use its overarching program knowledge to select the best type for each variable based on some notion of \emph{best}, which simplifies the programming experience.
+
+A restriction is the conundrum in type inferencing of when to \emph{brand} a type.
 That is, when is the type of the variable/function more important than the type of its initialization expression.
 For example, if a change is made in an initialization expression, it can cause cascading type changes and/or errors.
-At some point, a variable's type needs to remain constant and the expression needs to be modified or in error when it changes.
-Often type-inferencing systems allow \newterm{branding} a variable or function type;
-now the complier can report a mismatch on the constant.
+At some point, a variable's type needs to remain constant and the initializing expression needs to be modified or in error when it changes.
+Often type-inferencing systems allow restricting (\newterm{branding}) a variable or function type, so the compiler can report a mismatch with the constant initialization.
 \begin{cfa}
 void f( @int@ x, @int@ y ) {  // brand function prototype
@@ -228,12 +226,27 @@
 \end{cfa}
 In Haskell, it is common for programmers to brand (type) function parameters.
-\end{enumerate}
-
-\CFA's type system is trying to prevent type-resolution mistakes by relying heavily on the type of the left-hand side of assignment to pinpoint the right types for an expression computation.
+
+A confusion is large blocks of code where all declarations are @auto@.
+As a result, understanding and changing the code becomes almost impossible.
+Types provide important clues as to the behaviour of the code, and correspondingly to correctly change or add new code.
+In these cases, a programmer is forced to re-engineer types, which is fragile, or rely on a fancy IDE that can re-engineer types.
+For example, given:
+\begin{cfa}
+auto x = @...@
+\end{cfa}
+and the need to write a routine to compute using @x@
+\begin{cfa}
+void rtn( @type of x@ parm );
+rtn( x );
+\end{cfa}
+A programmer must re-engineer the type of @x@'s initialization expression, reconstructing the possibly long generic type-name.
+In this situation, having the type name or its short alias is essential.
+
+\CFA's type system tries to prevent type-resolution mistakes by relying heavily on the type of the left-hand side of assignment to pinpoint the right types within an expression.
 Type inferencing defeats this goal because there is no left-hand type.
-Fundamentally, type inferencing tries to magic away types from the programmer.
+Fundamentally, type inferencing tries to magic away variable types from the programmer.
 However, this results in lazy programming with the potential for poor performance and safety concerns.
-Types are as important as control-flow, and should not be masked, even if it requires the programmer to think!
-A similar example is garbage collection, where storage management is masked, resulting in poor program design and performance.
+Types are as important as control-flow in writing a good program, and should not be masked, even if it requires the programmer to think!
+A similar issue is garbage collection, where storage management is masked, resulting in poor program design and performance.
 The entire area of Computer-Science data-structures is obsessed with time and space, and that obsession should continue into regular programming.
 Understanding space and time issues are an essential part of the programming craft.
