Index: doc/theses/fangren_yu_MMath/intro.tex
===================================================================
--- doc/theses/fangren_yu_MMath/intro.tex	(revision 8c3516cd9c2c6334491d550e4db2856fc9f71fc2)
+++ doc/theses/fangren_yu_MMath/intro.tex	(revision 0f070a45d2767ae371eb4be4dd8bac7a57fd2846)
@@ -6,7 +6,7 @@
 \begin{cfa}
 T sum( T a[$\,$], size_t size ) {
-	@T@ total = { @0@ };  // size, 0 for type T
+	@T@ total = { @0@ };  $\C[1.75in]{// size, 0 for type T}$
 	for ( size_t i = 0; i < size; i += 1 )
-		total @+=@ a@[@i@]@; // + and subscript for T
+		total @+=@ a@[@i@]@; $\C{// + and subscript for T}\CRT$
 	return total;
 }
@@ -22,6 +22,6 @@
 All computers have multiple types because computer architects optimize the hardware around a few basic types with well defined (mathematical) operations: boolean, integral, floating-point, and occasionally strings.
 A programming language and its compiler present ways to declare types that ultimately map into the ones provided by the underlying hardware.
-These language types are thrust upon programmers, with their syntactic and semantic rules and restrictions.
-These rules are used to transform a language expression to a hardware expression.
+These language types are thrust upon programmers with their syntactic/semantic rules and restrictions.
+These rules are then used to transform a language expression to a hardware expression.
 Modern programming-languages allow user-defined types and generalize across multiple types using polymorphism.
 Type systems can be static, where each variable has a fixed type during execution and an expression's type is determined once at compile time, or dynamic, where each variable can change type during execution and so an expression's type is reconstructed on each evaluation.
@@ -31,11 +31,11 @@
 \section{Overloading}
 
-Overloading allows programmers to use the most meaningful names without fear of name clashes within a program or from external sources, like include files.
 \begin{quote}
 There are only two hard things in Computer Science: cache invalidation and \emph{naming things}. --- Phil Karlton
 \end{quote}
+Overloading allows programmers to use the most meaningful names without fear of name clashes within a program or from external sources, like include files.
 Experience from \CC and \CFA developers is that the type system implicitly and correctly disambiguates the majority of overloaded names, \ie it is rare to get an incorrect selection or ambiguity, even among hundreds of overloaded (variables and) functions.
 In many cases, a programmer has no idea there are name clashes, as they are silently resolved, simplifying the development process.
-Depending on the language, any ambiguous cases are resolved using some form of qualification and/or casting.
+Depending on the language, any ambiguous cases are resolved explicitly using some form of qualification and/or cast.
 
 
@@ -45,5 +45,5 @@
 Like \CC, \CFA maps operators to named functions and allows these operators to be overloaded with user-defined types.
 The syntax for operator names uses the @'?'@ character to denote a parameter, \eg left and right unary operators: @?++@ and @++?@, and binary operators @?+?@ and @?<=?@.
-Here, a user-defined type is extended with an addition operation with the same syntax as builtin types.
+Here, a user-defined type is extended with an addition operation with the same syntax as a builtin type.
 \begin{cfa}
 struct S { int i, j };
@@ -55,7 +55,9 @@
 The type system examines each call site and selects the best matching overloaded function based on the number and types of arguments.
 If there are mixed-mode operands, @2 + 3.5@, the type system attempts (safe) conversions, like in C/\CC, converting the argument type(s) to the parameter type(s).
-Conversions are necessary because the hardware rarely supports mix-mode operations, so both operands must be the same type.
-Note, without implicit conversions, programmers must write an exponential number of functions covering all possible exact-match cases among all possible types.
+Conversions are necessary because the hardware rarely supports mixed-mode operations, so both operands must be converted to a common type.
+Like overloading, the majority of mixed-mode conversions are silently resolved, simplifying the development process.
+Without implicit conversions, programmers must write an exponential number of functions covering all possible exact-match cases among all possible types.
 This approach does not match with programmer intuition and expectation, regardless of any \emph{safety} issues resulting from converted values.
+Depending on the language, mixed-mode conversions can be explicitly controlled using some form of cast.
 
 
@@ -81,6 +83,6 @@
 double d = f( 3 );		$\C{// select (2)}\CRT$
 \end{cfa}
-Alternatively, if the type system looks at the return type, there is an exact match for each call, which again matches with programmer intuition and expectation.
-This capability can be taken to the extreme, where there are no function parameters.
+Alternatively, if the type system uses the return type, there is an exact match for each call, which again matches with programmer intuition and expectation.
+This capability can be taken to the extreme, where the only differentiating factor is the return type.
 \begin{cfa}
 int random( void );		$\C[2in]{// (1); overloaded on return type}$
@@ -90,5 +92,9 @@
 \end{cfa}
 Again, there is an exact match for each call.
-If there is no exact match, a set of minimal, safe conversions can be added to find a best match, as for operator overloading.
+As for operator overloading, if there is no exact match, a minimal implicit conversion can be added to find a best match.
+\begin{cfa}
+short int si = random();	$\C[2in]{// select (1), unsafe}$
+long double ld = random();	$\C{// select (2), safe}\CRT$
+\end{cfa}
 
 
@@ -96,5 +102,5 @@
 
 Unlike most programming languages, \CFA has variable overloading within a scope, along with shadow overloading in nested scopes.
-(Shadow overloading is also possible for functions, if a language supports nested function declarations, \eg \CC named, nested, lambda functions.)
+Shadow overloading is also possible for functions in languages supporting nested-function declarations, \eg \CC named, nested, and lambda functions.
 \begin{cfa}
 void foo( double d );
@@ -109,9 +115,9 @@
 \end{cfa}
 It is interesting that shadow overloading is considered a normal programming-language feature with only slight software-engineering problems.
-However, variable overloading within a scope is often considered extremely dangerous, without any evidence to corroborate this claim.
+However, variable overloading within a scope is considered extremely dangerous, without any evidence to corroborate this claim.
 In contrast, function overloading in \CC occurs silently within the global scope from @#include@ files all the time without problems.
 
-In \CFA, the type system simply treats overloaded variables as an overloaded function returning a value with no parameters.
-Hence, no significant effort is required to support this feature by leveraging the return type to disambiguate as variables have no parameters.
+In \CFA, the type system simply treats an overloaded variable as an overloaded function returning a value with no parameters.
+Hence, no effort is required to support this feature, as the mechanism for differentiating among overloaded functions with no parameters already exists.
 \begin{cfa}
 int MAX = 2147483647;	$\C[2in]{// (1); overloaded on return type}$
@@ -125,5 +131,5 @@
 The result is a significant reduction in names to access typed constants.
 
-As an aside, C has a separate namespace for type and variables allowing overloading between the namespaces, using @struct@ (qualification) to disambiguate.
+As an aside, C has a separate namespace for types and variables allowing overloading between the namespaces, using @struct@ (qualification) to disambiguate.
 \begin{cfa}
 void S() {
@@ -133,4 +139,5 @@
 }
 \end{cfa}
+Here, the name @S@ is an aggregate type and field, and a variable and parameter of type @S@.
 
 
@@ -145,5 +152,5 @@
 for ( ; x; --x )   =>    for ( ; x @!= 0@; x @-= 1@ )
 \end{cfa}
-To generalize this feature, both constants are given types @zero_t@ and @one_t@ in \CFA, which allows overloading various operations for new types that seamlessly work with the special @0@ and @1@ contexts.
+To generalize this feature, both constants are given types @zero_t@ and @one_t@ in \CFA, which allows overloading various operations for new types that seamlessly work within the special @0@ and @1@ contexts.
 The types @zero_t@ and @one_t@ have special builtin implicit conversions to the various integral types, and a conversion to pointer types for @0@, which allows standard C code involving @0@ and @1@ to work.
 \begin{cfa}
@@ -176,11 +183,11 @@
 \end{cfa}
 For type-only, the programmer specifies the initial type, which remains fixed for the variable's lifetime in statically typed languages.
-For type-and-initialization, the specified and initialization types may not agree.
+For type-and-initialization, the specified and initialization types may not agree, requiring an implicit/explicit conversion.
 For initialization-only, the compiler may select the type by melding programmer and context information.
 When the compiler participates in type selection, it is called \newterm{type inferencing}.
-Note, type inferencing is different from type conversion: type inferencing \emph{discovers} a variable's type before setting its value, whereas conversion has two typed values and performs a (possibly lossy) action to convert one value to the type of the other variable.
+Note, type inferencing is different from type conversion: type inferencing \emph{discovers} a variable's type before setting its value, whereas conversion has two typed variables and performs a (possibly lossy) value conversion from one type to the other.
 Finally, for assignment, the current variable and expression types may not agree.
 Discovering a variable or function type is complex and has limitations.
-The following covers these issues, and why some schemes are not amenable with the \CFA type system.
+The following covers these issues, and why this scheme is not amenable to the \CFA type system.
 
-One of the first and powerful type-inferencing system is Hindley--Milner~\cite{Damas82}.
+One of the first and most powerful type-inferencing systems is Hindley--Milner~\cite{Damas82}.
@@ -203,5 +210,5 @@
 \end{cfa}
 In both overloads of @f@, the type system works from the constant initializations inwards and/or outwards to determine the types of all variables and functions.
-Note, like template meta programming, there could be a new function generated for the second @f@ depending on the types of the arguments, assuming these types are meaningful in the body of @f@.
+Like template meta-programming, there can be a new function generated for the second @f@ depending on the types of the arguments, assuming these types are meaningful in the body of @f@.
 Inferring type constraints, by analysing the body of @f@ is possible, and these constraints must be satisfied at each call site by the argument types;
 in this case, parametric polymorphism can allow separate compilation.
@@ -246,5 +253,5 @@
 This issue is exaggerated with \CC templates, where type names are 100s of characters long, resulting in unreadable error messages.
 \item
-Ensuring the type of secondary variables, match a primary variable(s).
+Ensuring the types of secondary variables match a primary variable.
 \begin{cfa}
 int x; $\C{// primary variable}$
@@ -252,4 +259,5 @@
 \end{cfa}
 If the type of @x@ changes, the type of the secondary variables correspondingly updates.
+There can be strong software-engineering reasons for binding the types of these variables.
 \end{itemize}
 Note, the use of @typeof@ is more restrictive, and possibly safer, than general type-inferencing.
@@ -269,7 +277,7 @@
 
 A restriction is the conundrum in type inferencing of when to \emph{brand} a type.
-That is, when is the type of the variable/function more important than the type of its initialization expression.
-For example, if a change is made in an initialization expression, it can cause cascading type changes and/or errors.
-At some point, a variable's type needs to remain constant and the initializing expression needs to be modified or in error when it changes.
+That is, when is the type of the variable/function more important than the type of its initialization expression(s)?
+For example, if a change is made in an initialization expression, it can cascade type changes, producing many other changes and/or errors.
+At some point, a variable's type needs to remain constant and the initializing expression needs to be modified or be in error when it changes.
-Often type-inferencing systems allow restricting (\newterm{branding}) a variable or function type, so the complier can report a mismatch with the constant initialization.
+Often, type-inferencing systems allow restricting (\newterm{branding}) a variable or function type, so the compiler can report a mismatch with the constant initialization.
 \begin{cfa}
@@ -283,5 +291,5 @@
 In Haskell, it is common for programmers to brand (type) function parameters.
 
-A confusion is large blocks of code where all declarations are @auto@, as is now common in \CC.
+A source of confusion is blocks of code where all declarations are @auto@, as is now common in \CC.
 As a result, understanding and changing the code becomes almost impossible.
 Types provide important clues as to the behaviour of the code, and correspondingly to correctly change or add new code.
@@ -299,5 +307,5 @@
 In this situation, having the type name or its short alias is essential.
 
-The \CFA's type system tries to prevent type-resolution mistakes by relying heavily on the type of the left-hand side of assignment to pinpoint the right types within an expression.
+\CFA's type system tries to prevent type-resolution mistakes by relying heavily on the type of the left-hand side of assignment to pinpoint the right types within an expression.
 Type inferencing defeats this goal because there is no left-hand type.
 Fundamentally, type inferencing tries to magic away variable types from the programmer.
@@ -308,6 +316,6 @@
 The entire area of Computer-Science data-structures is obsessed with time and space, and that obsession should continue into regular programming.
 Understanding space and time issues is an essential part of the programming craft.
-Given @typedef@ and @typeof@ in \CFA, and the strong desire to use the left-hand type in resolution, implicit type-inferencing is unsupported.
-Should a significant need arise, this feature can be revisited.
+Given @typedef@ and @typeof@ in \CFA, and the strong desire to use the left-hand type in resolution, the decision was made not to support implicit type-inferencing in the type system.
+Should a significant need arise, this decision can be revisited.
 
 
@@ -334,10 +342,10 @@
 
 To constrain polymorphic types, \CFA uses \newterm{type assertions}~\cite[pp.~37-44]{Alphard} to provide further type information, where type assertions may be variable or function declarations that depend on a polymorphic type variable.
-For example, the function @twice@ works for any type @T@ with a matching addition operator.
+Here, the function @twice@ works for any type @T@ with a matching addition operator.
 \begin{cfa}
 forall( T @| { T ?+?(T, T); }@ ) T twice( T x ) { return x @+@ x; }
 int val = twice( twice( 3 ) );  $\C{// val == 12}$
 \end{cfa}
-For example. parametric polymorphism and assertions occurs in existing type-unsafe (@void *@) C @qsort@ to sort an array.
+Parametric polymorphism and assertions occur in existing type-unsafe (@void *@) C functions, like @qsort@ for sorting an array of values of unknown type.
 \begin{cfa}
 void qsort( void * base, size_t nmemb, size_t size, int (*cmp)( const void *, const void * ) );
@@ -386,11 +394,12 @@
 }
 // select type and size from left-hand side
-int * ip = malloc();  double * dp = malloc();  $@$[aligned(64)] struct S {...} * sp = malloc();
+int * ip = malloc();  double * dp = malloc();  [[aligned(64)]] struct S {...} * sp = malloc();
 \end{cfa}
 The @sized@ assertion passes size and alignment as a data object has no implicit assertions.
 Both assertions are used in @malloc@ via @sizeof@ and @_Alignof@.
-
-These mechanism are used to construct type-safe wrapper-libraries condensing hundreds of existing C functions into tens of \CFA overloaded functions.
-Hence, existing C legacy code is leveraged as much as possible;
+In practice, this polymorphic @malloc@ is unwrapped by the C compiler and the @if@ statement is elided, producing a type-safe call to @malloc@ or @memalign@.
+
+This mechanism is used to construct type-safe wrapper-libraries condensing hundreds of existing C functions into tens of \CFA overloaded functions.
+Here, existing C legacy code is leveraged as much as possible;
 other programming languages must build supporting libraries from scratch, even in \CC.
 
@@ -422,7 +431,8 @@
 \end{tabular}
 \end{cquote}
-Traits are simply flatten at the use point, as if written in full by the programmer, where traits often contain overlapping assertions, \eg operator @+@.
+Traits are implemented by flattening them at use points, as if written in full by the programmer.
+Flattening often results in overlapping assertions, \eg operator @+@.
 Hence, trait names play no part in type equivalence.
-Note, the type @T@ is an object type, and hence, has the implicit internal trait @otype@.
+In the example, type @T@ is an object type, and hence, has the implicit internal trait @otype@.
 \begin{cfa}
 trait otype( T & | sized(T) ) {
@@ -433,5 +443,5 @@
 };
 \end{cfa}
-The implicit routines are used by the @sumable@ operator @?+=?@ for the right side of @?+=?@ and return.
+These implicit routines are used by the @sumable@ operator @?+=?@ for the right side of @?+=?@ and return.
 
 If the array type is not a builtin type, an extra type parameter and assertions are required, like subscripting.
@@ -445,5 +455,5 @@
 \begin{enumerate}[leftmargin=*]
 \item
-Write bespoke data structures for each context they are needed.
+Write bespoke data structures for each context.
 While this approach is flexible and supports integration with the C type checker and tooling, it is tedious and error prone, especially for more complex data structures.
 \item
@@ -452,10 +462,10 @@
 \item
 Use preprocessor macros, similar to \CC @templates@, to generate code that is both generic and type checked, but errors may be difficult to interpret.
-Furthermore, writing and using preprocessor macros is difficult and inflexible.
+Furthermore, writing and using complex preprocessor macros is difficult and inflexible.
 \end{enumerate}
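+
+As an illustration of the preprocessor-macro approach (a hedged sketch with hypothetical names, not taken from any library), a generic stack can be generated per element type:
+\begin{cfa}
+#define STACK( T ) \
+	struct stack_##T { T data[10]; size_t size; }; \
+	void push_##T( struct stack_##T * s, T v ) { s->data[s->size++] = v; }
+STACK( int )	$\C[2in]{// generates a stack type and push function for int}\CRT$
+\end{cfa}
+Errors in the macro body are only reported after expansion at each use of @STACK@, which is why such errors are difficult to interpret.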
 
 \CC, Java, and other languages use \newterm{generic types} to produce type-safe abstract data-types.
 \CFA generic types integrate efficiently and naturally with the existing polymorphic functions, while retaining backward compatibility with C and providing separate compilation.
-However, for known concrete parameters, the generic-type definition can be inlined, like \CC templates.
+For concrete parameters, the generic-type definition can be inlined, like \CC templates, if its definition appears in a header file (\eg @static inline@).
 
 A generic type can be declared by placing a @forall@ specifier on a @struct@ or @union@ declaration and instantiated using a parenthesized list of types after the type name.
@@ -484,10 +494,10 @@
 \end{cquote}
 \CFA generic types are \newterm{fixed} or \newterm{dynamic} sized.
-Fixed-size types have a fixed memory layout regardless of type parameters, whereas dynamic types vary in memory layout depending on their type parameters.
+Fixed-size types have a fixed memory layout regardless of type parameters, whereas dynamic types vary in memory layout depending on the type parameters.
 For example, the type variable @T *@ is fixed size and is represented by @void *@ in code generation;
 whereas, the type variable @T@ is dynamic and set at the point of instantiation.
 The difference between fixed and dynamic is the complexity and cost of field access.
 For fixed, field offsets are computed (known) at compile time and embedded as displacements in instructions.
-For dynamic, field offsets are computed at compile time at the call site, stored in an array of offset values, passed as a polymorphic parameter, and added to the structure address for each field dereference within a polymorphic routine.
+For dynamic, field offsets are compile-time computed at the call site, stored in an array of offset values, passed as a polymorphic parameter, and added to the structure address for each field dereference within a polymorphic routine.
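+As a hedged sketch (hypothetical generated code, not the compiler's actual output), a dynamic field access within a polymorphic routine becomes an offset-array lookup:
+\begin{cfa}
+// generated C for copying the second field of a dynamic generic type
+void copy_second( char * ret, char * obj, size_t offsets[], size_t size2 ) {
+	memcpy( ret, obj + offsets[1], size2 );	$\C[2in]{// runtime offset of second field}\CRT$
+}
+\end{cfa}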
 See~\cite[\S~3.2]{Moss19} for complete implementation details.
 
@@ -517,8 +527,16 @@
 \section{Contributions}
 
+The \CFA compiler's performance and type capabilities have been greatly improved through my development work.
 \begin{enumerate}
-\item The \CFA compiler performance and capability have been greatly improved through recent development. The compilation time of various \CFA library units and test programs have been reduced from the order of minutes down to 10-20 seconds, which made it possible to develop and test more complicated \CFA programs that utilize sophisticated type system features. The details of compiler optimization work are covered in a previous technical report.
-\item The thesis presents a systematic review of the new features that have been added to the \CFA language and its type system. Some of the more recent inclusions to \CFA such as tuples and generic structure types were not well tested when they were being developed, due to the limitation of compiler performance. Several issues coming from the interactions of various language features are identified and discussed in this thesis; some of them are now fully resolved, while others are given temporary fixes and need to be reworked in the future.
-\item Finally, this thesis provides constructive ideas of fixing the outstanding issues in \CFA language design and implementation, and gives a path for future improvements to \CFA language and compiler.
+\item
+The compilation time of various \CFA library units and test programs has been reduced by an order of magnitude, from minutes to seconds \see{\VRef[Table]{t:SelectedFileByCompilerBuild}}, which made it possible to develop and test more complicated \CFA programs that utilize sophisticated type system features.
+The details of compiler optimization work are covered in a previous technical report~\cite{Yu20}, which essentially forms part of this thesis.
+\item
+The thesis presents a systematic review of the new features added to the \CFA language and its type system.
+Some of the more recent inclusions to \CFA, such as tuples and generic structure types, were not well tested during development due to the limitation of compiler performance.
+Several issues coming from the interactions of various language features are identified and discussed in this thesis;
+some of them I have resolved, while others are given temporary fixes and need to be reworked in the future.
+\item
+Finally, this thesis provides constructive ideas for fixing a number of high-level issues in the \CFA language design and implementation, and gives a path for future improvements to the language and compiler.
 \end{enumerate}
 
