Index: doc/papers/general/Paper.tex
===================================================================
--- doc/papers/general/Paper.tex	(revision 7d94d805fda9a7adff66f41535cded2b01ce85e4)
+++ doc/papers/general/Paper.tex	(revision 10f814274819855875989c6dbf86fcb1e3ddf61d)
@@ -1081,15 +1081,15 @@
 
 % In object-oriented programming, there is an implicit first parameter, often names @self@ or @this@, which is elided.
-% In any programming language, some functions have a naturally close relationship with a particular data type. 
+% In any programming language, some functions have a naturally close relationship with a particular data type.
 % Object-oriented programming allows this close relationship to be codified in the language by making such functions \emph{class methods} of their related data type.
-% Class methods have certain privileges with respect to their associated data type, notably un-prefixed access to the fields of that data type. 
-% When writing C functions in an object-oriented style, this un-prefixed access is swiftly missed, as access to fields of a @Foo* f@ requires an extra three characters @f->@ every time, which disrupts coding flow and clutters the produced code. 
-% 
+% Class methods have certain privileges with respect to their associated data type, notably un-prefixed access to the fields of that data type.
+% When writing C functions in an object-oriented style, this un-prefixed access is swiftly missed, as access to fields of a @Foo* f@ requires an extra three characters @f->@ every time, which disrupts coding flow and clutters the produced code.
+%
 % \TODO{Fill out section. Be sure to mention arbitrary expressions in with-blocks, recent change driven by Thierry to prioritize field name over parameters.}
 
-\CFA provides a @with@ clause/statement (see Pascal~\cite[\S~4.F]{Pascal}) to elided aggregate qualification to fields by opening a scope containing field identifiers.
-Hence, the qualified fields become variables, and making it easier to optimizing field references in a block.
+\CFA provides a @with@ clause/statement (see Pascal~\cite[\S~4.F]{Pascal}) to elide aggregate qualification of fields by opening a scope containing the field identifiers.
+Hence, the qualified fields become variables, making it easier to optimize field references in a block.
 \begin{cfa}
-void f( S s ) `with s` {				$\C{// with clause}$
+void f( S s ) `with( s )` {				$\C{// with clause}$
 	c; i; d;							$\C{\color{red}// s.c, s.i, s.d}$
 }
@@ -1097,5 +1097,5 @@
 and the equivalence for object-style programming is:
 \begin{cfa}
-int mem( S & this ) `with this` {		$\C{// with clause}$
+int mem( S & this ) `with( this )` {		$\C{// with clause}$
 	c; i; d;							$\C{\color{red}// this.c, this.i, this.d}$
 }
@@ -1104,5 +1104,5 @@
 \begin{cfa}
 struct T { double m, n; };
-int mem( S & s, T & t ) `with s, t` {	$\C{// multiple aggregate parameters}$
+int mem( S & s, T & t ) `with( s, t )` {	$\C{// multiple aggregate parameters}$
 	c; i; d;							$\C{\color{red}// s.c, s.i, s.d}$
 	m; n;								$\C{\color{red}// t.m, t.n}$
@@ -1122,11 +1122,11 @@
 	struct S1 { ... } s1;
 	struct S2 { ... } s2;
-	`with s1` {						$\C{// with statement}$
+	`with( s1 )` {						$\C{// with statement}$
 		// access fields of s1 without qualification
-		`with s2` {					$\C{// nesting}$
+		`with( s2 )` {					$\C{// nesting}$
 			// access fields of s1 and s2 without qualification
 		}
 	}
-	`with s1, s2` {
+	`with( s1, s2 )` {
 		// access unambiguous fields of s1 and s2 without qualification
 	}
@@ -1139,5 +1139,5 @@
 struct S { int i; int j; double m; } a, c;
 struct T { int i; int k; int m; } b, c;
-`with a, b` {
+`with( a, b )` {
 	j + k;							$\C{// unambiguous, unique names define unique types}$
 	i;								$\C{// ambiguous, same name and type}$
@@ -1147,11 +1147,11 @@
 	m = 1;							$\C{// unambiguous, same name and context defines unique type}$
 }
-`with c` { ... }					$\C{// ambiguous, same name and no context}$
-`with (S)c` { ... }					$\C{// unambiguous, same name and cast defines unique type}$
+`with( c )` { ... }					$\C{// ambiguous, same name and no context}$
+`with( (S)c )` { ... }					$\C{// unambiguous, same name and cast defines unique type}$
 \end{cfa}
 
 The components in the @with@ clause
 
-  with a, b, c { ... }
+  with ( a, b, c ) { ... }
 
 serve two purposes: each component provides a type and an object. The type must be a
@@ -1182,7 +1182,7 @@
 \section{Declarations}
 
-It is important to the design team that \CFA subjectively ``feel like'' C to user programmers. 
-An important part of this subjective feel is maintaining C's procedural programming paradigm, as opposed to the object-oriented paradigm of other systems languages such as \CC and Rust. 
-Maintaining this procedural paradigm means that coding patterns that work in C will remain not only functional but idiomatic in \CFA, reducing the mental burden of retraining C programmers and switching between C and \CFA development. 
+It is important to the design team that \CFA subjectively ``feel like'' C to user programmers.
+An important part of this subjective feel is maintaining C's procedural programming paradigm, as opposed to the object-oriented paradigm of other systems languages such as \CC and Rust.
+Maintaining this procedural paradigm means that coding patterns that work in C will remain not only functional but idiomatic in \CFA, reducing the mental burden of retraining C programmers and switching between C and \CFA development.
 Nonetheless, some features of object-oriented languages are undeniably convenient, and the \CFA design team has attempted to adapt them to a procedural paradigm so as to incorporate their benefits into \CFA; two of these features are resource management and name scoping.
 
@@ -1193,9 +1193,9 @@
 \subsection{References}
 
-All variables in C have an \emph{address}, a \emph{value}, and a \emph{type}; at the position in the program's memory denoted by the address, there exists a sequence of bits (the value), with the length and semantic meaning of this bit sequence defined by the type. 
+All variables in C have an \emph{address}, a \emph{value}, and a \emph{type}; at the position in the program's memory denoted by the address, there exists a sequence of bits (the value), with the length and semantic meaning of this bit sequence defined by the type.
 The C type system does not always track the relationship between a value and its address; a value that does not have a corresponding address is called a \emph{rvalue} (for ``right-hand value''), while a value that does have an address is called a \emph{lvalue} (for ``left-hand value''); in @int x; x = 42;@ the variable expression @x@ on the left-hand-side of the assignment is a lvalue, while the constant expression @42@ on the right-hand-side of the assignment is a rvalue.
-Which address a value is located at is sometimes significant; the imperative programming paradigm of C relies on the mutation of values at specific addresses. 
+Which address a value is located at is sometimes significant; the imperative programming paradigm of C relies on the mutation of values at specific addresses.
 Within a lexical scope, lvalue expressions can be used in either their \emph{address interpretation} to determine where a mutated value should be stored or in their \emph{value interpretation} to refer to their stored value; in the assignment @x = y;@ within @{ int x, y = 7; x = y; }@, @x@ is used in its address interpretation, while @y@ is used in its value interpretation.
-Though this duality of interpretation is useful, C lacks a direct mechanism to pass lvalues between contexts, instead relying on \emph{pointer types} to serve a similar purpose. 
+Though this duality of interpretation is useful, C lacks a direct mechanism to pass lvalues between contexts, instead relying on \emph{pointer types} to serve a similar purpose.
 In C, for any type @T@ there is a pointer type @T*@, the value of which is the address of a value of type @T@; a pointer rvalue can be explicitly \emph{dereferenced} to the pointed-to lvalue with the dereference operator @*?@, while the rvalue representing the address of a lvalue can be obtained with the address-of operator @&?@.
 
@@ -1207,5 +1207,5 @@
 \end{cfa}
 
-Unfortunately, the dereference and address-of operators introduce a great deal of syntactic noise when dealing with pointed-to values rather than pointers, as well as the potential for subtle bugs. 
+Unfortunately, the dereference and address-of operators introduce a great deal of syntactic noise when dealing with pointed-to values rather than pointers, as well as the potential for subtle bugs.
 It would be desirable to have the compiler figure out how to elide the dereference operators in a complex expression such as @*p2 = ((*p1 + *p2) * (**p3 - *p1)) / (**p3 - 15);@, for both brevity and clarity.
 However, since C defines a number of forms of \emph{pointer arithmetic}, two similar expressions involving pointers to arithmetic types (\eg @*p1 + x@ and @p1 + x@) may each have well-defined but distinct semantics, introducing the possibility that a user programmer may write one when they mean the other, and precluding any simple algorithm for elision of dereference operators.
@@ -1220,6 +1220,6 @@
 \end{cfa}
 
-Except for auto-dereferencing by the compiler, this reference example is exactly the same as the previous pointer example. 
-Hence, a reference behaves like a variable name -- an lvalue expression which is interpreted as a value, but also has the type system track the address of that value. 
+Except for auto-dereferencing by the compiler, this reference example is exactly the same as the previous pointer example.
+Hence, a reference behaves like a variable name -- an lvalue expression which is interpreted as a value, but also has the type system track the address of that value.
 One way to conceptualize a reference is via a rewrite rule, where the compiler inserts a dereference operator before the reference variable for each reference qualifier in the reference variable declaration, so the previous example implicitly acts like:
 
@@ -1228,18 +1228,18 @@
 \end{cfa}
 
-References in \CFA are similar to those in \CC, but with a couple important improvements, both of which can be seen in the example above. 
-Firstly, \CFA does not forbid references to references, unlike \CC. 
-This provides a much more orthogonal design for library implementors, obviating the need for workarounds such as @std::reference_wrapper@. 
-
-Secondly, unlike the references in \CC which always point to a fixed address, \CFA references are rebindable. 
-This allows \CFA references to be default-initialized (to a null pointer), and also to point to different addresses throughout their lifetime. 
-This rebinding is accomplished without adding any new syntax to \CFA, but simply by extending the existing semantics of the address-of operator in C. 
-In C, the address of a lvalue is always a rvalue, as in general that address is not stored anywhere in memory, and does not itself have an address. 
-In \CFA, the address of a @T&@ is a lvalue @T*@, as the address of the underlying @T@ is stored in the reference, and can thus be mutated there. 
-The result of this rule is that any reference can be rebound using the existing pointer assignment semantics by assigning a compatible pointer into the address of the reference, \eg @&r1 = &x;@ above. 
+References in \CFA are similar to those in \CC, but with a couple of important improvements, both of which can be seen in the example above.
+Firstly, \CFA does not forbid references to references, unlike \CC.
+This provides a much more orthogonal design for library implementors, obviating the need for workarounds such as @std::reference_wrapper@.
+
+Secondly, unlike the references in \CC which always point to a fixed address, \CFA references are rebindable.
+This allows \CFA references to be default-initialized (to a null pointer), and also to point to different addresses throughout their lifetime.
+This rebinding is accomplished without adding any new syntax to \CFA, but simply by extending the existing semantics of the address-of operator in C.
+In C, the address of a lvalue is always a rvalue, as in general that address is not stored anywhere in memory, and does not itself have an address.
+In \CFA, the address of a @T&@ is a lvalue @T*@, as the address of the underlying @T@ is stored in the reference, and can thus be mutated there.
+The result of this rule is that any reference can be rebound using the existing pointer assignment semantics by assigning a compatible pointer into the address of the reference, \eg @&r1 = &x;@ above.
 This rebinding can occur to an arbitrary depth of reference nesting; $n$ address-of operators applied to a reference nested $m$ times will produce an lvalue pointer nested $n$ times if $n \le m$ (note that $n = m+1$ is simply the usual C rvalue address-of operator applied to the $n = m$ case).
 The explicit address-of operators can be thought of as ``cancelling out'' the implicit dereference operators, \eg @(&`*`)r1 = &x@ or @(&(&`*`)`*`)r3 = &(&`*`)r1@ or even @(&`*`)r2 = (&`*`)`*`r3@ for @&r2 = &r3@.
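+The following sketch illustrates assignment through a reference versus rebinding it (the variable names are illustrative, not from the benchmark code):
+\begin{cfa}
+int x = 1, y = 2;
+int & r = x;						$\C{// r refers to x}$
+r = 3;								$\C{// implicit dereference: x == 3}$
+`&r = &y;`							$\C{// rebind r to y via pointer assignment}$
+r = 4;								$\C{// y == 4, x unchanged}$
+\end{cfa}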
 
-Since pointers and references share the same internal representation, code using either is equally performant; in fact the \CFA compiler converts references to pointers internally, and the choice between them in user code can be made based solely on convenience. 
+Since pointers and references share the same internal representation, code using either is equally performant; in fact the \CFA compiler converts references to pointers internally, and the choice between them in user code can be made based solely on convenience.
 By analogy to pointers, \CFA references also allow cv-qualifiers:
 
@@ -1254,19 +1254,19 @@
 \end{cfa}
 
-Given that a reference is meant to represent a lvalue, \CFA provides some syntactic shortcuts when initializing references. 
-There are three initialization contexts in \CFA: declaration initialization, argument/parameter binding, and return/temporary binding. 
-In each of these contexts, the address-of operator on the target lvalue may (in fact, must) be elided. 
-The syntactic motivation for this is clearest when considering overloaded operator-assignment, \eg @int ?+=?(int &, int)@; given @int x, y@, the expected call syntax is @x += y@, not @&x += y@. 
-
-This initialization of references from lvalues rather than pointers can be considered a ``lvalue-to-reference'' conversion rather than an elision of the address-of operator; similarly, use of a the value pointed to by a reference in an rvalue context can be thought of as a ``reference-to-rvalue'' conversion. 
-\CFA includes one more reference conversion, an ``rvalue-to-reference'' conversion, implemented by means of an implicit temporary. 
-When an rvalue is used to initialize a reference, it is instead used to initialize a hidden temporary value with the same lexical scope as the reference, and the reference is initialized to the address of this temporary. 
-This allows complex values to be succinctly and efficiently passed to functions, without the syntactic overhead of explicit definition of a temporary variable or the runtime cost of pass-by-value. 
+Given that a reference is meant to represent a lvalue, \CFA provides some syntactic shortcuts when initializing references.
+There are three initialization contexts in \CFA: declaration initialization, argument/parameter binding, and return/temporary binding.
+In each of these contexts, the address-of operator on the target lvalue may (in fact, must) be elided.
+The syntactic motivation for this is clearest when considering overloaded operator-assignment, \eg @int ?+=?(int &, int)@; given @int x, y@, the expected call syntax is @x += y@, not @&x += y@.
+
+This initialization of references from lvalues rather than pointers can be considered a ``lvalue-to-reference'' conversion rather than an elision of the address-of operator; similarly, use of the value pointed to by a reference in an rvalue context can be thought of as a ``reference-to-rvalue'' conversion.
+\CFA includes one more reference conversion, an ``rvalue-to-reference'' conversion, implemented by means of an implicit temporary.
+When an rvalue is used to initialize a reference, it is instead used to initialize a hidden temporary value with the same lexical scope as the reference, and the reference is initialized to the address of this temporary.
+This allows complex values to be succinctly and efficiently passed to functions, without the syntactic overhead of explicit definition of a temporary variable or the runtime cost of pass-by-value.
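+As a minimal sketch of this temporary binding (the function @inc@ is illustrative):
+\begin{cfa}
+void inc( int & i ) { i += 1; }
+int x = 1, y = 2;
+inc( x + y );						$\C{// hidden temporary initialized to x + y, bound to i}$
+\end{cfa}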
 \CC allows a similar binding, but only for @const@ references; the more general semantics of \CFA are an attempt to avoid the \emph{const hell} problem, in which addition of a @const@ qualifier to one reference requires a cascading chain of added qualifiers.
 
 \subsection{Constructors and Destructors}
 
-One of the strengths of C is the control over memory management it gives programmers, allowing resource release to be more consistent and precisely timed than is possible with garbage-collected memory management. 
-However, this manual approach to memory management is often verbose, and it is useful to manage resources other than memory (\eg file handles) using the same mechanism as memory. 
+One of the strengths of C is the control over memory management it gives programmers, allowing resource release to be more consistent and precisely timed than is possible with garbage-collected memory management.
+However, this manual approach to memory management is often verbose, and it is useful to manage resources other than memory (\eg file handles) using the same mechanism as memory.
 \CC is well-known for an approach to manual memory management that addresses both these issues, Resource Acquisition Is Initialization (RAII), implemented by means of special \emph{constructor} and \emph{destructor} functions; we have implemented a similar feature in \CFA.
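+A minimal sketch of the \CFA feature, using the operator identifiers @?{}@ (constructor) and @^?{}@ (destructor); the @File@ type here is illustrative only:
+\begin{cfa}
+struct File { FILE * fp; };
+void ?{}( File & this, const char * name ) { this.fp = fopen( name, "r" ); }	$\C{// constructor}$
+void ^?{}( File & this ) { if ( this.fp ) fclose( this.fp ); }	$\C{// destructor, implicitly called at end of scope}$
+\end{cfa}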
 
@@ -1336,5 +1336,5 @@
 	TIMED( "copy_int", ti = si; )
 	TIMED( "clear_int", clear( &si ); )
-	REPEAT_TIMED( "pop_int", N, 
+	REPEAT_TIMED( "pop_int", N,
 		int xi = pop( &ti ); if ( xi > maxi ) { maxi = xi; } )
 	REPEAT_TIMED( "print_int", N/2, print( out, vali, ":", vali, "\n" ); )
@@ -1346,5 +1346,5 @@
 	TIMED( "copy_pair", tp = sp; )
 	TIMED( "clear_pair", clear( &sp ); )
-	REPEAT_TIMED( "pop_pair", N, 
+	REPEAT_TIMED( "pop_pair", N,
 		pair(_Bool, char) xp = pop( &tp ); if ( xp > maxp ) { maxp = xp; } )
 	REPEAT_TIMED( "print_pair", N/2, print( out, valp, ":", valp, "\n" ); )
@@ -1363,5 +1363,5 @@
 Note, the C benchmark uses unchecked casts as there is no runtime mechanism to perform such checks, while \CFA and \CC provide type-safety statically.
 
-Figure~\ref{fig:eval} and Table~\ref{tab:eval} show the results of running the benchmark in Figure~\ref{fig:BenchmarkTest} and its C, \CC, and \CCV equivalents. 
+Figure~\ref{fig:eval} and Table~\ref{tab:eval} show the results of running the benchmark in Figure~\ref{fig:BenchmarkTest} and its C, \CC, and \CCV equivalents.
 The graph plots the median of 5 consecutive runs of each program, with an initial warm-up run omitted.
 All code is compiled at \texttt{-O2} by GCC or G++ 6.2.0, with all \CC code compiled as \CCfourteen.
@@ -1397,7 +1397,7 @@
 Finally, the binary size for \CFA is larger because of static linking with the \CFA libraries.
 
-\CFA is also competitive in terms of source code size, measured as a proxy for programmer effort. The line counts in Table~\ref{tab:eval} include implementations of @pair@ and @stack@ types for all four languages for purposes of direct comparison, though it should be noted that \CFA and \CC have pre-written data structures in their standard libraries that programmers would generally use instead. Use of these standard library types has minimal impact on the performance benchmarks, but shrinks the \CFA and \CC benchmarks to 73 and 54 lines, respectively. 
+\CFA is also competitive in terms of source code size, measured as a proxy for programmer effort. The line counts in Table~\ref{tab:eval} include implementations of @pair@ and @stack@ types for all four languages for purposes of direct comparison, though it should be noted that \CFA and \CC have pre-written data structures in their standard libraries that programmers would generally use instead. Use of these standard library types has minimal impact on the performance benchmarks, but shrinks the \CFA and \CC benchmarks to 73 and 54 lines, respectively.
 On the other hand, C does not have a generic collections-library in its standard distribution, resulting in frequent reimplementation of such collection types by C programmers.
-\CCV does not use the \CC standard template library by construction, and in fact includes the definition of @object@ and wrapper classes for @bool@, @char@, @int@, and @const char *@ in its line count, which inflates this count somewhat, as an actual object-oriented language would include these in the standard library; 
+\CCV does not use the \CC standard template library by construction, and in fact includes the definition of @object@ and wrapper classes for @bool@, @char@, @int@, and @const char *@ in its line count, which inflates this count somewhat, as an actual object-oriented language would include these in the standard library;
 with their omission, the \CCV line count is similar to C.
 We justify the given line count by noting that many object-oriented languages do not allow implementing new interfaces on library types without subclassing or wrapper types, which may be similarly verbose.
@@ -1492,5 +1492,5 @@
 In addition, there are interesting future directions for the polymorphism design.
 Notably, \CC template functions trade compile time and code bloat for optimal runtime of individual instantiations of polymorphic functions.
-\CFA polymorphic functions use dynamic virtual-dispatch; 
+\CFA polymorphic functions use dynamic virtual-dispatch;
 the runtime overhead of this approach is low, but not as low as inlining, and it may be beneficial to provide a mechanism for performance-sensitive code.
 Two promising approaches are an @inline@ annotation at polymorphic function call sites to create a template-specialization of the function (provided the code is visible) or placing an @inline@ annotation on polymorphic function-definitions to instantiate a specialized version for some set of types (\CC template specialization).
