% source: doc/theses/fangren_yu_MMath/intro.tex @ e1107ec
% last change c82dad4 by Peter A. Buhr, 3 months ago: more proofreading of intro and content1 chapters
\chapter{Introduction}

This thesis is exploratory work I did to understand, fix, and extend the \CFA type-system, specifically, the type resolver used to select polymorphic types among overloaded names.
The \CFA type-system has a number of unique features that distinguish it from other programming languages.

Overloading allows programmers to use the most meaningful names without fear of name clashes within a program or from external sources, like include files.
\begin{quote}
There are only two hard things in Computer Science: cache invalidation and \emph{naming things}. --- Phil Karlton
\end{quote}
Experience from \CC and \CFA developers is that the type system implicitly and correctly disambiguates the majority of overloaded names, \ie it is rare to get an incorrect selection or ambiguity, even among hundreds of overloaded (variables and) functions.
In many cases, a programmer has no idea there are name clashes, as they are silently resolved, simplifying the development process.
Depending on the language, ambiguous cases are resolved using some form of qualification and/or casting.

One of the key goals in \CFA is to push the boundary on overloading, and hence, overload resolution.

\section{Types}

\begin{quote}
Some are born great, some achieve greatness, and some have greatness thrust upon them. --- Twelfth Night, Act II Scene 5, William Shakespeare
\end{quote}

All computers have multiple types because computer architects optimize the hardware around a few basic types with well defined (mathematical) operations: boolean, integral, floating-point, and occasionally strings.
A programming language and its compiler present ways to declare types that ultimately map into those provided by the underlying hardware.
These language types are thrust upon programmers, with their syntactic and semantic rules, and resulting restrictions.
A language type-system defines these rules and uses them to understand how an expression is to be evaluated by the hardware.
Modern programming-languages allow user-defined types and generalize across multiple types using polymorphism.
Type systems can be static, where each variable has a fixed type during execution and an expression's type is determined once at compile time, or dynamic, where each variable can change type during execution and so an expression's type is reconstructed on each evaluation.
Expressibility, generalization, and safety are all bound up in a language's type system, and hence, directly affect the capability, build time, and correctness of program development.


\section{Operator Overloading}

Virtually all programming languages overload the arithmetic operators across the basic computational types using the number and type of parameters and returns.
Like \CC, \CFA also allows these operators to be overloaded with user-defined types.
The syntax for operator names uses the @'?'@ character to denote a parameter, \eg unary operators: @?++@, @++?@, binary operator @?+?@.
Here, a user-defined type is extended with an addition operation with the same syntax as builtin types.
\begin{cfa}
struct S { int i, j };
S @?+?@( S op1, S op2 ) { return (S){ op1.i + op2.i, op1.j + op2.j }; }
S s1, s2;
s1 = s1 @+@ s2;			$\C[1.75in]{// infix call}$
s1 = @?+?@( s1, s2 );	$\C{// direct call}\CRT$
\end{cfa}
The type system examines each call site and selects the best matching overloaded function based on the number and types of arguments.
If there are mixed-mode operands, @2 + 3.5@, the type system, like in C/\CC, attempts (safe) conversions, converting the argument type(s) to the parameter type(s).
Conversions are necessary because the hardware rarely supports mixed-mode operations, so both operands must be the same type.
Note, without implicit conversions, programmers must write an exponential number of functions covering all possible exact-match cases among all possible types.
This approach does not match programmer intuition and expectation, regardless of any \emph{safety} issues resulting from converted values.


\section{Function Overloading}

Both \CFA and \CC allow function names to be overloaded, as long as their prototypes differ in the number and type of parameters and returns.
\begin{cfa}
void f( void );			$\C[2in]{// (1): no parameters}$
void f( char );			$\C{// (2): overloaded on the number and parameter type}$
void f( int, int );		$\C{// (3): overloaded on the number and parameter type}$
f( 'A' );				$\C{// select (2)}\CRT$
\end{cfa}
In this case, the name @f@ is overloaded depending on the number and parameter types.
The type system examines each call site and selects the best match based on the number and types of the arguments.
Here, there is a perfect match for the call, @f( 'A' )@, with the number and parameter type of function (2).
Ada, Scala, and \CFA type-systems also use the return type in resolving a call, to pinpoint the best overloaded name.
For example, in many programming languages with overloading, the following functions are ambiguous without using the return type.
\begin{cfa}
int f( int );			$\C[2in]{// (1); overloaded on return type and parameter}$
double f( int );		$\C{// (2); overloaded on return type and parameter}$
int i = f( 3 );			$\C{// select (1)}$
double d = f( 3 );		$\C{// select (2)}\CRT$
\end{cfa}
However, if the type system looks at the return type, there is an exact match for each call, which matches programmer intuition and expectation.
This capability can be taken to the extreme, where there are no function parameters.
\begin{cfa}
int random( void );		$\C[2in]{// (1); overloaded on return type}$
double random( void );	$\C{// (2); overloaded on return type}$
int i = random();		$\C{// select (1)}$
double d = random();	$\C{// select (2)}\CRT$
\end{cfa}
Again, there is an exact match for each call.
If there is no exact match, a set of minimal conversions can be added to find a best match, as for operator overloading.


\section{Variable Overloading}

Unlike most programming languages, \CFA has variable overloading within a scope, along with shadow overloading in nested scopes.
(Shadow overloading is also possible for functions, if a language supports nested function declarations, \eg \CC named, nested, lambda functions.)
\begin{cfa}
void foo( double d );
int v;					$\C[2in]{// (1)}$
double v;				$\C{// (2) variable overloading}$
foo( v );				$\C{// select (2)}$
{
	int v;				$\C{// (3) shadow overloading}$
	double v;			$\C{// (4) and variable overloading}$
	foo( v );			$\C{// select (4)}\CRT$
}
\end{cfa}
It is interesting that shadow overloading is considered a normal programming-language feature with only slight software-engineering problems, but variable overloading within a scope is often considered extremely dangerous.

In \CFA, the type system simply treats overloaded variables as an overloaded function returning a value with no parameters.
Hence, no significant effort is required to support this feature.
Leveraging the return type to disambiguate is essential because variables have no parameters.
\begin{cfa}
int MAX = 2147483647;	$\C[2in]{// (1); overloaded on return type}$
double MAX = ...;		$\C{// (2); overloaded on return type}$
int i = MAX;			$\C{// select (1)}$
double d = MAX;			$\C{// select (2)}\CRT$
\end{cfa}


\section{Type Inferencing}

Every variable has a type, but association between them can occur in different ways:
at the point where the variable comes into existence (declaration) and/or on each assignment to the variable.
\begin{cfa}
double x;				$\C{// type only}$
float y = 3.1D;			$\C{// type and initialization}$
auto z = y;				$\C{// initialization only}$
z = "abc";				$\C{// assignment}$
\end{cfa}
For type-and-initialization, the specified and initialization types may not agree.
Similarly, for assignment the current variable and expression types may not agree.
For type-only, the programmer specifies the initial type, which remains fixed for the variable's lifetime in statically typed languages.
In the other cases, the compiler may select the type by melding programmer and context information.
When the compiler participates in type selection, it is called \newterm{type inferencing}.
Note, type inferencing is different from type conversion: type inferencing \emph{discovers} a variable's type before setting its value, whereas conversion has two typed values and performs a (possibly lossy) action to convert one value to the type of the other variable.

One of the first and most powerful type-inferencing systems is Hindley--Milner~\cite{Damas82}.
Here, the type resolver starts with the types of the program constants used for initialization and these constant types flow throughout the program, setting all variable and expression types.
\begin{cfa}
auto f() {
	x = 1;   y = 3.5;	$\C{// set types from constants}$
	x = // expression involving x, y and other local initialized variables
	y = // expression involving x, y and other local initialized variables
	return x, y;
}
auto w = f();			$\C{// typing flows outwards}$

void f( auto x, auto y ) {
	x = // expression involving x, y and other local initialized variables
	y = // expression involving x, y and other local initialized variables
}
s = 1;   t = 3.5;		$\C{// set types from constants}$
f( s, t );				$\C{// typing flows inwards}$
\end{cfa}
In both overloads of @f@, the type system works from the constant initializations inwards and/or outwards to determine the types of all variables and functions.
Note, like template meta-programming, there could be a new function generated for the second @f@ depending on the types of the arguments, assuming these types are meaningful in the body of @f@.
Inferring type constraints, by analysing the body of @f@, is possible, and these constraints must be satisfied at each call site by the argument types;
in this case, parametric polymorphism can allow separate compilation.
In languages with type inferencing, there is often limited overloading to reduce the search space, which introduces the naming problem.
Return-type inferencing goes in the opposite direction to Hindley--Milner: knowing the type of the result and flowing back through an expression to help select the best possible overloads, and possibly converting the constants for a best match.

In simpler type-inferencing systems, such as C/\CC/\CFA, there are more specific usages.
\begin{cquote}
\setlength{\tabcolsep}{10pt}
\begin{tabular}{@{}lll@{}}
\multicolumn{1}{c}{\textbf{gcc / \CFA}} & \multicolumn{1}{c}{\textbf{\CC}} \\
\begin{cfa}
#define expr 3.0 * i
typeof(expr) x = expr;
int y;
typeof(y) z = y;
\end{cfa}
&
\begin{cfa}

auto x = 3.0 * 4;
int y;
auto z = y;
\end{cfa}
&
\begin{cfa}

// use type of initialization expression

// use type of initialization expression
\end{cfa}
\end{tabular}
\end{cquote}
The two important capabilities are:
\begin{itemize}[topsep=0pt]
\item
Not determining or writing long generic types, \eg, given deeply nested generic types.
\begin{cfa}
typedef T1(int).T2(float).T3(char).T @ST@;  $\C{// \CFA nested type declaration}$
@ST@ x, y, z;
\end{cfa}
This issue is exaggerated with \CC templates, where type names are 100s of characters long, resulting in unreadable error messages.
\item
Ensuring the types of secondary variables always match a primary variable.
\begin{cfa}
int x; $\C{// primary variable}$
typeof(x) y, z, w; $\C{// secondary variables match x's type}$
\end{cfa}
If the type of @x@ changes, the types of the secondary variables correspondingly update.
\end{itemize}
Note, the use of @typeof@ is more restrictive, and possibly safer, than general type-inferencing.
\begin{cfa}
int x;
typeof(x) y = ... // complex expression
typeof(x) z = ... // complex expression
\end{cfa}
Here, the types of @y@ and @z@ are fixed (branded), whereas with type inferencing, the types of @y@ and @z@ are potentially unknown.


\section{Type-Inferencing Issues}

Each kind of type-inferencing system has its own set of issues that flow onto the programmer in the form of conveniences, restrictions, or confusions.

A convenience is having the compiler use its overarching program knowledge to select the best type for each variable based on some notion of \emph{best}, which simplifies the programming experience.

A restriction is the conundrum in type inferencing of when to \emph{brand} a type.
That is, when is the type of the variable/function more important than the type of its initialization expression?
For example, if a change is made in an initialization expression, it can cause cascading type changes and/or errors.
At some point, a variable's type needs to remain constant and the initializing expression needs to be modified or in error when it changes.
Often type-inferencing systems allow restricting (\newterm{branding}) a variable or function type, so the compiler can report a mismatch with the constant initialization.
\begin{cfa}
void f( @int@ x, @int@ y ) {  // brand function prototype
	x = // expression involving x, y and other local initialized variables
	y = // expression involving x, y and other local initialized variables
}
s = 1;   t = 3.5;
f( s, @t@ ); // type mismatch
\end{cfa}
In Haskell, it is common for programmers to brand (type) function parameters.

A confusion is large blocks of code where all declarations are @auto@.
As a result, understanding and changing the code becomes almost impossible.
Types provide important clues as to the behaviour of the code, and correspondingly how to correctly change or add new code.
In these cases, a programmer is forced to re-engineer types, which is fragile, or rely on a fancy IDE that can re-engineer types.
For example, given:
\begin{cfa}
auto x = @...@
\end{cfa}
and the need to write a routine to compute using @x@
\begin{cfa}
void rtn( @type of x@ parm );
rtn( x );
\end{cfa}
A programmer must re-engineer the type of @x@'s initialization expression, reconstructing the possibly long generic type-name.
In this situation, having the type name or its short alias is essential.

\CFA's type system tries to prevent type-resolution mistakes by relying heavily on the type of the left-hand side of assignment to pinpoint the right types within an expression.
Type inferencing defeats this goal because there is no left-hand type.
Fundamentally, type inferencing tries to magic away variable types from the programmer.
However, this results in lazy programming with the potential for poor performance and safety concerns.
Types are as important as control-flow in writing a good program, and should not be masked, even if it requires the programmer to think!
A similar issue is garbage collection, where storage management is masked, resulting in poor program design and performance.
The entire area of Computer-Science data-structures is obsessed with time and space, and that obsession should continue into regular programming.
Understanding space and time issues is an essential part of the programming craft.
Given @typedef@ and @typeof@ in \CFA, and the strong need to use the left-hand type in resolution, implicit type-inferencing is unsupported.
Should a significant need arise, this feature can be revisited.


\section{Polymorphism}



\section{Contributions}



\begin{comment}
From: Andrew James Beach <ajbeach@uwaterloo.ca>
To: Peter Buhr <pabuhr@uwaterloo.ca>, Michael Leslie Brooks <mlbrooks@uwaterloo.ca>,
    Fangren Yu <f37yu@uwaterloo.ca>, Jiada Liang <j82liang@uwaterloo.ca>
Subject: Re: Haskell
Date: Fri, 30 Aug 2024 16:09:06 +0000

Do you mean:

one = 1

And then write a bunch of code that assumes it is an Int or Integer (which are roughly int and Int in Cforall) and then replace it with:

one = 1.0

And have that crash? That is actually enough, for some reason Haskell is happy to narrow the type of the first literal (Num a => a) down to Integer but will not do the same for (Fractional a => a) and Rational (which is roughly Integer for real numbers). Possibly a compatibility thing since before Haskell had polymorphic literals.

Now, writing even the first version will fire a -Wmissing-signatures warning, because it does appear to be understood that just from a documentation perspective, people want to know what types are being used. Now, if you have the original case and start updating the signatures (adding one :: Fractional a => a), you can eventually get into issues, for example:

import Data.Array (Array, Ix, (!))
atOne :: (Ix a, Frational a) => Array a b -> b - - In CFA: forall(a | Ix(a) | Frational(a), b) b atOne(Array(a, b) const & array)
atOne = (! one)

Which compiles and is fine except for the slightly awkward fact that I don't know of any types that are both Ix and Fractional types. So you might never be able to find a way to actually use that function. If that is good enough you can reduce that to three lines and use it.

Something that just occurred to me, after I did the above examples, is: Are there any classic examples in literature I could adapt to Haskell?

Andrew

PS, I think it is too obvious of a significant change to work as a good example but I did mock up the structure of what I am thinking you are thinking about with a function. If this helps here it is.

doubleInt :: Int -> Int
doubleInt x = x * 2

doubleStr :: String -> String
doubleStr x = x ++ x

-- Missing Signature
action = doubleInt - replace with doubleStr

main :: IO ()
main = print $ action 4
\end{comment}