Changeset f6664bf2
- Timestamp:
- Feb 16, 2021, 1:32:24 PM
- Branches:
- ADT, arm-eh, ast-experimental, enum, forall-pointer-decay, jacob/cs343-translation, master, new-ast-unique-expr, pthread-emulation, qualifiedEnum
- Children:
- feacef9
- Parents:
- 14533d4 (diff), 1830a86 (diff)
Note: this is a merge changeset; the changes displayed below correspond to the merge itself. Use the (diff) links above to see all the changes relative to each parent.
- Location:
- doc
- Files:
- 8 edited
Legend:
- unchanged (no prefix)
- added (+)
- removed (-)
doc/LaTeXmacros/common.tex
(r14533d4 → rf6664bf2)

  %% Created On       : Sat Apr  9 10:06:17 2016
  %% Last Modified By : Peter A. Buhr
- %% Last Modified On : Mon Feb  8 21:45:41 2021
- %% Update Count     : 522
+ %% Last Modified On : Sun Feb 14 15:52:46 2021
+ %% Update Count     : 524
  %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

…

  % The star version does not lowercase the index information, e.g., \newterm*{IBM}.
  \newcommand{\newtermFontInline}{\emph}
- \newcommand{\newterm}{\@ifstar\@snewterm\@newterm}
+ \newcommand{\newterm}{\protect\@ifstar\@snewterm\@newterm}
  \newcommand{\@newterm}[2][\@empty]{\lowercase{\def\temp{#2}}{\newtermFontInline{#2}}\ifx#1\@empty\index{\temp}\else\index{#1@{\protect#2}}\fi}
  \newcommand{\@snewterm}[2][\@empty]{{\newtermFontInline{#2}}\ifx#1\@empty\index{#2}\else\index{#1@{\protect#2}}\fi}

…

  \ifdefined\CFALatin% extra Latin-1 escape characters
- \lstnewenvironment{cfa}[1][]{
+ \lstnewenvironment{cfa}[1][]{% necessary
  \lstset{
  language=CFA,
…
  %moredelim=[is][\lstset{keywords={}}]{¶}{¶}, % keyword escape ¶...¶ (pilcrow symbol) emacs: C-q M-^
  }% lstset
- \lstset{#1}
+ \lstset{#1}% necessary
  }{}
  % inline code ©...© (copyright symbol) emacs: C-q M-)
  \lstMakeShortInline© % single-character for \lstinline
  \else% regular ASCI characters
- \lstnewenvironment{cfa}[1][]{
+ \lstnewenvironment{cfa}[1][]{% necessary
  \lstset{
  language=CFA,
…
  moredelim=**[is][\color{red}]{@}{@}, % red highlighting @...@
  }% lstset
- \lstset{#1}
+ \lstset{#1}% necessary
  }{}
  % inline code @...@ (at symbol)
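For context on the \newterm change above: \@ifstar peeks at the next token to test for a star, which makes the command fragile. A hypothetical illustration (not part of this changeset) of where the added \protect matters:

% \newterm used in a moving argument, e.g. a section title, is written
% out again to the .toc and .idx files. With the old definition the
% \@ifstar star-test could be expanded there and break:
\section{Monitors provide \newterm{mutual exclusion}}
% With the patched definition, \protect defers \@ifstar until the title
% is actually typeset, so the same line survives headings and the index.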
doc/papers/concurrency/mail2
(r14533d4 → rf6664bf2)

Added after line 1289:

From: "Wiley Online Proofing" <onlineproofing@eproofing.in>
To: pabuhr@uwaterloo.ca
Reply-To: eproofing@wiley.com
Date: 3 Nov 2020 08:25:06 +0000
Subject: Action: Proof of SPE_EV_SPE2925 for Software: Practice And Experience ready for review

Dear Dr. Peter Buhr,

The proof of your Software: Practice And Experience article Advanced control-flow in Cforall is now available for review:

Edit Article https://wiley.eproofing.in/Proof.aspx?token=ab7739d5678447fbbe5036f3bcba2445081500061

To review your article, please complete the following steps, ideally within 48 hours*, so we can publish your article as quickly as possible.

1. Open your proof in the online proofing system using the button above.
2. Check the article for correctness and respond to all queries. For instructions on using the system, please see the "Help" menu in the upper right corner.
3. Submit your changes by clicking the "Submit" button in the proofing system.

Helpful Tips

* Your manuscript has been formatted following the style requirements for the journal. Any requested changes that go against journal style will not be made.
* Your proof will include queries. These must be replied to using the system before the proof can be submitted.
* The only acceptable changes at this stage are corrections to grammatical errors or data accuracy, or to provide higher resolution figure files (if requested by the typesetter).
* Any changes to scientific content or authorship will require editorial review and approval.
* Once your changes are complete, submit the article after which no additional corrections can be requested.
* Most authors complete their corrections within 48 hours. Returning any corrections promptly will accelerate publication of your article.

If you encounter any problems or have questions, please contact the production office at (SPEproofs@wiley.com). For the quickest response, include the journal name and your article ID (found in the subject line) in all correspondence.

Best regards,
Software: Practice And Experience Production Office

* We appreciate that the COVID-19 pandemic may create conditions for you that make it difficult for you to review your proof within standard timeframes. If you have any problems keeping to this schedule, please reach out to me at (SPEproofs@wiley.com) to discuss alternatives.

(unchanged context)
From: "Pacaanas, Joel -" <jpacaanas@wiley.com>
To: "Peter A. Buhr" <pabuhr@uwaterloo.ca>

…

A blank line was removed between these two unchanged lines:

Since the proof was reset, your added corrections before has also been removed. Please add them back.
Please return your corrections at your earliest convenience.

…

Added after the existing sign-off "Best regards, / Joel Pacaanas":

Date: Wed, 2 Dec 2020 08:49:52 +0000
From: <cs-author@wiley.com>
To: <pabuhr@uwaterloo.ca>
Subject: Published: Your article is now published in Early View!

Dear Peter Buhr,

Your article Advanced Control-flow and Concurrency in C∀ in Software: Practice and Experience has the following publication status: Published as Early View

To access your article, please click the following link to register or log in:

https://authorservices.wiley.com/index.html#register

You can also access your published article via this link: http://dx.doi.org/10.1002/spe.2925

If you need any assistance, please click here https://hub.wiley.com/community/support/authorservices to view our Help section.

Sincerely,
Wiley Author Services


Date: Wed, 2 Dec 2020 02:16:23 -0500
From: <no-reply@copyright.com>
To: <pabuhr@uwaterloo.ca>
CC: <SPEproofs@wiley.com>
Subject: Please submit your publication fee(s) SPE2925

John Wiley and Sons
Please submit your selection and payment for publication fee(s).

Dear Peter A. Buhr,

Congratulations, your article in Software: Practice and Experience has published online:

Manuscript DOI: 10.1002/spe.2925
Manuscript ID: SPE2925
Manuscript Title: Advanced control-flow in Cforall
Published by: John Wiley and Sons

Please carefully review your publication options. If you wish your colour figures to be printed in colour, you must select and pay for that option now using the RightsLink e-commerce solution from CCC.

Review my options & pay charges
https://oa.copyright.com/apc-payment-ui/overview?id=f46ba36a-2565-4c8d-8865-693bb94d87e5&chargeset=CHARGES

To review and pay your charge(s), please click here
<https://oa.copyright.com/apc-payment-ui/overview?id=f46ba36a-2565-4c8d-8865-693bb94d87e5&chargeset=CHARGES>. You can also forward this link to another party for processing.

To complete a secure transaction, you will need a RightsLink account
<https://oa.copyright.com/apc-payment-ui/registration?id=f46ba36a-2565-4c8d-8865-693bb94d87e5&chargeset=CHARGES>. If you do not have one already, you will be prompted to register as you are checking out your author charges. This is a very quick process; the majority of your registration form will be pre-populated automatically with information we have already supplied to RightsLink.

If you have any questions about these charges, please contact CCC Customer Service <wileysupport@copyright.com> using the information below. Please do not reply directly to this email as this is an automated email notification sent from an unmonitored account.

Sincerely,
John Wiley and Sons

Tel.: +1-877-622-5543 / +1-978-646-2777
wileysupport@copyright.com
www.copyright.com

Copyright Clearance Center
RightsLink

This message (including attachments) is confidential, unless marked otherwise. It is intended for the addressee(s) only. If you are not an intended recipient, please delete it without further distribution and reply to the sender that you have received the message in error.


From: "Pacaanas, Joel -" <jpacaanas@wiley.com>
To: "Peter A. Buhr" <pabuhr@uwaterloo.ca>
Subject: RE: Please submit your publication fee(s) SPE2925
Date: Thu, 3 Dec 2020 08:45:10 +0000

Dear Dr Buhr,

Thank you for your email and concern with regard to the RightsLink account. As you have mentioned that all figures will be printed as black and white, then I have selected it manually from the system to proceed further.

Best regards,
Joel

Joel Q. Pacaanas
Production Editor
On behalf of Wiley
Manila
We partner with global experts to further innovative research.

E-mail: jpacaanas@wiley.com
Tel: +632 88558618
Fax: +632 5325 0768

-----Original Message-----
From: Peter A. Buhr [mailto:pabuhr@uwaterloo.ca]
Sent: Thursday, December 3, 2020 12:28 AM
To: SPE Proofs <speproofs@wiley.com>
Subject: Re: Please submit your publication fee(s) SPE2925

I am trying to complete the forms to submit my publication fee.

I clicked all the boxs to print in Black and White, so there is no fee.

I then am asked to create RightsLink account, which I did.

However, it requires that I click a box agreeing to:

I consent to have my contact information shared with my publisher and/or funding organization, as needed, to facilitate APC payment(s), reporting and customer care.

I do not agree to this sharing and will not click this button.

How would you like to proceed?


From: "Pacaanas, Joel -" <jpacaanas@wiley.com>
To: "Peter A. Buhr" <pabuhr@uwaterloo.ca>
Subject: RE: Please submit your publication fee(s) SPE2925
Date: Fri, 4 Dec 2020 07:55:59 +0000

Dear Peter,

Yes, you are now done with this selection.

Thank you.

Best regards,
Joel

Joel Q. Pacaanas
Production Editor
On behalf of Wiley
Manila
We partner with global experts to further innovative research.

E-mail: jpacaanas@wiley.com
Tel: +632 88558618
Fax: +632 5325 0768

-----Original Message-----
From: Peter A. Buhr [mailto:pabuhr@uwaterloo.ca]
Sent: Thursday, December 3, 2020 10:29 PM
To: Pacaanas, Joel - <jpacaanas@wiley.com>
Subject: Re: Please submit your publication fee(s) SPE2925

Thank you for your email and concern with regard to the RightsLink account. As you have mentioned that all figures will be printed as black and white, then I have selected it manually from the system to proceed further.

Just be clear, am I done? Meaning I do not have to go back to that web-page again.
doc/theses/andrew_beach_MMath/features.tex
(r14533d4 → rf6664bf2)

  virtual table type; which usually has a mangled name.
  % Also \CFA's trait system handles functions better than constants and doing
- % it this way
+ % it this way reduces the amount of boilerplate we need.

  % I did have a note about how it is the programmer's responsibility to make
…
  % similar system I know of (except Agda's I guess) so I took it out.

- \section{Raise}
- \CFA provides two kinds of exception raise: termination
- \see{\VRef{s:Termination}} and resumption \see{\VRef{s:Resumption}}, which are
- specified with the following traits.
+ There are two more traits for exceptions, @is_termination_exception@ and
+ @is_resumption_exception@. They are defined as follows:

  \begin{cfa}
  trait is_termination_exception(
…
  	void defaultTerminationHandler(exceptT &);
  };
- \end{cfa}
- The function is required to allow a termination raise, but is only called if a
- termination raise does not find an appropriate handler.
-
- Allowing a resumption raise is similar.
- \begin{cfa}
+
  trait is_resumption_exception(
  	exceptT &, virtualT & | is_exception(exceptT, virtualT)) {
…
  };
  \end{cfa}
- The function is required to allow a resumption raise, but is only called if a
- resumption raise does not find an appropriate handler.
-
- Finally there are three convenience macros for referring to these traits:
+
+ In other words, they make sure that a given type and virtual type is an
+ exception and define one of the two default handlers. These default handlers
+ are used in the main exception handling operations \see{Exception Handling}
+ and their use will be detailed there.
+
+ However, all three of these traits can be tricky to use directly.
+ There is a bit of repetition required, but the largest issue is that the
+ virtual table type is mangled and not in a user-facing way. So there are
+ three macros that can be used to wrap these traits when you need to refer
+ to the names:
  @IS_EXCEPTION@, @IS_TERMINATION_EXCEPTION@ and @IS_RESUMPTION_EXCEPTION@.
- All three traits are hard to use while naming the virtual table as it has an
- internal mangled name. These macros take the exception name as their first
- argument and do the mangling. They all take a second argument for polymorphic
- types which is the parenthesized list of polymorphic arguments. These
- arguments are passed to both the exception type and the virtual table type as
- the arguments do have to match.
+ All take one or two arguments. The first argument is the name of the
+ exception type; its unmangled and mangled forms are passed to the trait.
+ The second (optional) argument is a parenthesized list of polymorphic
+ arguments. This argument should only be used with polymorphic exceptions,
+ and the list will be passed to both types.
+ In the current set-up the base name and the polymorphic arguments have to
+ match, so these macros can be used without losing flexibility.

  For example consider a function that is polymorphic over types that have a
…
  \end{cfa}

+ \section{Exception Handling}
+ \CFA provides two kinds of exception handling, termination and resumption.
+ These twin operations are the core of the exception handling mechanism and
+ are the reason for the features of exceptions.
+ This section will cover the general patterns shared by the two operations
+ and then go on to cover the details of each individual operation.
+
+ Both operations follow the same set of steps.
+ Both start with the user performing a throw of an exception.
+ Then there is the search for a handler; if one is found, the exception
+ is caught and the handler is run. After that, control returns to normal
+ execution.
+ If the search fails, a default handler is run and then control
+ returns to normal execution immediately. That is where the default handlers
+ @defaultTerminationHandler@ and @defaultResumptionHandler@ are used.

  \subsection{Termination}
  \label{s:Termination}

- Termination raise, called ``throw'', is familiar and used in most programming
- languages with exception handling. The semantics of termination is: search the
- stack for a matching handler, unwind the stack frames to the matching handler,
- execute the handler, and continue execution after the handler. Termination is
- used when execution \emph{cannot} return to the throw. To continue execution,
- the program must \emph{recover} in the handler from the failed (unwound)
- execution at the raise to safely proceed after the handler.
-
- A termination raise is started with the @throw@ statement:
+ Termination handling is the more familiar kind, used in most programming
+ languages with exception handling.
+ It is a dynamic, non-local goto. If a throw is successful, the stack will
+ be unwound and control will (usually) continue in a different function on
+ the call stack. It is commonly used when an error has occurred and recovery
+ is impossible in the current function.
+
+ % (usually) Control can continue in the current function but then a different
+ % control flow construct should be used.
+
+ A termination throw is started with the @throw@ statement:
  \begin{cfa}
  throw EXPRESSION;
…
  change the throw's behavior (see below).

- At runtime, the exception returned by the expression
- is copied into managed memory (heap) to ensure it remains in
- scope during unwinding. It is the user's responsibility to ensure the original
- exception object at the throw is freed when it goes out of scope. Being
- allocated on the stack is sufficient for this.
-
- Then the exception system searches the stack starting from the throw and
- proceeding towards the base of the stack, from callee to caller. At each stack
- frame, a check is made for termination handlers defined by the @catch@ clauses
- of a @try@ statement.
+ The throw will copy the provided exception into managed memory. It is the
+ user's responsibility to ensure the original exception is cleaned up if the
+ stack is unwound (allocating it on the stack should be sufficient).
+
+ Then the exception system searches the stack using the copied exception.
+ It starts from the throw and proceeds to the base of the stack,
+ from callee to caller.
+ At each stack frame, a check is made for termination handlers defined by the
+ @catch@ clauses of a @try@ statement.
  \begin{cfa}
  try {
  	GUARDED_BLOCK
- } catch (EXCEPTION_TYPE$\(_1\)$ * NAME$\(_1\)$) { // termination handler 1
+ } catch (EXCEPTION_TYPE$\(_1\)$ * NAME$\(_1\)$) {
  	HANDLER_BLOCK$\(_1\)$
- } catch (EXCEPTION_TYPE$\(_2\)$ * NAME$\(_2\)$) { // termination handler 2
+ } catch (EXCEPTION_TYPE$\(_2\)$ * NAME$\(_2\)$) {
  	HANDLER_BLOCK$\(_2\)$
  }
  \end{cfa}
- The statements in the @GUARDED_BLOCK@ are executed. If those statements, or any
- functions invoked from those statements, throws an exception, and the exception
+ When viewed on its own, a try statement will simply execute the statements in
+ the @GUARDED_BLOCK@, and when those are finished, the try statement finishes.
+
+ However, while the guarded statements are being executed, including any
+ functions they invoke, all the handlers following the try block are active.
+ If an exception is thrown from those statements, and the exception
  is not handled by a try statement further up the stack, the termination
  handlers are searched for a matching exception type from top to bottom.
…

  freed and control continues after the try statement.

- The default handler visible at the throw statement is used if no matching
- termination handler is found after the entire stack is searched. At that point,
- the default handler is called with a reference to the exception object
- generated at the throw. If the default handler returns, control continues
- from after the throw statement. This feature allows
- each exception type to define its own action, such as printing an informative
- error message, when an exception is not handled in the program.
- However the default handler for all exception types triggers a cancellation
- using the exception.
+ If no handler is found during the search, then the default handler is run.
+ Through \CFA's trait system, the best match at the throw site will be used.
+ This function is run and is passed the copied exception. After the default
+ handler is run, control continues after the throw statement.
+
+ There is a global @defaultTerminationHandler@ that cancels the current stack
+ with the copied exception. However, it is generic over all exception types,
+ so new default handlers can be defined for different exception types, and
+ different exception types can thus have different default handlers.

  \subsection{Resumption}
  \label{s:Resumption}

- Resumption raise, called ``resume'', is as old as termination
- raise~\cite{Goodenough75} but is less popular. In many ways, resumption is
- simpler and easier to understand, as it is simply a dynamic call.
- The semantics of resumption is: search the stack for a matching handler,
- execute the handler, and continue execution after the resume. Notice, the stack
- cannot be unwound because execution returns to the raise point. Resumption is
- used when execution \emph{can} return to the resume. To continue
- execution, the program must \emph{correct} in the handler for the failed
- execution at the raise so execution can safely continue after the resume.
+ Resumption exception handling is a less common form than termination but is
+ just as old~\cite{Goodenough75} and is in some sense simpler.
+ It is a dynamic, non-local function call. If the throw is successful, a
+ closure will be taken from up the stack and executed, after which the
+ throwing function will continue executing.
+ It is most often used when an error has occurred, and if the error is
+ repaired then the function can continue.

  A resumption raise is started with the @throwResume@ statement:
  \begin{cfa}
  throwResume EXPRESSION;
…
  The semantics of the @throwResume@ statement are like the @throw@, but the
  expression has to return a reference to a type that satisfies the trait
- @is_resumption_exception@. Like with termination the exception system can
- use these assertions while (throwing/raising/handling) the exception.
+ @is_resumption_exception@. The assertions from this trait are available to
+ the exception system while handling the exception.

  At runtime, no copies are made. As the stack is not unwound the exception and
  any values on the stack will remain in scope while the resumption is handled.

- Then the exception system searches the stack starting from the resume and
- proceeding to the base of the stack, from callee to caller. At each stack
- frame, a check is made for resumption handlers defined by the @catchResume@
- clauses of a @try@ statement.
+ Then the exception system searches the stack using the provided exception.
+ It starts from the throw and proceeds to the base of the stack,
+ from callee to caller.
+ At each stack frame, a check is made for resumption handlers defined by the
+ @catchResume@ clauses of a @try@ statement.
  \begin{cfa}
  try {
…
  }
  \end{cfa}
- The statements in the @GUARDED_BLOCK@ are executed. If those statements, or any
- functions invoked from those statements, resumes an exception, and the
- exception is not handled by a try statement further up the stack, the
- resumption handlers are searched for a matching exception type from top to
- bottom. (Note, termination and resumption handlers may be intermixed in a @try@
- statement but the kind of raise (throw/resume) only matches with the
- corresponding kind of handler clause.)
-
- The exception search and matching for resumption is the same as for
- termination, including exception inheritance. The difference is when control
- reaches the end of the handler: the resumption handler returns after the resume
- rather than after the try statement. The resume point assumes the handler has
- corrected the problem so execution can safely continue.
+ If the handlers are not involved in a search, this will simply execute the
+ @GUARDED_BLOCK@ and then continue to the next statement.
+ Its purpose is to add handlers onto the stack.
+ (Note, termination and resumption handlers may be intermixed in a @try@
+ statement, but the kind of throw must be the same as the handler for it to
+ be considered as a possible match.)
+
+ If a search for a resumption handler reaches a try block, it will check each
+ @catchResume@ clause, top-to-bottom.
+ At each handler, if the thrown exception is, or is a child type of,
+ @EXCEPTION_TYPE@$_i$, then a pointer to the exception is bound to
+ @NAME@$_i$ and @HANDLER_BLOCK@$_i$ is executed. After the block is
+ finished, control will return to the @throwResume@ statement.

  Like termination, if no resumption handler is found, the default handler
- visible at the resume statement is called, and the system default action is
- executed.
-
- For resumption, the exception system uses stack marking to partition the
- resumption search. If another resumption exception is raised in a resumption
- handler, the second exception search does not start at the point of the
- original raise. (Remember the stack is not unwound and the current handler is
- at the top of the stack.) The search for the second resumption starts at the
- current point on the stack because new try statements may have been pushed by
- the handler or functions called from the handler. If there is no match back to
- the point of the current handler, the search skips\label{p:searchskip} the
- stack frames already searched by the first resume and continues after
- the try statement. The default handler always continues from the default
- handler associated with the point where the exception is created.
+ visible at the throw statement is called. It will use the best match at the
+ call site according to \CFA's overloading rules. The default handler is
+ passed the exception given to the throw. When the default handler finishes,
+ execution continues after the throw statement.
+
+ There is a global @defaultResumptionHandler@ that is polymorphic over all
+ termination exceptions and performs a termination throw on the exception.
+ The @defaultTerminationHandler@ for that throw is matched at the original
+ throw statement (the resumption @throwResume@), and it can be customized by
+ introducing a new or better match as well.
+
+ % \subsubsection?
+
+ A key difference between resumption and termination is that resumption does
+ not unwind the stack. A side effect is that when a handler is matched
+ and run, its try block (the guarded statements) and every try statement
+ searched before it are still on the stack. This can lead to the recursive
+ resumption problem.
+ The recursive resumption problem is any situation where a resumption handler
+ ends up being called while it is running.
+ Consider a trivial case:
+ \begin{cfa}
+ try {
+ 	throwResume (E &){};
+ } catchResume(E *) {
+ 	throwResume (E &){};
+ }
+ \end{cfa}
+ When this code is executed, the guarded @throwResume@ will throw, start a
+ search and match the handler in the @catchResume@ clause. The handler will
+ be called and placed on the stack on top of the try block. The second
+ @throwResume@ then searches the same try block and calls another instance
+ of the same handler, leading to an infinite loop.
+
+ This situation is trivial and easy to avoid, but much more complex cycles
+ can form with multiple handlers and different exception types.
+
+ To prevent all of these cases, we mask sections of the stack, or
+ equivalently the try statements on the stack, so that the resumption search
+ skips over them and continues with the next unmasked section of the stack.
+
+ A section of the stack is masked when it is searched to see if it contains
+ a handler for an exception, and unmasked when that exception has been
+ handled or the search was completed without finding a handler.

  % This might need a diagram. But it is an important part of the justification
…
  \end{verbatim}

- This resumption search pattern reflects the one for termination, and so
- should come naturally to most programmers.
- However, it avoids the \emph{recursive resumption} problem.
- If parts of the stack are searched multiple times, loops
- can easily form resulting in infinite recursion.
-
- Consider the trivial case:
- \begin{cfa}
- try {
- 	throwResume (E &){}; // first
- } catchResume(E *) {
- 	throwResume (E &){}; // second
- }
- \end{cfa}
- If this handler is ever used it will be placed on top of the stack above the
- try statement. If the stack was not masked then the @throwResume@ in the
- handler would always be caught by the handler, leading to an infinite loop.
- Masking avoids this problem and other more complex versions of it involving
- multiple handlers and exception types.
-
- Other masking strategies could be used, such as masking the handlers that
- have caught an exception. This one was chosen because it creates a symmetry
- with termination (masked sections of the stack would be unwound with
- termination) and having only one pattern to learn is easier.
+ The rules can be remembered by thinking about what would be searched in
+ termination. So when a throw happens in a handler: a termination handler
+ skips everything from the original throw to the original catch, because
+ that part of the stack has been unwound; a resumption handler skips the
+ same section of stack, because it has been masked.
+ A throw in a default handler will perform the same search as the original
+ throw because, for termination, nothing has been unwound and, for
+ resumption, the mask will be the same.
+
+ The symmetry with termination is why this pattern was picked. Other
+ patterns, such as masking just the handlers that caught, also work but lack
+ the symmetry, which means there is more to remember.

  \section{Conditional Catch}
  Both termination and resumption handler clauses can be given an additional
  condition to further control which exceptions they handle:
  \begin{cfa}
- catch (EXCEPTION_TYPE * NAME ; @CONDITION@)
+ catch (EXCEPTION_TYPE * NAME ; CONDITION)
  \end{cfa}
  First, the same semantics is used to match the exception type. Second, if the
…
  reference all names in scope at the beginning of the try block and @NAME@
  introduced in the handler clause. If the condition is true, then the handler
- matches. Otherwise, the exception search continues at the next appropriate kind
- of handler clause in the try block.
+ matches. Otherwise, the exception search continues as if the exception type
+ did not match.
  \begin{cfa}
  try {
…
  remaining handlers in the current try statement.

- \section{Reraise}
- \color{red}{From Andrew: I recommend we talk about why the language doesn't
+ \section{Rethrowing}
+ \colour{red}{From Andrew: I recommend we talk about why the language doesn't
  have rethrows/reraises instead.}

- \label{s:Reraise}
+ \label{s:Rethrowing}
  Within the handler block or functions called from the handler block, it is
  possible to reraise the most recently caught exception with @throw@ or
- @throwResume@, respective.
+ @throwResume@, respectively.
  \begin{cfa}
  try {
  	...
  } catch( ... ) {
- 	... throw; // rethrow
+ 	... throw;
  } catchResume( ... ) {
- 	... throwResume; // reresume
+ 	... throwResume;
  }
  \end{cfa}
…

  \section{Finally Clauses}
- A @finally@ clause may be placed at the end of a @try@ statement.
+ Finally clauses are used to perform unconditional clean-up when leaving a
+ scope. They are placed at the end of a try statement:
  \begin{cfa}
  try {
…
  \end{cfa}
  The @FINALLY_BLOCK@ is executed when the try statement is removed from the
- stack, including when the @GUARDED_BLOCK@ or any handler clause finishes or
- during an unwind.
+ stack, including when the @GUARDED_BLOCK@ finishes, any termination handler
+ finishes, or during an unwind.
  The only time the block is not executed is if the program is exited before
- that happens.
+ the stack is unwound.

  Execution of the finally block should always finish, meaning control runs off
…
  @return@ that causes control to leave the finally block.
Other ways to leave 404 449 the finally block, such as a long jump or termination, are much harder to check, 405 and at best requiring additional run-time overhead, and so are discouraged. 450 and at best requiring additional run-time overhead, and so are merely 451 discouraged. 452 453 Not all languages with exceptions have finally clauses. Notably \Cpp does 454 without it as destructors serve a similar role. Although destructors and 455 finally clauses can be used in many of the same areas, they have their own 456 use cases, like top-level functions and lambda functions with closures. 457 Destructors take a bit more work to set up but are much easier to reuse, while 458 finally clauses are good for one-offs and can include local information. 406 459 407 460 \section{Cancellation} … 413 466 There is no special statement for starting a cancellation; instead the standard 414 467 library function @cancel_stack@ is called passing an exception. Unlike a 415 raise, this exception is not used in matching, only to pass information about468 throw, this exception is not used in matching, only to pass information about 416 469 the cause of the cancellation. 417 418 Handling of a cancellation depends on which stack is being cancelled. 470 (This also means matching cannot fail, so there is no default handler either.) 471 472 After @cancel_stack@ is called the exception is copied into the exception 473 handling mechanism's memory. Then the entirety of the current stack is 474 unwound. After that it depends on which stack is being cancelled. 419 475 \begin{description} 420 476 \item[Main Stack:] … 447 503 happen in an implicit join inside a destructor. So there is an error message 448 504 and an abort instead.
505 \todo{Perhaps have a more general discussion of unwind collisions before 506 this point.} 449 507 450 508 The recommended way to avoid the abort is to handle the initial resumption 455 513 \item[Coroutine Stack:] A coroutine stack is created for a @coroutine@ object 456 514 or object that satisfies the @is_coroutine@ trait. A coroutine only knows of 457 two other coroutines, its starter and its last resumer. The last resumer has 458 the tightest coupling to the coroutine it activated. Hence, cancellation of 459 the active coroutine is forwarded to the last resumer after the stack is 460 unwound, as the last resumer has the most precise knowledge about the current 461 execution. When the resumer restarts, it resumes exception 515 two other coroutines, its starter and its last resumer. Of the two, the last 516 resumer has the tightest coupling to the coroutine it activated and the most 517 up-to-date information. 518 519 Hence, cancellation of the active coroutine is forwarded to the last resumer 520 after the stack is unwound. When the resumer restarts, it resumes exception 462 521 @CoroutineCancelled@, which is polymorphic over the coroutine type and has a 463 522 pointer to the cancelled coroutine. -
doc/theses/andrew_beach_MMath/uw-ethesis.tex
r14533d4 rf6664bf2 108 108 % Removes large sections of the document. 109 109 \usepackage{comment} 110 % Adds todos (Must be included after comment.) 111 \usepackage{todonotes} 112 110 113 111 114 % Hyperlinks make it very easy to navigate an electronic document. … … 213 216 % Optional arguments do not work with pdf string. (Some fix-up required.) 214 217 \pdfstringdefDisableCommands{\def\Cpp{C++}} 218 219 % Colour text, formatted in LaTeX style instead of TeX style. 220 \newcommand*\colour[2]{{\color{#1}#2}} 215 221 \makeatother 216 222 -
doc/theses/thierry_delisle_PhD/thesis/Makefile
r14533d4 rf6664bf2 8 8 BibTeX = BIBINPUTS=${TeXLIB} && export BIBINPUTS && bibtex 9 9 10 MAKEFLAGS = --no-print-directory --silent10 MAKEFLAGS = --no-print-directory # --silent 11 11 VPATH = ${Build} ${Figures} 12 12 … … 52 52 # Directives # 53 53 54 .NOTPARALLEL: # cannot make in parallel 55 54 56 .PHONY : all clean # not file names 55 57 … … 83 85 ${LaTeX} $< 84 86 85 build/fairness.svg : fig/fairness.py | ${Build}86 python3 $< $@87 88 87 ## Define the default recipes. 89 88 … … 107 106 sed -i 's/$@/${Build}\/$@/g' ${Build}/$@_t 108 107 109 build/fairness.svg : fig/fairness.py | ${Build}110 python3 fig/fairness.py build/fairness.svg108 build/fairness.svg : fig/fairness.py | ${Build} 109 python3 $< $@ 111 110 112 111 ## pstex with inverted colors -
doc/theses/thierry_delisle_PhD/thesis/text/io.tex
r14533d4 rf6664bf2 1 1 \chapter{User Level \io} 2 As mentionned in Section~\ref{prev:io}, User-Level \io requires multiplexing the \io operations of many \glspl{thrd} onto fewer \glspl{proc} using asynchronous \io operations. Various operating systems offer various forms of asynchronous operations and as mentioned in Chapter~\ref{intro}, this work is exclusively focuesd on Linux.2 As mentioned in Section~\ref{prev:io}, User-Level \io requires multiplexing the \io operations of many \glspl{thrd} onto fewer \glspl{proc} using asynchronous \io operations. Different operating systems offer various forms of asynchronous operations and as mentioned in Chapter~\ref{intro}, this work is exclusively focused on the Linux operating-system. 3 3 4 4 \section{Kernel Interface} 5 Since this work fundamentally depends on operating system support, the first step of any design is to discuss the available interfaces and pick one (or more) as the foundations of the \io subsystem. 6 7 \subsection{\lstinline|O_NONBLOCK|} 8 In Linux, files can be opened with the flag @O_NONBLOCK@~\cite{MAN:open} (or @SO_NONBLOCK@~\cite{MAN:accept}, the equivalent for sockets) to use the file descriptors in ``nonblocking mode''. In this mode, ``Neither the open() nor any subsequent \io operations on the [opened file descriptor] will cause the calling 9 process to wait.'' This feature can be used as the foundation for the \io subsystem. However, for the subsystem to be able to block \glspl{thrd} until an operation completes, @O_NONBLOCK@ must be used in conjunction with a system call that monitors when a file descriptor becomes ready, \ie, the next \io operation on it will not cause the process to wait\footnote{In this context, ready means \emph{some} operation can be performed without blocking. It does not mean that the last operation that returns \lstinline|EAGAIN| will succeed on the next try.
A file that is ready to read but has only 1 byte available would be an example of this distinction.}.10 11 There are three options to monitor file descriptors in Linux\footnote{For simplicity, this section omits to mention \lstinline|pselect| and \lstinline|ppoll|. The difference between these system calls and \lstinline|select| and \lstinline|poll| respectively is not relevant for this discussion.}, @select@~\cite{MAN:select}, @poll@~\cite{MAN:poll} and @epoll@~\cite{MAN:epoll}. All three of these options offer a system call that blocks a \gls{kthrd} until at least one of many file descriptor becomes ready. The group of file descriptors being waited on is often referred to as the \newterm{interest set}. 12 13 \paragraph{\lstinline|select|} is the oldest of these options, it takes as an input a contiguous array of bits, where each bits represent a file descriptor of interest. On return, it modifies the set in place to identify which of the file descriptors changed status. This means that calling select in a loop requires re-initializing the array each time and the number of file descriptors supported has a hard limit. Another limit of @select@ is that once the call is started, the interest set can no longer be modified. Monitoring a new file descriptor generally requires aborting any in progress call to @select@\footnote{Starting a new call to \lstinline|select| in this case is possible but requires a distinct kernel thread, and as a result is not a acceptable multiplexing solution when the interest set is large and highly dynamic unless the number of parallel calls to select can be strictly bounded.}. 14 15 \paragraph{\lstinline|poll|} is an improvement over select, which removes the hard limit on the number of file descriptors and the need to re-initialize the input on every call. It works using an array of structures as an input rather than an array of bits, thus allowing a more compact input for small interest sets. 
Like @select@, @poll@ suffers from the limitation that the interest set cannot be changed while the call is blocked. 16 17 \paragraph{\lstinline|epoll|} further improves on these two functions, by allowing the interest set to be dynamically added to and removed from while a \gls{kthrd} is blocked on a call to @epoll@. This is done by creating an \emph{epoll instance} with a persistent intereset set and that is used across multiple calls. This advantage significantly reduces synchronization overhead on the part of the caller (in this case the \io subsystem) since the interest set can be modified when adding or removing file descriptors without having to synchronize with other \glspl{kthrd} potentially calling @epoll@. 18 19 However, all three of these system calls suffer from generality problems to some extent. The man page for @O_NONBLOCK@ mentions that ``[@O_NONBLOCK@] has no effect for regular files and block devices'', which means none of these three system calls are viable multiplexing strategies for these types of \io operations. Furthermore, @epoll@ has been shown to have some problems with pipes and ttys\cit{Peter's examples in some fashion}. Finally, none of these are useful solutions for multiplexing \io operations that do not have a corresponding file descriptor and can be awkward for operations using multiple file descriptors. 20 21 \subsection{The POSIX asynchronous I/O (AIO)} 22 An alternative to using @O_NONBLOCK@ is to use the AIO interface. Its interface lets programmers enqueue operations to be performed asynchronously by the kernel. Completions of these operations can be communicated in various ways, either by sending a Linux signal, spawning a new \gls{kthrd} or by polling for completion of one or more operation. For the purpose multiplexing operations, spawning a new \gls{kthrd} is counter-productive but a related solution is discussed in Section~\ref{io:morethreads}. 
Since using interrupts handlers can also lead to fairly complicated interactions between subsystems, I will concentrate on the different polling methods. AIO only supports read and write operations to file descriptors and those do not have the same limitation as @O_NONBLOCK@, \ie, the file descriptors can be regular files and blocked devices. It also supports batching more than one of these operations in a single system call. 23 24 AIO offers two different approach to polling. @aio_error@ can be used as a spinning form of polling, returning @EINPROGRESS@ until the operation is completed, and @aio_suspend@ can be used similarly to @select@, @poll@ or @epoll@, to wait until one or more requests have completed. For the purpose of \io multiplexing, @aio_suspend@ is the intended interface. Even if AIO requests can be submitted concurrently, @aio_suspend@ suffers from the same limitation as @select@ and @poll@, \ie, the interest set cannot be dynamically changed while a call to @aio_suspend@ is in progress. Unlike @select@ and @poll@ however, it also suffers from the limitation that it does not specify which requests have completed, meaning programmers then have to poll each request in the interest set using @aio_error@ to identify which requests have completed. This means that, like @select@ and @poll@ but not @epoll@, the time needed to examine polling results increases based in the total number of requests monitored, not the number of completed requests. 25 26 AIO does not seem to be a particularly popular interface, which I believe is in part due to this less than ideal polling interface. Linus Torvalds talks about this interface as follows:5 Since this work fundamentally depends on operating-system support, the first step of any design is to discuss the available interfaces and pick one (or more) as the foundations of the non-blocking \io subsystem. 
6 7 \subsection{\lstinline{O_NONBLOCK}} 8 In Linux, files can be opened with the flag @O_NONBLOCK@~\cite{MAN:open} (or @SO_NONBLOCK@~\cite{MAN:accept}, the equivalent for sockets) to use the file descriptors in ``nonblocking mode''. In this mode, ``Neither the @open()@ nor any subsequent \io operations on the [opened file descriptor] will cause the calling 9 process to wait''~\cite{MAN:open}. This feature can be used as the foundation for the non-blocking \io subsystem. However, for the subsystem to know when an \io operation completes, @O_NONBLOCK@ must be used in conjunction with a system call that monitors when a file descriptor becomes ready, \ie, the next \io operation on it does not cause the process to wait\footnote{In this context, ready means \emph{some} operation can be performed without blocking. It does not mean an operation returning \lstinline{EAGAIN} succeeds on the next try. For example, a ready read may only return a subset of bytes and the read must be issued again for the remaining bytes, at which point it may return \lstinline{EAGAIN}.}. 10 This mechanism is also crucial in determining when all \glspl{thrd} are blocked and the application \glspl{kthrd} can now block. 11 12 There are three options to monitor file descriptors in Linux\footnote{For simplicity, this section omits \lstinline{pselect} and \lstinline{ppoll}. The difference between these system calls and \lstinline{select} and \lstinline{poll}, respectively, is not relevant for this discussion.}, @select@~\cite{MAN:select}, @poll@~\cite{MAN:poll} and @epoll@~\cite{MAN:epoll}. All three of these options offer a system call that blocks a \gls{kthrd} until at least one of many file descriptors becomes ready. The group of file descriptors being waited on is called the \newterm{interest set}. 13 14 \paragraph{\lstinline{select}} is the oldest of these options; it takes as an input a contiguous array of bits, where each bit represents a file descriptor of interest.
On return, it modifies the set in place to identify which of the file descriptors changed status. This destructive change means that calling select in a loop requires re-initializing the array each time and the number of file descriptors supported has a hard limit. Another limit of @select@ is that once the call is started, the interest set can no longer be modified. Monitoring a new file descriptor generally requires aborting any in progress call to @select@\footnote{Starting a new call to \lstinline{select} is possible but requires a distinct kernel thread, and as a result is not an acceptable multiplexing solution when the interest set is large and highly dynamic unless the number of parallel calls to \lstinline{select} can be strictly bounded.}. 15 16 \paragraph{\lstinline{poll}} is an improvement over select, which removes the hard limit on the number of file descriptors and the need to re-initialize the input on every call. It works using an array of structures as an input rather than an array of bits, thus allowing a more compact input for small interest sets. Like @select@, @poll@ suffers from the limitation that the interest set cannot be changed while the call is blocked. 17 18 \paragraph{\lstinline{epoll}} further improves these two functions by allowing the interest set to be dynamically added to and removed from while a \gls{kthrd} is blocked on an @epoll@ call. This dynamic capability is accomplished by creating an \emph{epoll instance} with a persistent interest set, which is used across multiple calls. This capability significantly reduces synchronization overhead on the part of the caller (in this case the \io subsystem), since the interest set can be modified when adding or removing file descriptors without having to synchronize with other \glspl{kthrd} potentially calling @epoll@. 19 20 However, all three of these system calls have limitations. 
The @man@ page for @O_NONBLOCK@ mentions that ``[@O_NONBLOCK@] has no effect for regular files and block devices'', which means none of these three system calls are viable multiplexing strategies for these types of \io operations. Furthermore, @epoll@ has been shown to have problems with pipes and ttys~\cit{Peter's examples in some fashion}. Finally, none of these are useful solutions for multiplexing \io operations that do not have a corresponding file descriptor and can be awkward for operations using multiple file descriptors. 21 22 \subsection{POSIX asynchronous I/O (AIO)} 23 An alternative to @O_NONBLOCK@ is the AIO interface. Its interface lets programmers enqueue operations to be performed asynchronously by the kernel. Completions of these operations can be communicated in various ways: either by spawning a new \gls{kthrd}, sending a Linux signal, or by polling for completion of one or more operations. For this work, spawning a new \gls{kthrd} is counter-productive but a related solution is discussed in Section~\ref{io:morethreads}. Using interrupt handlers can also lead to fairly complicated interactions between subsystems. That leaves polling for completion, which is similar to the previous system calls. While AIO only supports read and write operations to file descriptors, it does not have the same limitation as @O_NONBLOCK@, \ie, the file descriptors can be regular files and block devices. It also supports batching multiple operations in a single system call. 24 25 AIO offers two different approaches to polling: @aio_error@ can be used as a spinning form of polling, returning @EINPROGRESS@ until the operation is completed, and @aio_suspend@ can be used similarly to @select@, @poll@ or @epoll@, to wait until one or more requests have completed. For the purpose of \io multiplexing, @aio_suspend@ is the best interface.
However, even if AIO requests can be submitted concurrently, @aio_suspend@ suffers from the same limitation as @select@ and @poll@, \ie, the interest set cannot be dynamically changed while a call to @aio_suspend@ is in progress. AIO also suffers from the limitation of not identifying which requests have completed, \ie, programmers have to poll each request in the interest set using @aio_error@ to identify the completed requests. This limitation means that, like @select@ and @poll@ but not @epoll@, the time needed to examine polling results increases based on the total number of requests monitored, not the number of completed requests. 26 Finally, AIO does not seem to be a popular interface, which I believe is due in part to this poor polling interface. Linus Torvalds talks about this interface as follows: 27 27 28 28 \begin{displayquote} 29 AIO is a horrible ad-hoc design, with the main excuse being "other, 29 AIO is a horrible ad-hoc design, with the main excuse being ``other, 30 30 less gifted people, made that design, and we are implementing it for 31 31 compatibility because database people - who seldom have any shred of 32 taste - actually use it ". 32 taste - actually use it''. 33 33 34 34 But AIO was always really really ugly. … 39 39 \end{displayquote} 40 40 41 Interestingly, in this e-mail answer, Linus goes on to describe 41 Interestingly, in this e-mail, Linus goes on to describe 42 42 ``a true \textit{asynchronous system call} interface'' 43 43 that does … 47 47 This description is actually quite close to the interface described in the next section. 48 48 49 \subsection{\lstinline|io_uring|}50 A very recent addition to Linux, @io_uring@ \cite{MAN:io_uring} is a framework that aims to solve many of the problems listed with the above mentioned interfaces. Like AIO, it represents \io operations as entries added on a queue. But like @epoll@, new requests can be submitted while a blocking call waiting for requests to complete is already in progress.
The @io_uring@ interface uses two ring buffers (referred to simply as rings) as its core, a submit ring to which programmers push \io requests and a completion buffer which programmers poll for completion.51 52 One of the big advantages over the interfaces listed above is that it also supports a much wider range of operations. In addition to supporting reads and writes to any file descriptor like AIO, it supports other operations like @open@, @close@, @fsync@, @accept@, @connect@, @send@, @recv@, @splice@, \etc.53 54 On top of these, @io_uring@ adds many ``bells and whistles'' like avoiding copies between the kernel and user-space with shared memory, allowing different mechanisms to communicate with device drivers and supporting chains of requests, \ie, requests that automatically trigger followup requests on completion.49 \subsection{\lstinline{io_uring}} 50 A very recent addition to Linux, @io_uring@~\cite{MAN:io_uring}, is a framework that aims to solve many of the problems listed in the above interfaces. Like AIO, it represents \io operations as entries added to a queue. But like @epoll@, new requests can be submitted while a blocking call waiting for requests to complete is already in progress. The @io_uring@ interface uses two ring buffers (referred to simply as rings) at its core: a submit ring to which programmers push \io requests and a completion ring from which programmers poll for completion. 51 52 One of the big advantages over the prior interfaces is that @io_uring@ also supports a much wider range of operations. In addition to supporting reads and writes to any file descriptor like AIO, it supports other operations like @open@, @close@, @fsync@, @accept@, @connect@, @send@, @recv@, @splice@, \etc.
53 54 On top of these, @io_uring@ adds many extras like avoiding copies between the kernel and user-space using shared memory, allowing different mechanisms to communicate with device drivers, and supporting chains of requests, \ie, requests that automatically trigger followup requests on completion. 55 55 56 56 \subsection{Extra Kernel Threads}\label{io:morethreads} 57 Finally, if the operating system does not offer any satisfying forms of asynchronous \io operations, a solution is to fake it by creating a pool of \glspl{kthrd} and delegating operations to them in order to avoid blocking \glspl{proc}. This is a compromise on multiplexing. In the worst case, where all \glspl{thrd} are consistently blocking on \io, it devolves into 1-to-1 threading. However, regardless of the frequency of \io operations, it achieves the fundamental goal of not blocking \glspl{proc} when \glspl{thrd} are ready to run. This approach is used by languages like Go\cit{Go} and frameworks like libuv\cit{libuv}, since it has the advantage that it can easily be used across multiple operating systems. This advantage is especially relevant for languages like Go, which offer an homogenous \glsxtrshort{api} across all platforms. As opposed to C, which has a very limited standard api for \io, \eg, the C standard library has no networking. 57 Finally, if the operating system does not offer a satisfactory form of asynchronous \io operations, an ad-hoc solution is to create a pool of \glspl{kthrd} and delegate operations to it to avoid blocking \glspl{proc}, which is a compromise for multiplexing. In the worst case, where all \glspl{thrd} are consistently blocking on \io, it devolves into 1-to-1 threading. However, regardless of the frequency of \io operations, it achieves the fundamental goal of not blocking \glspl{proc} when \glspl{thrd} are ready to run.
This approach is used by languages like Go\cit{Go} and frameworks like libuv\cit{libuv}, since it has the advantage that it can easily be used across multiple operating systems. This advantage is especially relevant for languages like Go, which offer a homogeneous \glsxtrshort{api} across all platforms. As opposed to C, which has a very limited standard API for \io, \eg, the C standard library has no networking. 58 58 59 59 \subsection{Discussion} 60 These options effectively fall into two broad camps of solutions, waiting for \io to be ready versus waiting for \io to be completed. All operating systems that support asynchronous \io must offer an interface along one of these lines, but the details can vary drastically. For example, Free BSD offers @kqueue@~\cite{MAN:bsd/kqueue} which behaves similarly to @epoll@ but with some small quality of life improvements, while Windows (Win32)~\cit{https://docs.microsoft.com/en-us/windows/win32/fileio/synchronous-and-asynchronous-i-o} offers ``overlapped I/O'' which handles submissions similarly to @O_NONBLOCK@, with extra flags on the synchronous system call, but waits for completion events, similarly to @io_uring@. 61 62 For this project, I have chosen to use @io_uring@, in large parts due to its generality. While @epoll@ has been shown to be a good solution to socket \io (\cite{DBLP:journals/pomacs/KarstenB20}), @io_uring@'s transparent support for files, pipes and more complex operations, like @splice@ and @tee@, make it a better choice as the foundation for a general \io subsystem.60 These options effectively fall into two broad camps: waiting for \io to be ready versus waiting for \io to complete. All operating systems that support asynchronous \io must offer an interface along one of these lines, but the details vary drastically.
For example, FreeBSD offers @kqueue@~\cite{MAN:bsd/kqueue}, which behaves similarly to @epoll@, but with some small quality of use improvements, while Windows (Win32)~\cit{https://docs.microsoft.com/en-us/windows/win32/fileio/synchronous-and-asynchronous-i-o} offers ``overlapped I/O'', which handles submissions similarly to @O_NONBLOCK@ with extra flags on the synchronous system call, but waits for completion events, similarly to @io_uring@. 61 62 For this project, I selected @io_uring@, in large part because of its generality. While @epoll@ has been shown to be a good solution for socket \io (\cite{DBLP:journals/pomacs/KarstenB20}), @io_uring@'s transparent support for files, pipes, and more complex operations, like @splice@ and @tee@, makes it a better choice as the foundation for a general \io subsystem. 63 63 64 64 \section{Event-Engine} 65 66 The event engines reponsibility is to use the kernel interface to multiplex many \io operations onto few \glspl{kthrd}. In concrete terms, this means that \glspl{thrd} enter the engine through an interface, the event engines then starts the operation and parks the calling \glspl{thrd}, returning control to the \gls{proc}. The parked \glspl{thrd} are then rescheduled by the event engine once the desired operation has completed. 65 An event engine's responsibility is to use the kernel interface to multiplex many \io operations onto few \glspl{kthrd}. In concrete terms, this means \glspl{thrd} enter the engine through an interface, the event engine then starts the operation and parks the calling \glspl{thrd}, returning control to the \gls{proc}. The parked \glspl{thrd} are then rescheduled by the event engine once the desired operation has completed.
66 67 \subsection{\lstinline{io_uring} in depth} 68 Before going into details on the design of my event engine, more details on @io_uring@ usage are presented, each important in the design of the engine. 69 Figure~\ref{fig:iouring} shows an overview of an @io_uring@ instance. 70 Two ring buffers are used to communicate with the kernel: one for submissions~(left) and one for completions~(right). 71 The submission ring contains entries, \newterm{Submit Queue Entries} (SQE), produced (appended) by the application when an operation starts and then consumed by the kernel. 72 The completion ring contains entries, \newterm{Completion Queue Entries} (CQE), produced (appended) by the kernel when an operation completes and then consumed by the application. 73 The submission ring contains indexes into the SQE array (denoted \emph{S}) containing entries describing the I/O operation to start; 74 the completion ring contains entries for the completed I/O operation. 75 Multiple @io_uring@ instances can be created, in which case they each have a copy of the data structures in the figure. 70 76 71 77 \begin{figure} 72 78 \centering 73 79 \input{io_uring.pstex_t} 74 \caption[Overview of \lstinline|io_uring|]{Overview of \lstinline|io_uring| \smallskip\newline Two ring buffer are used to communicate with the kernel, one for completions~(right) and one for submissions~(left). The completion ring contains entries, \newterm{CQE}s: Completion Queue Entries, that are produced by the kernel when an operation completes and then consumed by the application. On the other hand, the application produces \newterm{SQE}s: Submit Queue Entries, which it appends to the submission ring for the kernel to consume. 
Unlike the completion ring, the submission ring does not contain the entries directly, it indexes into the SQE array (denoted \emph{S}) instead.} 80 \caption{Overview of \lstinline{io_uring}} 81 % \caption[Overview of \lstinline{io_uring}]{Overview of \lstinline{io_uring} \smallskip\newline Two ring buffer are used to communicate with the kernel, one for completions~(right) and one for submissions~(left). The completion ring contains entries, \newterm{CQE}s: Completion Queue Entries, that are produced by the kernel when an operation completes and then consumed by the application. On the other hand, the application produces \newterm{SQE}s: Submit Queue Entries, which it appends to the submission ring for the kernel to consume. Unlike the completion ring, the submission ring does not contain the entries directly, it indexes into the SQE array (denoted \emph{S}) instead.} 75 82 \label{fig:iouring} 76 83 \end{figure} 77 84 78 Figure~\ref{fig:iouring} shows an overview of an @io_uring@ instance. Multiple @io_uring@ instances can be created, in which case they each have a copy of the data structures in the figure. New \io operations are submitted to the kernel following 4 steps which use the components shown in the figure. 79 80 \paragraph{First} an @sqe@ must be allocated from the pre-allocated array (denoted \emph{S} in Figure~\ref{fig:iouring}). This array is created at the same time as the @io_uring@ instance, is in kernel-locked memory, which means it is both visible by the kernel and the application, and has a fixed size determined at creation. How these entries are allocated is not important for the functionning of @io_uring@, the only requirement is that no entry is reused before the kernel has consumed it. 81 82 \paragraph{Secondly} the @sqe@ must be filled according to the desired operation. 
This step is straight forward, the only detail worth mentionning is that @sqe@s have a @user_data@ field that must be filled in order to match submission and completion entries. 83 84 \paragraph{Thirdly} the @sqe@ must be submitted to the submission ring, this requires appending the index of the @sqe@ to the ring following regular ring buffer steps: \lstinline|{ buffer[head] = item; head++ }|. Since the head is visible to the kernel, some memory barriers may be required to prevent the compiler from reordering these operations. Since the submission ring is a regular ring buffer, more than one @sqe@ can be added at once and the head can be updated only after the entire batch has been updated. 85 86 \paragraph{Finally} the kernel must be notified of the change to the ring using the system call @io_uring_enter@. The number of elements appended to the submission ring is passed as a parameter and the number of elements consumed is returned. The @io_uring@ instance can be constructed so that this step is not required, but this requires elevated privilege and early version of @io_uring@ had additionnal restrictions. 87 88 The completion side is simpler, applications call @io_uring_enter@ with the flag @IORING_ENTER_GETEVENTS@ to wait on a desired number of operations to complete. The same call can be used to both submit @sqe@s and wait for operations to complete. When operations do complete the kernel appends a @cqe@ to the completion ring and advances the head of the ring. Each @cqe@ contains the result of the operation as well as a copy of the @user_data@ field of the @sqe@ that triggered the operation. It is not necessary to call @io_uring_enter@ to get new events, the kernel can directly modify the completion ring, the system call is only needed if the application wants to block waiting on operations to complete. 89 90 The @io_uring_enter@ system call is protected by a lock inside the kernel. 
This means that concurrent call to @io_uring_enter@ using the same instance are possible, but there is can be no performance gained from parallel calls to @io_uring_enter@. It is possible to do the first three submission steps in parallel, however, doing so requires careful synchronization. 91 92 @io_uring@ also introduces some constraints on what the number of operations that can be ``in flight'' at the same time. Obviously, @sqe@s are allocated from a fixed-size array, meaning that there is a hard limit to how many @sqe@s can be submitted at once. In addition, the @io_uring_enter@ system call can fail because ``The kernel [...] ran out of resources to handle [a request]'' or ``The application is attempting to overcommit the number of requests it can have pending.''. This requirement means that it can be required to handle bursts of \io requests by holding back some of the requests so they can be submitted at a later time. 85 New \io operations are submitted to the kernel following 4 steps, which use the components shown in the figure. 86 \begin{enumerate} 87 \item 88 An SQE is allocated from the pre-allocated array (denoted \emph{S} in Figure~\ref{fig:iouring}). This array is created at the same time as the @io_uring@ instance, is in kernel-locked memory visible by both the kernel and the application, and has a fixed size determined at creation. How these entries are allocated is not important for the functioning of @io_uring@; the only requirement is that no entry is reused before the kernel has consumed it. 89 \item 90 The SQE is filled according to the desired operation. This step is straightforward; the only detail worth mentioning is that SQEs have a @user_data@ field that must be filled in order to match submission and completion entries. 91 \item 92 The SQE is submitted to the submission ring by appending the index of the SQE to the ring following regular ring buffer steps: \lstinline{buffer[head] = item; head++}.
Since the head is visible to the kernel, some memory barriers may be required to prevent the compiler from reordering these operations. Since the submission ring is a regular ring buffer, more than one SQE can be added at once and the head is updated only after all entries are updated. 93 \item 94 The kernel is notified of the change to the ring using the system call @io_uring_enter@. The number of elements appended to the submission ring is passed as a parameter and the number of elements consumed is returned. The @io_uring@ instance can be constructed so this step is not required, but this requires elevated privilege.% and an early version of @io_uring@ had additional restrictions. 95 \end{enumerate} 96 97 \begin{sloppypar} 98 The completion side is simpler: applications call @io_uring_enter@ with the flag @IORING_ENTER_GETEVENTS@ to wait on a desired number of operations to complete. The same call can be used to both submit SQEs and wait for operations to complete. When operations do complete, the kernel appends a CQE to the completion ring and advances the head of the ring. Each CQE contains the result of the operation as well as a copy of the @user_data@ field of the SQE that triggered the operation. It is not necessary to call @io_uring_enter@ to get new events because the kernel can directly modify the completion ring. The system call is only needed if the application wants to block waiting for operations to complete. 99 \end{sloppypar} 100 101 The @io_uring_enter@ system call is protected by a lock inside the kernel. This protection means that concurrent calls to @io_uring_enter@ using the same instance are possible, but there is no performance gained from parallel calls to @io_uring_enter@. It is possible to do the first three submission steps in parallel; however, doing so requires careful synchronization. 102 103 @io_uring@ also introduces constraints on the number of simultaneous operations that can be ``in flight''.
Obviously, SQEs are allocated from a fixed-size array, meaning that there is a hard limit to how many SQEs can be submitted at once. In addition, the @io_uring_enter@ system call can fail because ``The kernel [...] ran out of resources to handle [a request]'' or ``The application is attempting to overcommit the number of requests it can have pending.''. This restriction means \io request bursts may have to be subdivided and submitted in chunks at a later time. 93 104 94 105 \subsection{Multiplexing \io: Submission} 95 The submission side is the most complicated aspect of @io_uring@ and the completion side effectively follows from the design decisions made in the submission side. 96 97 While it is possible to do the first steps of submission in parallel, the duration of the system call scales with number of entries submitted. The consequence of this is that how much parallelism can be used to prepare submissions for the next system call is limited. Beyond this limit, the length of the system call will be the throughput limiting factor. I have concluded from early experiments that preparing submissions seems to take about as long as the system call itself, which means that with a single @io_uring@ instance, there is no benefit in terms of \io throughput to having more than two \glspl{hthrd}. Therefore the design of the submission engine must manage multiple instances of @io_uring@ running in parallel, effectively sharding @io_uring@ instances. Similarly to scheduling, this sharding can be done privately, \ie, one instance per \glspl{proc}, in decoupled pools, \ie, a pool of \glspl{proc} use a pool of @io_uring@ instances without one-to-one coupling between any given instance and any given \gls{proc}, or some mix of the two. Since completions are sent to the instance where requests were submitted, all instances with pending operations must be polled continously\footnote{As will be described in Chapter~\ref{practice}, this does not translate into constant cpu usage.}. 
106 The submission side is the most complicated aspect of @io_uring@ and the completion side effectively follows from the design decisions made in the submission side. While it is possible to do the first steps of submission in parallel, the duration of the system call scales with number of entries submitted. The consequence is that the amount of parallelism used to prepare submissions for the next system call is limited. 107 Beyond this limit, the length of the system call is the throughput limiting factor. I concluded from early experiments that preparing submissions seems to take about as long as the system call itself, which means that with a single @io_uring@ instance, there is no benefit in terms of \io throughput to having more than two \glspl{hthrd}. Therefore the design of the submission engine must manage multiple instances of @io_uring@ running in parallel, effectively sharding @io_uring@ instances. Similarly to scheduling, this sharding can be done privately, \ie, one instance per \glspl{proc}, in decoupled pools, \ie, a pool of \glspl{proc} use a pool of @io_uring@ instances without one-to-one coupling between any given instance and any given \gls{proc}, or some mix of the two. Since completions are sent to the instance where requests were submitted, all instances with pending operations must be polled continously\footnote{As will be described in Chapter~\ref{practice}, this does not translate into constant cpu usage.}. 98 108 99 109 \subsubsection{Shared Instances} … … 104 114 Allocation failures need to be pushed up to the routing algorithm: \glspl{thrd} attempting \io operations must not be directed to @io_uring@ instances without sufficient @sqe@s available. Furthermore, the routing algorithm should block operations up-front if none of the instances have available @sqe@s. 105 115 106 Once an @sqe@ is allocated, \glspl{thrd} can fill them normally, they simply need to keep trac of the @sqe@ index and which instance it belongs to. 
107 108 Once an @sqe@ is filled in, what needs to happen is that the @sqe@ must be added to the submission ring buffer, an operation that is not thread-safe on itself, and the kernel must be notified using the @io_uring_enter@ system call. The submission ring buffer is the same size as the pre-allocated @sqe@ buffer, therefore pushing to the ring buffer cannot fail\footnote{This is because it is invalid to have the same \lstinline|sqe| multiple times in the ring buffer.}. However, as mentioned, the system call itself can fail with the expectation that it will be retried once some of the already submitted operations complete. Since multiple @sqe@s can be submitted to the kernel at once, it is important to strike a balance between batching and latency. Operations that are ready to be submitted should be batched together in few system calls, but at the same time, operations should not be left pending for long period of times before being submitted. This can be handled by either designating one of the submitting \glspl{thrd} as the being responsible for the system call for the current batch of @sqe@s or by having some other party regularly submitting all ready @sqe@s, \eg, the poller \gls{thrd} mentionned later in this section. 109 110 In the case of designating a \gls{thrd}, ideally, when multiple \glspl{thrd} attempt to submit operations to the same @io_uring@ instance, all requests would be batched together and one of the \glspl{thrd} would do the system call on behalf of the others, referred to as the \newterm{submitter}. In practice however, it is important that the \io requests are not left pending indefinately and as such, it may be required to have a current submitter and a next submitter. Indeed, as long as there is a ``next'' submitter, \glspl{thrd} submitting new \io requests can move on, knowing that some future system call will include their request. Once the system call is done, the submitter must also free @sqe@s so that the allocator can reused them. 
111 112 Finally, the completion side is much simpler since the @io_uring@ system call enforces a natural synchronization point. Polling simply needs to regularly do the system call, go through the produced @cqe@s and communicate the result back to the originating \glspl{thrd}. Since @cqe@s only own a signed 32 bit result, in addition to the copy of the @user_data@ field, all that is needed to communicate the result is a simple future~\cite{wiki:future}. If the submission side does not designate submitters, polling can also submit all @sqe@s as it is polling events. A simple approach to polling is to allocate a \gls{thrd} per @io_uring@ instance and simply let the poller \glspl{thrd} poll their respective instances when scheduled. This design is especially convinient for reasons explained in Chapter~\ref{practice}. 113 116 Once an SQE is allocated, \glspl{thrd} can fill it normally; they simply need to keep track of the SQE index and which instance it belongs to. 117 118 Once an SQE is filled in, it must be added to the submission ring buffer, an operation that is not thread-safe in itself, and the kernel must be notified using the @io_uring_enter@ system call. The submission ring buffer is the same size as the pre-allocated SQE buffer, therefore pushing to the ring buffer cannot fail\footnote{This is because it is invalid to have the same \lstinline{sqe} multiple times in the ring buffer.}. However, as mentioned, the system call itself can fail with the expectation that it will be retried once some of the already submitted operations complete. Since multiple SQEs can be submitted to the kernel at once, it is important to strike a balance between batching and latency. Operations that are ready to be submitted should be batched together in a few system calls, but at the same time, operations should not be left pending for long periods of time before being submitted.
This can be handled by either designating one of the submitting \glspl{thrd} as being responsible for the system call for the current batch of SQEs or by having some other party regularly submitting all ready SQEs, \eg, the poller \gls{thrd} mentioned later in this section. 119 120 In the case of designating a \gls{thrd}, ideally, when multiple \glspl{thrd} attempt to submit operations to the same @io_uring@ instance, all requests would be batched together and one of the \glspl{thrd} would do the system call on behalf of the others, referred to as the \newterm{submitter}. In practice, however, it is important that the \io requests are not left pending indefinitely and as such, it may be required to have a current submitter and a next submitter. Indeed, as long as there is a ``next'' submitter, \glspl{thrd} submitting new \io requests can move on, knowing that some future system call will include their request. Once the system call is done, the submitter must also free SQEs so that the allocator can reuse them. 121 122 Finally, the completion side is much simpler since the @io_uring@ system call enforces a natural synchronization point. Polling simply needs to regularly do the system call, go through the produced CQEs and communicate the result back to the originating \glspl{thrd}. Since CQEs contain only a signed 32-bit result, in addition to the copy of the @user_data@ field, all that is needed to communicate the result is a simple future~\cite{wiki:future}. If the submission side does not designate submitters, polling can also submit all SQEs as it is polling events. A simple approach to polling is to allocate a \gls{thrd} per @io_uring@ instance and simply let the poller \glspl{thrd} poll their respective instances when scheduled. This design is especially convenient for reasons explained in Chapter~\ref{practice}. 123 124 <<<<<<< HEAD 114 125 With this pool of instances approach, the big advantage is that it is fairly flexible.
It does not impose restrictions on what \glspl{thrd} submitting \io operations can and cannot do between allocations and submissions. It can also gracefully handle running out of resources, @sqe@s, or the kernel returning @EBUSY@. The downside to this is that many of the steps used for submitting need complex synchronization to work properly. The routing and allocation algorithm needs to keep track of which ring instances have available @sqe@s, block incoming requests if no instance is available, prevent barging if \glspl{thrd} are already queued up waiting for @sqe@s and handle @sqe@s being freed. The submission side needs to safely append @sqe@s to the ring buffer, make sure no @sqe@ is dropped or left pending forever, notify the allocation side when @sqe@s can be reused and handle the kernel returning @EBUSY@. All this synchronization may have a significant cost and, compared to the next approach presented, this synchronization is entirely overhead. 115 126 116 127 \subsubsection{Private Instances} 117 128 Another approach is to simply create one ring instance per \gls{proc}. This alleviates the need for synchronization on the submissions, requiring only that \glspl{thrd} are not interrupted in between two submission steps. This is effectively the same requirement as using @thread_local@ variables. Since @sqe@s that are allocated must be submitted to the same ring, on the same \gls{proc}, this effectively forces the application to submit @sqe@s in allocation order\footnote{The actual requirement is that \glspl{thrd} cannot context switch between allocation and submission. This requirement means that from the subsystem's point of view, the allocation and submission are sequential. To remove this requirement, a \gls{thrd} would need the ability to ``yield to a specific \gls{proc}'', \ie, park with the promise that it will be run next on a specific \gls{proc}, the \gls{proc} attached to the correct ring.}, greatly simplifying both allocation and submission.
In this design, allocation and submission form a partitioned ring buffer as shown in Figure~\ref{fig:pring}. Once added to the ring buffer, the attached \gls{proc} has a significant amount of flexibility with regards to when to do the system call. Possible options are: when the \gls{proc} runs out of \glspl{thrd} to run, after running a given number of \glspl{thrd}, etc. 129 ======= 130 With this pool of instances approach, the big advantage is that it is fairly flexible. It does not impose restrictions on what \glspl{thrd} submitting \io operations can and cannot do between allocations and submissions. It can also gracefully handle running out of resources, SQEs, or the kernel returning @EBUSY@. The downside to this is that many of the steps used for submitting need complex synchronization to work properly. The routing and allocation algorithm needs to keep track of which ring instances have available SQEs, block incoming requests if no instance is available, prevent barging if \glspl{thrd} are already queued up waiting for SQEs and handle SQEs being freed. The submission side needs to safely append SQEs to the ring buffer, make sure no SQE is dropped or left pending forever, notify the allocation side when SQEs can be reused and handle the kernel returning @EBUSY@. Sharding the @io_uring@ instances should alleviate much of the contention caused by this, but all this synchronization may still have non-zero cost. 131 132 \subsubsection{Private Instances} 133 Another approach is to simply create one ring instance per \gls{proc}. This alleviates the need for synchronization on the submissions, requiring only that \glspl{thrd} are not interrupted in between two submission steps. This is effectively the same requirement as using @thread_local@ variables.
Since SQEs that are allocated must be submitted to the same ring, on the same \gls{proc}, this effectively forces the application to submit SQEs in allocation order\footnote{The actual requirement is that \glspl{thrd} cannot context switch between allocation and submission. This requirement means that from the subsystem's point of view, the allocation and submission are sequential. To remove this requirement, a \gls{thrd} would need the ability to ``yield to a specific \gls{proc}'', \ie, park with the promise that it will be run next on a specific \gls{proc}, the \gls{proc} attached to the correct ring. This is not a current or planned feature of \CFA.}, greatly simplifying both allocation and submission. In this design, allocation and submission form a ring partitioned ring buffer as shown in Figure~\ref{fig:pring}. Once added to the ring buffer, the attached \gls{proc} has a significant amount of flexibility with regards to when to do the system call. Possible options are: when the \gls{proc} runs out of \glspl{thrd} to run, after running a given number of threads \glspl{thrd}, etc. 134 >>>>>>> 1830a8657cb302a89a7ca045bee06baa48b18101 118 135 119 136 \begin{figure} 120 137 \centering 121 138 \input{pivot_ring.pstex_t} 122 \caption[Partition ned ring buffer]{Partitionned ring buffer \smallskip\newline Allocated sqes are appending to the first partition. When submitting, the partition is simply advanced to include all the sqes that should be submitted. The kernel considers the partition as the head of the ring.}139 \caption[Partitioned ring buffer]{Partitioned ring buffer \smallskip\newline Allocated sqes are appending to the first partition. When submitting, the partition is simply advanced to include all the sqes that should be submitted. 
The kernel considers the partition as the head of the ring.} 123 140 \label{fig:pring} 124 141 \end{figure} 125 142 143 <<<<<<< HEAD 126 144 This approach has the advantage that it does not require much of the synchronization needed in the shared approach. This comes at the cost that \glspl{thrd} submitting \io operations have less flexibility, they cannot park or yield, and several exceptional cases are handled poorly. Instances running out of @sqe@s cannot run \glspl{thrd} wanting to do \io operations, in such a case the \gls{thrd} needs to be moved to a different \gls{proc}, the only current way of achieving this would be to @yield()@ hoping to be scheduled on a different \gls{proc}, which is not guaranteed. 127 145 … 190 208 % if cltr.io.flag || proc.io != alloc.io || proc.io->flag: 191 209 % return submit_slow(cltr.io) 210 ======= 211 This approach has the advantage that it does not require much of the synchronization needed in the shared approach. This comes at the cost that \glspl{thrd} submitting \io operations have less flexibility: they cannot park or yield, and several exceptional cases are handled poorly. Instances running out of SQEs cannot run \glspl{thrd} wanting to do \io operations; in such a case, the \gls{thrd} needs to be moved to a different \gls{proc}, and the only current way of achieving this is to @yield()@, hoping to be scheduled on a different \gls{proc}, which is not guaranteed. Another problematic case is that \glspl{thrd} that do not park for long periods of time will delay the submission of any SQE not already submitted. This issue is similar to the fairness issues of work-stealing schedulers mentioned in the previous chapter. 212 >>>>>>> 1830a8657cb302a89a7ca045bee06baa48b18101 192 213 193 214 % submit_fast(proc.io, a) … 214 235 \subsection{Asynchronous Extension} 215 236 216 \subsection{Interface directly to \lstinline |io_uring|} 237 \subsection{Interface directly to \lstinline{io_uring}} -
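The submission-side mechanics described in this chapter (a pre-allocated SQE array, a ring of indexes appended with \lstinline{buffer[head] = item; head++}, a memory barrier on the head update, and a hard limit on in-flight entries) can be sketched as a small user-space model in plain C. This is an illustrative sketch only, not the kernel's or liburing's actual data structures; the names (`subring`, `sqe_alloc`, `sq_push`) are invented for the example.

```c
#include <stdatomic.h>
#include <stdint.h>

/* Toy model of io_uring's submission side: a fixed array of SQEs plus a
 * ring of indexes into that array.  All names are invented for this sketch. */
#define RING_SZ   8u              /* must be a power of two */
#define RING_MASK (RING_SZ - 1u)

struct sqe { uint64_t user_data; int opcode; };

struct subring {
    struct sqe  sqes[RING_SZ];    /* pre-allocated SQE array ("S") */
    uint32_t    array[RING_SZ];   /* ring of indexes into sqes[] */
    atomic_uint head;             /* shared with the consumer (the "kernel") */
    uint32_t    free_mask;        /* toy allocator: bit i set => sqes[i] is free */
};

void sq_init(struct subring *r) {
    atomic_store_explicit(&r->head, 0, memory_order_relaxed);
    r->free_mask = (1u << RING_SZ) - 1u;   /* all entries free */
}

/* Allocate a free SQE slot; -1 when none are available, modelling the
 * hard limit on how many operations can be "in flight" at once. */
int sqe_alloc(struct subring *r) {
    for (unsigned i = 0; i < RING_SZ; i++) {
        if (r->free_mask & (1u << i)) { r->free_mask &= ~(1u << i); return (int)i; }
    }
    return -1;
}

/* Append a filled SQE's index: buffer[head] = item; head++.  The release
 * store orders the array write before the head update, which is the
 * memory-barrier requirement discussed in the text. */
void sq_push(struct subring *r, uint32_t idx) {
    uint32_t h = atomic_load_explicit(&r->head, memory_order_relaxed);
    r->array[h & RING_MASK] = idx;
    atomic_store_explicit(&r->head, h + 1, memory_order_release);
}
```

In this model, a batch is formed by several `sqe_alloc`/fill/`sq_push` sequences followed by one notification call, mirroring how multiple SQEs can be appended before a single `io_uring_enter`; an `sqe_alloc` failure is the point where a real system would hold back or re-route the request.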
doc/theses/thierry_delisle_PhD/thesis/thesis.tex
r14533d4 rf6664bf2 1 % uWaterloo Thesis Template for LaTeX 2 % Last Updated June 14, 2017 by Stephen Carr, IST Client Services 3 % FOR ASSISTANCE, please send mail to rt-IST-CSmathsci@ist.uwaterloo.ca 4 5 % Effective October 2006, the University of Waterloo 6 % requires electronic thesis submission. See the uWaterloo thesis regulations at 1 %====================================================================== 2 % University of Waterloo Thesis Template for LaTeX 3 % Last Updated November, 2020 4 % by Stephen Carr, IST Client Services, 5 % University of Waterloo, 200 University Ave. W., Waterloo, Ontario, Canada 6 % FOR ASSISTANCE, please send mail to request@uwaterloo.ca 7 8 % DISCLAIMER 9 % To the best of our knowledge, this template satisfies the current uWaterloo thesis requirements. 10 % However, it is your responsibility to assure that you have met all requirements of the University and your particular department. 11 12 % Many thanks for the feedback from many graduates who assisted the development of this template. 13 % Also note that there are explanatory comments and tips throughout this template. 14 %====================================================================== 15 % Some important notes on using this template and making it your own... 16 17 % The University of Waterloo has required electronic thesis submission since October 2006. 18 % See the uWaterloo thesis regulations at 7 19 % https://uwaterloo.ca/graduate-studies/thesis. 8 9 % DON'T FORGET TO ADD YOUR OWN NAME AND TITLE in the "hyperref" package 10 % configuration below. THIS INFORMATION GETS EMBEDDED IN THE PDF FINAL PDF DOCUMENT. 11 % You can view the information if you view Properties of the PDF document. 12 13 % Many faculties/departments also require one or more printed 14 % copies. This template attempts to satisfy both types of output. 15 % It is based on the standard "book" document class which provides all necessary 16 % sectioning structures and allows multi-part theses. 
17 18 % DISCLAIMER 19 % To the best of our knowledge, this template satisfies the current uWaterloo requirements. 20 % However, it is your responsibility to assure that you have met all 21 % requirements of the University and your particular department. 22 % Many thanks for the feedback from many graduates that assisted the development of this template. 23 24 % ----------------------------------------------------------------------- 25 26 % By default, output is produced that is geared toward generating a PDF 27 % version optimized for viewing on an electronic display, including 28 % hyperlinks within the PDF. 29 20 % This thesis template is geared towards generating a PDF version optimized for viewing on an electronic display, including hyperlinks within the PDF. 21 22 % DON'T FORGET TO ADD YOUR OWN NAME AND TITLE in the "hyperref" package configuration below. 23 % THIS INFORMATION GETS EMBEDDED IN THE PDF FINAL PDF DOCUMENT. 24 % You can view the information if you view properties of the PDF document. 25 26 % Many faculties/departments also require one or more printed copies. 27 % This template attempts to satisfy both types of output. 28 % See additional notes below. 29 % It is based on the standard "book" document class which provides all necessary sectioning structures and allows multi-part theses. 30 31 % If you are using this template in Overleaf (cloud-based collaboration service), then it is automatically processed and previewed for you as you edit. 32 33 % For people who prefer to install their own LaTeX distributions on their own computers, and process the source files manually, the following notes provide the sequence of tasks: 34 30 35 % E.g. 
to process a thesis called "mythesis.tex" based on this template, run: 31 36 32 37 % pdflatex mythesis -- first pass of the pdflatex processor 33 38 % bibtex mythesis -- generates bibliography from .bib data file(s) 34 % makeindex -- should be run only if an index is used 39 % makeindex -- should be run only if an index is used 35 40 % pdflatex mythesis -- fixes numbering in cross-references, bibliographic references, glossaries, index, etc. 36 % pdflatex mythesis -- fixes numbering in cross-references, bibliographic references, glossaries, index, etc. 37 38 % If you use the recommended LaTeX editor, Texmaker, you would open the mythesis.tex 39 % file, then click the PDFLaTeX button. Then run BibTeX (under the Tools menu). 40 % Then click the PDFLaTeX button two more times. If you have an index as well, 41 % you'll need to run MakeIndex from the Tools menu as well, before running pdflatex 41 % pdflatex mythesis -- it takes a couple of passes to completely process all cross-references 42 43 % If you use the recommended LaTeX editor, Texmaker, you would open the mythesis.tex file, then click the PDFLaTeX button. Then run BibTeX (under the Tools menu). 44 % Then click the PDFLaTeX button two more times. 45 % If you have an index as well,you'll need to run MakeIndex from the Tools menu as well, before running pdflatex 42 46 % the last two times. 43 47 44 % N.B. The "pdftex" program allows graphics in the following formats to be 45 % included with the "\includegraphics" command: PNG, PDF, JPEG, TIFF 46 % Tip 1: Generate your figures and photos in the size you want them to appear 47 % in your thesis, rather than scaling them with \includegraphics options. 48 % Tip 2: Any drawings you do should be in scalable vector graphic formats: 49 % SVG, PNG, WMF, EPS and then converted to PNG or PDF, so they are scalable in 50 % the final PDF as well. 51 % Tip 3: Photographs should be cropped and compressed so as not to be too large. 
52 53 % To create a PDF output that is optimized for double-sided printing: 54 % 55 % 1) comment-out the \documentclass statement in the preamble below, and 56 % un-comment the second \documentclass line. 57 % 58 % 2) change the value assigned below to the boolean variable 59 % "PrintVersion" from "false" to "true". 60 61 % --------------------- Start of Document Preamble ----------------------- 62 63 % Specify the document class, default style attributes, and page dimensions 48 % N.B. The "pdftex" program allows graphics in the following formats to be included with the "\includegraphics" command: PNG, PDF, JPEG, TIFF 49 % Tip: Generate your figures and photos in the size you want them to appear in your thesis, rather than scaling them with \includegraphics options. 50 % Tip: Any drawings you do should be in scalable vector graphic formats: SVG, PNG, WMF, EPS and then converted to PNG or PDF, so they are scalable in the final PDF as well. 51 % Tip: Photographs should be cropped and compressed so as not to be too large. 52 53 % To create a PDF output that is optimized for double-sided printing: 54 % 1) comment-out the \documentclass statement in the preamble below, and un-comment the second \documentclass line. 55 % 2) change the value assigned below to the boolean variable "PrintVersion" from " false" to "true". 56 57 %====================================================================== 58 % D O C U M E N T P R E A M B L E 59 % Specify the document class, default style attributes, and page dimensions, etc. 
64 60 % For hyperlinked PDF, suitable for viewing on a computer, use this: 65 61 \documentclass[letterpaper,12pt,titlepage,oneside,final]{book} 66 62 67 % For PDF, suitable for double-sided printing, change the PrintVersion variable below 68 % to "true" and use this \documentclass line instead of the one above: 63 % For PDF, suitable for double-sided printing, change the PrintVersion variable below to "true" and use this \documentclass line instead of the one above: 69 64 %\documentclass[letterpaper,12pt,titlepage,openright,twoside,final]{book} 70 65 71 \newcommand{\href}[1]{#1} % does nothing, but defines the command so the 72 % print-optimized version will ignore \href tags (redefined by hyperref pkg). 66 % Some LaTeX commands I define for my own nomenclature. 67 % If you have to, it's easier to make changes to nomenclature once here than in a million places throughout your thesis! 68 \newcommand{\package}[1]{\textbf{#1}} % package names in bold text 69 \newcommand{\cmmd}[1]{\textbackslash\texttt{#1}} % command name in tt font 70 \newcommand{\href}[1]{#1} % does nothing, but defines the command so the print-optimized version will ignore \href tags (redefined by hyperref pkg). 71 %\newcommand{\texorpdfstring}[2]{#1} % does nothing, but defines the command 72 % Anything defined here may be redefined by packages added below... 73 73 74 74 % This package allows if-then-else control structures. … … 76 76 \newboolean{PrintVersion} 77 77 \setboolean{PrintVersion}{false} 78 % CHANGE THIS VALUE TO "true" as necessary, to improve printed results for hard copies 79 % by overriding some options of the hyperref package below. 78 % CHANGE THIS VALUE TO "true" as necessary, to improve printed results for hard copies by overriding some options of the hyperref package, called below. 80 79 81 80 %\usepackage{nomencl} % For a nomenclature (optional; available from ctan.org) … … 85 84 86 85 % Hyperlinks make it very easy to navigate an electronic document. 
87 % In addition, this is where you should specify the thesis title 88 % and author as they appear in the properties of the PDF document. 86 % In addition, this is where you should specify the thesis title and author as they appear in the properties of the PDF document. 89 87 % Use the "hyperref" package 90 88 % N.B. HYPERREF MUST BE THE LAST PACKAGE LOADED; ADD ADDITIONAL PKGS ABOVE 91 89 \usepackage[pagebackref=false]{hyperref} % with basic options 92 % N.B. pagebackref=true provides links back from the References to the body text. This can cause trouble for printing. 90 %\usepackage[pdftex,pagebackref=true]{hyperref} 91 % N.B. pagebackref=true provides links back from the References to the body text. This can cause trouble for printing. 93 92 \hypersetup{ 94 93 plainpages=false, % needed if Roman numbers in frontpages 95 unicode=false, % non-Latin characters in Acrobat ’s bookmarks96 pdftoolbar=true, % show Acrobat ’s toolbar?97 pdfmenubar=true, % show Acrobat ’s menu?94 unicode=false, % non-Latin characters in Acrobat's bookmarks 95 pdftoolbar=true, % show Acrobat's toolbar? 96 pdfmenubar=true, % show Acrobat's menu? 98 97 pdffitwindow=false, % window fit to page when opened 99 98 pdfstartview={FitH}, % fits the width of the page to the window … … 111 110 \ifthenelse{\boolean{PrintVersion}}{ % for improved print quality, change some hyperref options 112 111 \hypersetup{ % override some previously defined hyperref options 113 citecolor=black, 114 filecolor=black, 115 linkcolor=black, 112 citecolor=black,% 113 filecolor=black,% 114 linkcolor=black,% 116 115 urlcolor=black 117 116 }}{} % end of ifthenelse (no else) … … 136 135 137 136 % Setting up the page margins... 
138 \setlength{\textheight}{9in}\setlength{\topmargin}{-0.45in}\setlength{\headsep}{0.25in} 137 \setlength{\textheight}{9in} 138 \setlength{\topmargin}{-0.45in} 139 \setlength{\headsep}{0.25in} 139 140 % uWaterloo thesis requirements specify a minimum of 1 inch (72pt) margin at the 140 % top, bottom, and outside page edges and a 1.125 in. (81pt) gutter 141 % margin (on binding side). While this is not an issue for electronic 142 % viewing, a PDF may be printed, and so we have the same page layout for 143 % both printed and electronic versions, we leave the gutter margin in. 141 % top, bottom, and outside page edges and a 1.125 in. (81pt) gutter margin (on binding side). 142 % While this is not an issue for electronic viewing, a PDF may be printed, and so we have the same page layout for both printed and electronic versions, we leave the gutter margin in. 144 143 % Set margins to minimum permitted by uWaterloo thesis regulations: 145 144 \setlength{\marginparwidth}{0pt} % width of margin notes … … 150 149 \setlength{\evensidemargin}{0.125in} % Adds 1/8 in. to binding side of all 151 150 % even-numbered pages when the "twoside" printing option is selected 152 \setlength{\oddsidemargin}{0.125in} % Adds 1/8 in. to the left of all pages 153 % when "oneside" printing is selected, and to the left of all odd-numbered 154 % pages when "twoside" printing is selected 155 \setlength{\textwidth}{6.375in} % assuming US letter paper (8.5 in. x 11 in.) and 156 % side margins as above 151 \setlength{\oddsidemargin}{0.125in} % Adds 1/8 in. to the left of all pages when "oneside" printing is selected, and to the left of all odd-numbered pages when "twoside" printing is selected 152 \setlength{\textwidth}{6.375in} % assuming US letter paper (8.5 in. x 11 in.) and side margins as above 157 153 \raggedbottom 158 154 159 % The following statement specifies the amount of space between 160 % paragraphs. Other reasonable specifications are \bigskipamount and \smallskipamount. 
155 % The following statement specifies the amount of space between paragraphs. Other reasonable specifications are \bigskipamount and \smallskipamount. 161 156 \setlength{\parskip}{\medskipamount} 162 157 163 % The following statement controls the line spacing. The default 164 % spacing corresponds to good typographic conventions and only slight 165 % changes (e.g., perhaps "1.2"), if any, should be made. 158 % The following statement controls the line spacing. 159 % The default spacing corresponds to good typographic conventions and only slight changes (e.g., perhaps "1.2"), if any, should be made. 166 160 \renewcommand{\baselinestretch}{1} % this is the default line space setting 167 161 168 % By default, each chapter will start on a recto (right-hand side) 169 % page. We also force each section of the front pages to start on 170 % a recto page by inserting \cleardoublepage commands. 171 % In many cases, this will require that the verso page be 172 % blank and, while it should be counted, a page number should not be 173 % printed. The following statements ensure a page number is not 174 % printed on an otherwise blank verso page. 162 % By default, each chapter will start on a recto (right-hand side) page. 163 % We also force each section of the front pages to start on a recto page by inserting \cleardoublepage commands. 164 % In many cases, this will require that the verso (left-hand) page be blank, and while it should be counted, a page number should not be printed. 165 % The following statements ensure a page number is not printed on an otherwise blank verso page. 
175 166 \let\origdoublepage\cleardoublepage 176 167 \newcommand{\clearemptydoublepage}{% … … 204 195 \input{common} 205 196 \CFAStyle % CFA code-style for all languages 206 \lstset{ basicstyle=\linespread{0.9}\tt}197 \lstset{language=CFA,basicstyle=\linespread{0.9}\tt} % CFA default language 207 198 208 199 % glossary of terms to use … … 210 201 \makeindex 211 202 212 \newcommand\io{\glsxtrshort{io}}% 213 214 %====================================================================== 215 % L O G I C A L D O C U M E N T -- the content of your thesis 203 \newcommand\io{\glsxtrshort{io}\xspace}% 204 205 %====================================================================== 206 % L O G I C A L D O C U M E N T 207 % The logical document contains the main content of your thesis. 208 % Being a large document, it is a good idea to divide your thesis into several files, each one containing one chapter or other significant chunk of content, so you can easily shuffle things around later if desired. 216 209 %====================================================================== 217 210 \begin{document} 218 211 219 % For a large document, it is a good idea to divide your thesis220 % into several files, each one containing one chapter.221 % To illustrate this idea, the "front pages" (i.e., title page,222 % declaration, borrowers' page, abstract, acknowledgements,223 % dedication, table of contents, list of tables, list of figures,224 % nomenclature) are contained within the file "uw-ethesis-frontpgs.tex" which is225 % included into the document by the following statement.226 212 %---------------------------------------------------------------------- 227 213 % FRONT MATERIAL 214 % title page,declaration, borrowers' page, abstract, acknowledgements, 215 % dedication, table of contents, list of tables, list of figures, nomenclature, etc. 
228 216 %---------------------------------------------------------------------- 229 217 \input{text/front.tex} 230 218 231 232 219 %---------------------------------------------------------------------- 233 220 % MAIN BODY 234 % ----------------------------------------------------------------------235 % Because this is a short document, and to reduce the number of files236 % needed for this template, the chapters are not separate237 % documents as suggested above, but you get the idea. If they were238 % separate documents, they would each start with the \chapter command, i.e,239 % do not contain \documentclass or \begin{document} and \end{document} commands. 221 % We suggest using a separate file for each chapter of your thesis. 222 % Start each chapter file with the \chapter command. 223 % Only use \documentclass or \begin{document} and \end{document} commands in this master document. 224 % Tip: Putting each sentence on a new line is a way to simplify later editing. 225 %---------------------------------------------------------------------- 226 240 227 \part{Introduction} 241 228 \input{text/intro.tex} … … 255 242 %---------------------------------------------------------------------- 256 243 % END MATERIAL 257 %---------------------------------------------------------------------- 258 259 % B I B L I O G R A P H Y 260 % ----------------------- 261 262 % The following statement selects the style to use for references. It controls the sort order of the entries in the bibliography and also the formatting for the in-text labels. 244 % Bibliography, Appendices, Index, etc. 245 %---------------------------------------------------------------------- 246 247 % Bibliography 248 249 % The following statement selects the style to use for references. 250 % It controls the sort order of the entries in the bibliography and also the formatting for the in-text labels. 
263 251 \bibliographystyle{plain}
264 252 % This specifies the location of the file containing the bibliographic information.
265 % It assumes you're using BibTeX (if not, why not?).
266 \cleardoublepage % This is needed if the book class is used, to place the anchor in the correct page,
267 % because the bibliography will start on its own page.
268 % Use \clearpage instead if the document class uses the "oneside" argument
253 % It assumes you're using BibTeX to manage your references (if not, why not?).
254 \cleardoublepage % This is needed if the "book" document class is used, to place the anchor in the correct page, because the bibliography will start on its own page.
255 % Use \clearpage instead if the document class uses the "oneside" argument
269 256 \phantomsection % With hyperref package, enables hyperlinking from the table of contents to bibliography
270 257 % The following statement causes the title "References" to be used for the bibliography section:
…
275 262
276 263 \bibliography{local,pl}
277 % Tip 5: You can create multiple .bib files to organize your references.
264 % Tip: You can create multiple .bib files to organize your references.
278 265 % Just list them all in the \bibliography command, separated by commas (no spaces).
279 266
280 % % The following statement causes the specified references to be added to the bibliography% even if they were not
281 % % cited in the text.The asterisk is a wildcard that causes all entries in the bibliographic database to be included (optional).
267 % The following statement causes the specified references to be added to the bibliography even if they were not cited in the text.
268 % The asterisk is a wildcard that causes all entries in the bibliographic database to be included (optional).
282 269 % \nocite{*}
270 %----------------------------------------------------------------------
271
272 % Appendices
273
274 % The \appendix statement indicates the beginning of the appendices.
285 275 \appendix 286 % Add a title page before the appendices and a line in the Table of Contents276 % Add an un-numbered title page before the appendices and a line in the Table of Contents 287 277 \chapter*{APPENDICES} 288 278 \addcontentsline{toc}{chapter}{APPENDICES} 279 % Appendices are just more chapters, with different labeling (letters instead of numbers). 289 280 %====================================================================== 290 281 \chapter[PDF Plots From Matlab]{Matlab Code for Making a PDF Plot} … … 324 315 %\input{thesis.ind} % index 325 316 326 \phantomsection 327 328 \end{document} 317 \phantomsection % allows hyperref to link to the correct page 318 319 %---------------------------------------------------------------------- 320 \end{document} % end of logical document -
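Condensing the end-matter fragments above, the ordering the template's comments describe looks roughly like this. This is a sketch assembled only from the displayed lines (the statement that retitles the bibliography "References" is elided in the changeset and is not reproduced here); it is not the full template.

```latex
% End-matter ordering sketch (assembled from the displayed template lines).
\bibliographystyle{plain}   % controls sort order of entries and in-text label format
\cleardoublepage            % "book" class: place the anchor on the bibliography's page
                            % (use \clearpage instead with the "oneside" option)
\phantomsection             % hyperref anchor so the ToC entry links correctly
\bibliography{local,pl}     % multiple .bib databases, comma-separated, no spaces

\appendix                   % appendices: chapters labelled with letters, not numbers
\chapter*{APPENDICES}       % un-numbered divider page before the appendices
\addcontentsline{toc}{chapter}{APPENDICES}
```

The key design point the comments make is ordering: the page break and `\phantomsection` must come immediately before `\bibliography` so the hyperlinked table-of-contents entry lands on the page where the bibliography actually starts.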
doc/user/user.tex
r14533d4 rf6664bf2 11 11 %% Created On : Wed Apr 6 14:53:29 2016 12 12 %% Last Modified By : Peter A. Buhr 13 %% Last Modified On : Mon Feb 8 21:53:31202114 %% Update Count : 4 32713 %% Last Modified On : Mon Feb 15 13:48:53 2021 14 %% Update Count : 4452 15 15 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% 16 16 … … 105 105 106 106 \author{ 107 \huge \CFA Team \medskip \\107 \huge \CFA Team (past and present) \medskip \\ 108 108 \Large Andrew Beach, Richard Bilson, Michael Brooks, Peter A. Buhr, Thierry Delisle, \smallskip \\ 109 109 \Large Glen Ditchfield, Rodolfo G. Esteves, Aaron Moss, Colby Parsons, Rob Schluntz, \smallskip \\ … … 129 129 \vspace*{\fill} 130 130 \noindent 131 \copyright\,2016 \CFA Project \\ \\131 \copyright\,2016, 2018, 2021 \CFA Project \\ \\ 132 132 \noindent 133 133 This work is licensed under the Creative Commons Attribution 4.0 International License. … … 970 970 \hline 971 971 \begin{cfa} 972 while @( )@ { sout | "empty"; break; }973 do { sout | "empty"; break; } while @( )@;974 for @( )@ { sout | "empty"; break; }972 while @($\,$)@ { sout | "empty"; break; } 973 do { sout | "empty"; break; } while @($\,$)@; 974 for @($\,$)@ { sout | "empty"; break; } 975 975 for ( @0@ ) { sout | "A"; } sout | "zero"; 976 976 for ( @1@ ) { sout | "A"; } … … 1145 1145 \subsection{\texorpdfstring{Labelled \LstKeywordStyle{continue} / \LstKeywordStyle{break} Statement}{Labelled continue / break Statement}} 1146 1146 1147 While C provides ©continue© and ©break© statements for altering control flow, bothare restricted to one level of nesting for a particular control structure.1148 Unfortunately, this restriction forces programmers to use \Indexc{goto} to achieve the equivalent control-flow for more than one level of nesting.1147 C ©continue© and ©break© statements, for altering control flow, are restricted to one level of nesting for a particular control structure. 
1148 This restriction forces programmers to use \Indexc{goto} to achieve the equivalent control-flow for more than one level of nesting. 1149 1149 To prevent having to switch to the ©goto©, \CFA extends the \Indexc{continue}\index{continue@©continue©!labelled}\index{labelled!continue@©continue©} and \Indexc{break}\index{break@©break©!labelled}\index{labelled!break@©break©} with a target label to support static multi-level exit\index{multi-level exit}\index{static multi-level exit}~\cite{Buhr85}, as in Java. 1150 1150 For both ©continue© and ©break©, the target label must be directly associated with a ©for©, ©while© or ©do© statement; 1151 1151 for ©break©, the target label can also be associated with a ©switch©, ©if© or compound (©{}©) statement. 1152 \VRef[Figure]{f:MultiLevelExit} shows ©continue© and ©break© indicating the specific control structure, and the corresponding C program using only©goto© and labels.1152 \VRef[Figure]{f:MultiLevelExit} shows a comparison between labelled ©continue© and ©break© and the corresponding C equivalent using ©goto© and labels. 1153 1153 The innermost loop has 8 exit points, which cause continuation or termination of one or more of the 7 \Index{nested control-structure}s. 1154 1154 … … 1215 1215 \end{lrbox} 1216 1216 1217 \hspace*{-10pt}1218 1217 \subfloat[\CFA]{\label{f:CFibonacci}\usebox\myboxA} 1219 \hspace{ 2pt}1218 \hspace{3pt} 1220 1219 \vrule 1220 \hspace{3pt} 1221 1221 \subfloat[C]{\label{f:CFAFibonacciGen}\usebox\myboxB} 1222 1222 \caption{Multi-level Exit} … … 1233 1233 This restriction prevents missing declarations and/or initializations at the start of a control structure resulting in undefined behaviour. 
1234 1234 \end{itemize} 1235 The advantage of the labelled ©continue©/©break© is allowing static multi-level exits without having to use the ©goto© statement, and tying control flow to the target control structure rather than an arbitrary point in a program .1235 The advantage of the labelled ©continue©/©break© is allowing static multi-level exits without having to use the ©goto© statement, and tying control flow to the target control structure rather than an arbitrary point in a program via a label. 1236 1236 Furthermore, the location of the label at the \emph{beginning} of the target control structure informs the reader (\Index{eye candy}) that complex control-flow is occurring in the body of the control structure. 1237 1237 With ©goto©, the label is at the end of the control structure, which fails to convey this important clue early enough to the reader. … … 1240 1240 1241 1241 1242 %\s ection{\texorpdfstring{\protect\lstinline@with@ Statement}{with Statement}}1243 \s ection{\texorpdfstring{\LstKeywordStyle{with} Statement}{with Statement}}1242 %\subsection{\texorpdfstring{\protect\lstinline@with@ Statement}{with Statement}} 1243 \subsection{\texorpdfstring{\LstKeywordStyle{with} Statement}{with Statement}} 1244 1244 \label{s:WithStatement} 1245 1245 1246 Grouping heterogeneous data into \newterm{aggregate}s (structure/union) is a common programming practice, and an aggregate can be further organized into more complex structures, such as arrays and containers:1247 \begin{cfa} 1248 struct S {$\C{// aggregate}$1249 char c; $\C{// fields}$1250 int i;1251 double d;1246 Grouping heterogeneous data into an \newterm{aggregate} (structure/union) is a common programming practice, and aggregates may be nested: 1247 \begin{cfa} 1248 struct Person { $\C{// aggregate}$ 1249 struct Name { char first[20], last[20]; } name $\C{// nesting}$ 1250 struct Address { ... 
} address $\C{// nesting}$ 1251 int sex; 1252 1252 }; 1253 S s, as[10]; 1254 \end{cfa} 1255 However, functions manipulating aggregates must repeat the aggregate name to access its containing fields: 1256 \begin{cfa} 1257 void f( S s ) { 1258 @s.@c; @s.@i; @s.@d; $\C{// access containing fields}$ 1259 } 1260 \end{cfa} 1261 which extends to multiple levels of qualification for nested aggregates. 1262 A similar situation occurs in object-oriented programming, \eg \CC: 1253 \end{cfa} 1254 Functions manipulating aggregates must repeat the aggregate name to access its containing fields. 1255 \begin{cfa} 1256 Person p 1257 @p.@name; @p.@address; @p.@sex; $\C{// access containing fields}$ 1258 \end{cfa} 1259 which extends to multiple levels of qualification for nested aggregates and multiple aggregates. 1260 \begin{cfa} 1261 struct Ticket { ... } t; 1262 @p.name@.first; @p.address@.street; $\C{// access nested fields}$ 1263 @t.@departure; @t.@cost; $\C{// access multiple aggregate}$ 1264 \end{cfa} 1265 Repeated aggregate qualification is tedious and makes code difficult to read. 1266 Therefore, reducing aggregate qualification is a useful language design goal. 1267 1268 C allows unnamed nested aggregates that open their scope into the containing aggregate. 1269 This feature is used to group fields for attributes and/or with ©union© aggregates. 
1270 \begin{cfa} 1271 struct S { 1272 struct { int g, h; } __attribute__(( aligned(64) )); 1273 int tag; 1274 union { 1275 struct { char c1, c2; } __attribute__(( aligned(128) )); 1276 struct { int i1, i2; }; 1277 struct { double d1, d2; }; 1278 }; 1279 }; 1280 s.g; s.h; s.tag; s.c1; s.c2; s.i1; s.i2; s.d1; s.d2; 1281 \end{cfa} 1282 1283 Object-oriented languages reduce qualification for class variables within member functions, \eg \CC: 1263 1284 \begin{C++} 1264 1285 struct S { 1265 char c; $\C{// fields}$ 1266 int i; 1267 double d; 1268 void f() { $\C{// implicit ``this'' aggregate}$ 1269 @this->@c; @this->@i; @this->@d; $\C{// access containing fields}$ 1286 char @c@; int @i@; double @d@; 1287 void f( /* S * this */ ) { $\C{// implicit ``this'' parameter}$ 1288 @c@; @i@; @d@; $\C{// this->c; this->i; this->d;}$ 1270 1289 } 1271 1290 } 1272 1291 \end{C++} 1273 Object-oriented nesting of member functions in a \lstinline[language=C++]@class/struct@ allows eliding \lstinline[language=C++]@this->@ because of lexical scoping. 1274 However, for other aggregate parameters, qualification is necessary: 1275 \begin{cfa} 1276 struct T { double m, n; }; 1277 int S::f( T & t ) { $\C{// multiple aggregate parameters}$ 1278 c; i; d; $\C{\R{// this--{\textgreater}c, this--{\textgreater}i, this--{\textgreater}d}}$ 1279 @t.@m; @t.@n; $\C{// must qualify}$ 1280 } 1281 \end{cfa} 1282 1283 To simplify the programmer experience, \CFA provides a ©with© statement \see{Pascal~\cite[\S~4.F]{Pascal}} to elide aggregate qualification to fields by opening a scope containing the field identifiers. 1284 Hence, the qualified fields become variables with the side-effect that it is easier to optimizing field references in a block. 1285 \begin{cfa} 1286 void f( S & this ) @with ( this )@ { $\C{// with statement}$ 1287 c; i; d; $\C{\R{// this.c, this.i, this.d}}$ 1292 In general, qualification is elided for the variables and functions in the lexical scopes visible from a member function. 
1293 However, qualification is necessary for name shadowing and explicit aggregate parameters.
1294 \begin{cfa}
1295 struct T {
1296 char @m@; int @i@; double @n@; $\C{// derived class variables}$
1297 };
1298 struct S : public T {
1299 char @c@; int @i@; double @d@; $\C{// class variables}$
1300 void g( double @d@, T & t ) {
1301 d; @t@.m; @t@.i; @t@.n; $\C{// function parameter}$
1302 c; i; @this->@d; @S::@d; $\C{// class S variables}$
1303 m; @T::@i; n; $\C{// class T variables}$
1304 }
1305 };
1306 \end{cfa}
1307 Note the three different forms of qualification syntax in \CC, ©.©, ©->©, ©::©, which is confusing.
1308
1309 Since \CFA is not object-oriented, it has no implicit parameter with its implicit qualification.
1310 Instead \CFA introduces a general mechanism using the ©with© statement \see{Pascal~\cite[\S~4.F]{Pascal}} to explicitly elide aggregate qualification by opening a scope containing the field identifiers.
1311 Hence, the qualified fields become variables, with the side-effect that field references in a block are simpler to write, easier to read, and easier to optimize.
1312 \begin{cfa}
1313 void f( S & this ) @with ( this )@ { $\C{// with statement}$
1314 @c@; @i@; @d@; $\C{// this.c, this.i, this.d}$
1288 1315 }
1289 1316 \end{cfa}
1290 1317 with the generality of opening multiple aggregate-parameters:
1291 1318 \begin{cfa}
1292 void f( S & s, T & t ) @with ( s, t )@ { $\C{// multiple aggregate parameters}$
1293 c; i; d; $\C{\R{// s.c, s.i, s.d}}$
1294 m; n; $\C{\R{// t.m, t.n}}$
1295 }
1296 \end{cfa}
1297
1298 In detail, the ©with© statement has the form:
1299 \begin{cfa}
1300 $\emph{with-statement}$:
1301 'with' '(' $\emph{expression-list}$ ')' $\emph{compound-statement}$
1302 \end{cfa}
1303 and may appear as the body of a function or nested within a function body.
1304 Each expression in the expression-list provides a type and object.
1305 The type must be an aggregate type.
1319 void g( S & s, T & t ) @with ( s, t )@ { $\C{// multiple aggregate parameters}$ 1320 c; @s.@i; d; $\C{// s.c, s.i, s.d}$ 1321 m; @t.@i; n; $\C{// t.m, t.i, t.n}$ 1322 } 1323 \end{cfa} 1324 where qualification is only necessary to disambiguate the shadowed variable ©i©. 1325 1326 In detail, the ©with© statement may appear as the body of a function or nested within a function body. 1327 The ©with© clause takes a list of expressions, where each expression provides an aggregate type and object. 1306 1328 (Enumerations are already opened.) 1307 The object is the implicit qualifier for the open structure-fields. 1308 1329 To open a pointer type, the pointer must be dereferenced to obtain a reference to the aggregate type. 1330 \begin{cfa} 1331 S * sp; 1332 with ( *sp ) { ... } 1333 \end{cfa} 1334 The expression object is the implicit qualifier for the open structure-fields. 1335 \CFA's ability to overload variables \see{\VRef{s:VariableOverload}} and use the left-side of assignment in type resolution means most fields with the same name but different types are automatically disambiguated, eliminating qualification. 1309 1336 All expressions in the expression list are open in parallel within the compound statement. 1310 1337 This semantic is different from Pascal, which nests the openings from left to right. 
1311 1338 The difference between parallel and nesting occurs for fields with the same name and type: 1312 1339 \begin{cfa} 1313 struct S { int @i@; int j; double m; } s, w; 1314 struct T { int @i@; int k; int m; } t, w; 1315 with ( s, t ) { 1316 j + k; $\C{// unambiguous, s.j + t.k}$ 1317 m = 5.0; $\C{// unambiguous, t.m = 5.0}$ 1318 m = 1; $\C{// unambiguous, s.m = 1}$ 1319 int a = m; $\C{// unambiguous, a = s.i }$ 1320 double b = m; $\C{// unambiguous, b = t.m}$ 1321 int c = s.i + t.i; $\C{// unambiguous, qualification}$ 1322 (double)m; $\C{// unambiguous, cast}$ 1323 } 1324 \end{cfa} 1325 For parallel semantics, both ©s.i© and ©t.i© are visible, so ©i© is ambiguous without qualification; 1326 for nested semantics, ©t.i© hides ©s.i©, so ©i© implies ©t.i©. 1327 \CFA's ability to overload variables means fields with the same name but different types are automatically disambiguated, eliminating most qualification when opening multiple aggregates. 1328 Qualification or a cast is used to disambiguate. 1329 1330 There is an interesting problem between parameters and the function-body ©with©, \eg: 1340 struct Q { int @i@; int k; int @m@; } q, w; 1341 struct R { int @i@; int j; double @m@; } r, w; 1342 with ( r, q ) { 1343 j + k; $\C{// unambiguous, r.j + q.k}$ 1344 m = 5.0; $\C{// unambiguous, q.m = 5.0}$ 1345 m = 1; $\C{// unambiguous, r.m = 1}$ 1346 int a = m; $\C{// unambiguous, a = r.i }$ 1347 double b = m; $\C{// unambiguous, b = q.m}$ 1348 int c = r.i + q.i; $\C{// disambiguate with qualification}$ 1349 (double)m; $\C{// disambiguate with cast}$ 1350 } 1351 \end{cfa} 1352 For parallel semantics, both ©r.i© and ©q.i© are visible, so ©i© is ambiguous without qualification; 1353 for nested semantics, ©q.i© hides ©r.i©, so ©i© implies ©q.i©. 1354 Pascal nested-semantics is possible by nesting ©with© statements. 
1355 \begin{cfa} 1356 with ( r ) { 1357 i; $\C{// unambiguous, r.i}$ 1358 with ( q ) { 1359 i; $\C{// unambiguous, q.i}$ 1360 } 1361 } 1362 \end{cfa} 1363 A cast or qualification can be used to disambiguate variables within a ©with© \emph{statement}. 1364 A cast can be used to disambiguate among overload variables in a ©with© \emph{expression}: 1365 \begin{cfa} 1366 with ( w ) { ... } $\C{// ambiguous, same name and no context}$ 1367 with ( (Q)w ) { ... } $\C{// unambiguous, cast}$ 1368 \end{cfa} 1369 Because there is no left-side in the ©with© expression to implicitly disambiguate between the ©w© variables, it is necessary to explicitly disambiguate by casting ©w© to type ©Q© or ©R©. 1370 1371 Finally, there is an interesting problem between parameters and the function-body ©with©, \eg: 1331 1372 \begin{cfa} 1332 1373 void ?{}( S & s, int i ) with ( s ) { $\C{// constructor}$ … … 1344 1385 and implicitly opened \emph{after} a function-body open, to give them higher priority: 1345 1386 \begin{cfa} 1346 void ?{}( S & s, int @i@ ) with ( s ) @with( $\emph{\R{params}}$ )@ { 1387 void ?{}( S & s, int @i@ ) with ( s ) @with( $\emph{\R{params}}$ )@ { // syntax not allowed, illustration only 1347 1388 s.i = @i@; j = 3; m = 5.5; 1348 1389 } 1349 1390 \end{cfa} 1350 Finally, a cast may be used to disambiguate among overload variables in a ©with© expression: 1351 \begin{cfa} 1352 with ( w ) { ... } $\C{// ambiguous, same name and no context}$ 1353 with ( (S)w ) { ... 
} $\C{// unambiguous, cast}$ 1354 \end{cfa} 1355 and ©with© expressions may be complex expressions with type reference \see{\VRef{s:References}} to aggregate: 1356 % \begin{cfa} 1357 % struct S { int i, j; } sv; 1358 % with ( sv ) { $\C{// implicit reference}$ 1359 % S & sr = sv; 1360 % with ( sr ) { $\C{// explicit reference}$ 1361 % S * sp = &sv; 1362 % with ( *sp ) { $\C{// computed reference}$ 1363 % i = 3; j = 4; $\C{\color{red}// sp--{\textgreater}i, sp--{\textgreater}j}$ 1364 % } 1365 % i = 2; j = 3; $\C{\color{red}// sr.i, sr.j}$ 1366 % } 1367 % i = 1; j = 2; $\C{\color{red}// sv.i, sv.j}$ 1368 % } 1369 % \end{cfa} 1370 1371 In \Index{object-oriented} programming, there is an implicit first parameter, often names \textbf{©self©} or \textbf{©this©}, which is elided. 1372 \begin{C++} 1373 class C { 1374 int i, j; 1375 int mem() { $\C{\R{// implicit "this" parameter}}$ 1376 i = 1; $\C{\R{// this->i}}$ 1377 j = 2; $\C{\R{// this->j}}$ 1378 } 1379 } 1380 \end{C++} 1381 Since \CFA is non-object-oriented, the equivalent object-oriented program looks like: 1382 \begin{cfa} 1383 struct S { int i, j; }; 1384 int mem( S & @this@ ) { $\C{// explicit "this" parameter}$ 1385 @this.@i = 1; $\C{// "this" is not elided}$ 1386 @this.@j = 2; 1387 } 1388 \end{cfa} 1389 but it is cumbersome having to write ``©this.©'' many times in a member. 1390 1391 \CFA provides a ©with© clause/statement \see{Pascal~\cite[\S~4.F]{Pascal}} to elided the "©this.©" by opening a scope containing field identifiers, changing the qualified fields into variables and giving an opportunity for optimizing qualified references. 
1392 \begin{cfa} 1393 int mem( S & this ) @with( this )@ { $\C{// with clause}$ 1394 i = 1; $\C{\R{// this.i}}$ 1395 j = 2; $\C{\R{// this.j}}$ 1396 } 1397 \end{cfa} 1398 which extends to multiple routine parameters: 1399 \begin{cfa} 1400 struct T { double m, n; }; 1401 int mem2( S & this1, T & this2 ) @with( this1, this2 )@ { 1402 i = 1; j = 2; 1403 m = 1.0; n = 2.0; 1404 } 1405 \end{cfa} 1406 1407 The statement form is used within a block: 1408 \begin{cfa} 1409 int foo() { 1410 struct S1 { ... } s1; 1411 struct S2 { ... } s2; 1412 @with( s1 )@ { $\C{// with statement}$ 1413 // access fields of s1 without qualification 1414 @with s2@ { $\C{// nesting}$ 1415 // access fields of s1 and s2 without qualification 1416 } 1417 } 1418 @with s1, s2@ { 1419 // access unambiguous fields of s1 and s2 without qualification 1420 } 1421 } 1422 \end{cfa} 1423 1424 When opening multiple structures, fields with the same name and type are ambiguous and must be fully qualified. 1425 For fields with the same name but different type, context/cast can be used to disambiguate. 1426 \begin{cfa} 1427 struct S { int i; int j; double m; } a, c; 1428 struct T { int i; int k; int m } b, c; 1429 with( a, b ) 1430 { 1431 } 1432 \end{cfa} 1433 1434 \begin{comment} 1435 The components in the "with" clause 1436 1437 with a, b, c { ... } 1438 1439 serve 2 purposes: each component provides a type and object. The type must be a 1440 structure type. Enumerations are already opened, and I think a union is opened 1441 to some extent, too. (Or is that just unnamed unions?) The object is the target 1442 that the naked structure-fields apply to. The components are open in "parallel" 1443 at the scope of the "with" clause/statement, so opening "a" does not affect 1444 opening "b", etc. This semantic is different from Pascal, which nests the 1445 openings. 1446 1447 Having said the above, it seems reasonable to allow a "with" component to be an 1448 expression. 
The type is the static expression-type and the object is the result 1449 of the expression. Again, the type must be an aggregate. Expressions require 1450 parenthesis around the components. 1451 1452 with( a, b, c ) { ... } 1453 1454 Does this now make sense? 1455 1456 Having written more CFA code, it is becoming clear to me that I *really* want 1457 the "with" to be implemented because I hate having to type all those object 1458 names for fields. It's a great way to drive people away from the language. 1459 \end{comment} 1391 This implicit semantic matches with programmer expectation. 1392 1460 1393 1461 1394 … … 4345 4278 4346 4279 4347 \subsection{ OverloadedConstant}4280 \subsection{Constant} 4348 4281 4349 4282 The constants 0 and 1 have special meaning. … … 4384 4317 4385 4318 4386 \subsection{Variable Overloading} 4319 \subsection{Variable} 4320 \label{s:VariableOverload} 4387 4321 4388 4322 The overload rules of \CFA allow a programmer to define multiple variables with the same name, but different types. … … 4427 4361 4428 4362 4429 \subsection{Operator Overloading}4363 \subsection{Operator} 4430 4364 4431 4365 \CFA also allows operators to be overloaded, to simplify the use of user-defined types. … … 5685 5619 \end{cfa} 5686 5620 & 5687 \begin{ lstlisting}[language=C++]5621 \begin{C++} 5688 5622 class Line { 5689 5623 float lnth; … … 5712 5646 Line line1; 5713 5647 Line line2( 3.4 ); 5714 \end{ lstlisting}5648 \end{C++} 5715 5649 & 5716 5650 \begin{lstlisting}[language=Golang]