Changeset 8f250e0 for doc/theses/mike_brooks_MMath/string.tex
- Timestamp:
- Mar 17, 2025, 10:52:58 AM (3 days ago)
- Branches:
- master
- Children:
- 71624f8
- Parents:
- 8617ee90
- File:
-
- 1 edited
Legend:
- Unmodified
- Added
- Removed
-
TabularUnified doc/theses/mike_brooks_MMath/string.tex ¶
r8617ee90 r8f250e0 57 57 s1.replace( 2, 3, "xy" ); $\C[2.25in]{// replace by position (zero origin) and length, mutable}\CRT$ 58 58 cout << s1 << endl; 59 $\texttt{ abxy}$59 $\texttt{\small abxy}$ 60 60 \end{cfa} 61 61 while Java allocates and returns a new string with the result, leaving the receiver unmodified. … … 65 65 String r = s.replace( "cde", "xy" ); $\C[2.25in]{// replace by text, immutable}$ 66 66 System.out.println( s + ' ' + r ); 67 $\texttt{ abcde abxy}$67 $\texttt{\small abcde abxy}$ 68 68 \end{java} 69 69 % Generally, Java's @String@ type is immutable. … … 73 73 sb.replace( 2, 5, "xy" ); $\C[2.25in]{// replace by position, mutable}\CRT$ 74 74 System.out.println( sb ); 75 $\texttt{ abxy}$75 $\texttt{\small abxy}$ 76 76 \end{java} 77 77 However, there are significant differences; … … 278 278 \input{sharing10.tex} 279 279 Strings @s1_crs@ and @s1_mid@ overlap at character 4, @j@ because the substrings are 3,2 and 4,2. 280 When @s1_crs@'s size increases by 1, @s1_mid@'s starting location moves from 4 to 5, but the overlap ing character remains, changing to @'+'@.280 When @s1_crs@'s size increases by 1, @s1_mid@'s starting location moves from 4 to 5, but the overlapping character remains, changing to @'+'@. 281 281 282 282 \PAB{TODO: finish typesetting the demo} … … 298 298 299 299 In general, a lifecycle function has access to an object by location, \ie constructors and destructors receive a @this@ parameter providing an object's memory address. 300 \begin{cfa} 301 struct S { int * ip; }; 302 void ?{}( S & @this@ ) { this.ip = new(); } $\C[3in]{// default constructor}$ 303 void ?{}( S & @this@, int i ) { ?{}(this); *this.ip = i; } $\C{// initializing constructor}$ 304 void ?{}( S & @this@, S s ) { this = s; } $\C{// copy constructor}$ 305 void ^?{}( S & @this@ ) { delete( this.ip ); } $\C{// destructor}\CRT$ 306 \end{cfa} 300 307 The lifecycle implementation can then add this object to a collection at creation and remove it at destruction. 
301 308 A module providing lifecycle semantics can traverse the collection at relevant times to keep the objects ``good.'' … … 307 314 To provide this knowledge, languages differentiate between initialization and assignment to a left-hand side. 308 315 \begin{cfa} 309 Obj obj2 = obj1; // initialization, obj2 is uninitialized310 obj2 = obj1; // assignment, obj2 must be initialized for management to work316 Obj obj2 = obj1; $\C[1.5in]{// initialization, obj2 is initialized}$ 317 obj2 = obj1; $\C{// assignment, obj2 must be initialized for management to work}\CRT$ 311 318 \end{cfa} 312 319 Initialization occurs at declaration by value, parameter by argument, return temporary by function call. 313 320 Hence, it is necessary to have two kinds of constructors: by value or object. 314 321 \begin{cfa} 315 Obj obj1{ 1, 2, 3 }; // by value, management is initialized316 Obj obj2 = obj1; // by obj, management is updated322 Obj obj1{ 1, 2, 3 }; $\C[1.5in]{// by value, management is initialized}$ 323 Obj obj2 = obj1; $\C{// by obj, management is updated}\CRT$ 317 324 \end{cfa} 318 325 When no object management is required, initialization copies the right-hand value. … … 321 328 The \CFA RAII system supports lifecycle functions, except for returning a value from a function to a temporary. 322 329 For example, in \CC: 323 \begin{c fa}330 \begin{c++} 324 331 struct S {...}; 325 332 S identity( S s ) { return s; } 326 333 S s; 327 334 s = identity( s ); // S temp = identity( s ); s = temp; 328 \end{c fa}335 \end{c++} 336 the generated code for the function call creates a temporary with initialization from the function call, and then assigns the temporary to the object. 
337 This two-step approach means extra storage for the temporary and two copies to get the result into the object variable. 331 338 \CC{17} introduced return value-optimization (RVO)~\cite{RVO20} to ``avoid copying an object that a function returns as its value, including avoiding creation of a temporary object''. 332 339 \CFA uses C semantics for function return giving direct value-assignment, which eliminates unnecessary code, but skips an essential feature needed by lifetime management. … … 342 349 Then, programs written with the HL API will simply run faster. 343 350 In the meantime, performance-critical sections of applications use LL. 344 Subsequent performance experiments ~\VRef{s:PerformanceAssessment} with other string libraries has \CFA strings using the LL API.351 Subsequent performance experiments \see{\VRef{s:PerformanceAssessment}} with other string libraries have \CFA strings using the LL API. 345 352 These measurements give a fair estimate of the goal state for \CFA. 346 353 … … 356 363 This cycle of frequent cheap allocations, interspersed with infrequent expensive compactions, has obvious similarities to a general-purpose memory manager based on garbage collection (GC). 357 364 A few differences are noteworthy. 358 First, in a general purpose manager, the objects of allocation contain pointers to other such objects, making the transitive reachability of these objects be a critical property.365 First, in a general purpose manager, the allocated objects may contain pointers to other objects, making the transitive reachability of these objects a crucial property. 359 366 Here, the allocations are text, so one allocation never keeps another alive. 360 367 Second, in a general purpose manager, the handle that keeps an allocation alive is just a lean pointer. 361 368 For string values, the handle is a larger data structure. … … 368 375 369 376 \VRef[Figure]{f:memmgr-basic} shows the representation. 
370 A heap header and its text buffer, definesa sharing context.377 The heap header and text buffer define a sharing context. 371 378 Normally, one global sharing context is appropriate for an entire program; 372 exceptions are discussed in [xref TBD].379 concurrent exceptions are discussed in \VRef{s:AvoidingImplicitSharing}. 373 380 A string is a handle into the buffer and linked into a list. 374 381 The list is doubly linked for $O(1)$ insertion and removal at any location. 375 Strings are orders n the list by text-bufferaddress, where there is no overlapping, and approximately, where there is.376 The header maintains a next-allocation pointer, @alloc@, pointing to the last live allocation ofthe buffer.382 Strings are ordered in the list by string-text address, where there is no overlapping, and approximately, where there is. 383 The header maintains a next-allocation pointer, @alloc@, pointing to the last live allocation in the buffer. 377 384 No external references point into the buffer and the management procedure relocates the text allocations as needed. 378 A string handle contains an explicitstring, while its string is contiguous and not null terminated.385 A string handle references a containing string, while its string is contiguous and not null terminated. 379 386 The length sets an upper limit on the string size, but is large (4 or 8 bytes). 380 String handles can be allocated in the stack or heap, while the text buffer is large enough with good management so that only one dynamic allocation is necessary for it during program execution. 381 During this period strings can vary in size dynamically. 387 String handles can be allocated in the stack or heap, and represent the string variables in a program. 388 Normal C life-time rules apply to guarantee correctness of the string linked-list. 389 The text buffer is large enough with good management so that often only one dynamic allocation is necessary during program execution. 
390 % During this period, strings can vary in size dynamically. 382 391 383 392 When the text buffer fills, \ie the next new string allocation causes @alloc@ to point beyond the end of the buffer, the strings are compacted. … … 386 395 After compaction, if the amount of free storage is still less than the new string allocation, a larger text buffer is heap allocated, the current buffer is copied into the new buffer, and the original buffer is freed. 387 396 Note, the list of string handles is unaffected during a compaction; 388 only the string pointers are modified to new buffer locations.389 390 Object lifecycle events are the subscription-managementtriggers in such a service.397 only the string pointers in the handles are modified to new buffer locations. 398 399 Object lifecycle events are the \emph{subscription-management} triggers in such a service. 391 400 There are two fundamental string-creation routines: importing external text like a C-string or reading a string, and initialization from an existing \CFA string. 392 401 When importing, storage comes from the end of the buffer, into which the text is copied. 393 The resultant handle is inserted at the end of the handle list to maintain ordering.402 The new string handle is inserted at the end of the handle list because the new text is at the end of the buffer. 394 403 When initializing from text already in the text buffer, the new handle is a second reference into the original run of characters. 395 404 In this case, the new handle's linked-list position is after the original handle. … … 398 407 399 408 Certain string operations can result in a subset (substring) of another string. 400 The resulting handle is then place in the correct sorted position in the list, possible with a short linear search to locate the position.409 The resulting handle is then placed in the correct sorted position in the list, possibly with a short linear search to locate the position. 
401 410 For string operations resulting in a new string, that string is allocated at the end of the buffer. 402 411 For shared-edit strings, handles that originally referenced containing locations need to see the new value at the new buffer location. 403 These strings are moved to appropriate locations at the end of the list (addressed further in [xref: TBD].412 These strings are moved to appropriate locations at the end of the list \see{[xref: TBD]}. 404 413 For nonshared-edit strings, a containing string can be moved and the nonshared strings can remain in the same position. 405 String assignment words similarly to string initialization, maintain the invariant of linked list order matching buffer order. 406 407 At the level of the memory manager, these modifications can always be explained as assignments; for example, an append is an assignment into the empty substring at the end. 408 409 While favourable conditions allow for in-place editing, the general case requires a fresh buffer run. 410 For example, if the new value does not fit in the old place, or if other handles are still using the old value, then the new value will use a fresh buffer run. 411 412 where there is room for the resulting value in the original buffer location, and where all handles referring to the original buffer location should see the new value, 413 414 always boiled down to assignment and appendment. 415 Assignment has special cases that happen in-place, but in the general case, it is implemented as a sequence of appends onto a fresh allocation at the end of the buffer. 416 (The sequence has multiple steps when the assignment target is a substring: old before, new middle, old after.) 417 Similarly, an append request can be serviced in-place when there is room, or as a pair of appends. 414 String assignment works similarly to string initialization, maintaining the invariant of linked-list order matching buffer order. 
415 416 At the level of the memory manager, these modifications can always be explained as assignments and appendment; 417 for example, an append is an assignment into the empty substring at the end of the buffer. 418 Favourable conditions allow for in-place editing: where there is room for the resulting value in the original buffer location, and where all handles referring to the original buffer location see the new value. 419 However, the general case requires a new buffer allocation: where the new value does not fit in the old place, or if other handles are still using the old value. 418 420 419 421 420 422 \subsection{Sharing implementation} 421 423 422 The \CFA string module has two manners in which several string handles can share an underlying run of characters. 423 424 The first type of sharing is user-requested, following the [xref Logical Overlap]. Here, the user requests, explicitly, that both handles be views of the same logical, modifiable string. This state is typically produced by the substring operation. In a typical substring call, the source string-handle is referencing an entire string, and the resulting, newly made, string handle is referencing a portion of the original. In this state, a subsequent modification made by either is visible in both. 425 426 The second type of sharing happens when the system implicitly delays the physical execution of a logical \emph{copy} operation, as part of its copy-on-write optimization. This state is typically produced by constructing a new string, using an original string as its initialization source. In this state, a subsequent modification done on one handle triggers the deferred copy action, leaving the handles referencing different runs within the buffer, holding distinct values. 427 428 A further abstraction, in the string module's implementation, helps distinguish the two senses of sharing. 
A share-edit set (SES) is an equivalence class over string handles, being the reflexive, symmetric and transitive closure of the relationship of one being constructed from the other, with the ``share edits'' opt-in given. It is represented by a second linked list among the handles. A string that shares edits with no other is in a SES by itself. Inside a SES, a logical modification of one substring portion may change the logical value in another, depending on whether the two actually overlap. Conversely, no logical value change can flow outside of a SES. Even if a modification on one string handle does not reveal itself \emph{logically} to anther handle in the same SES (because they don't overlap), if the modification is length-changing, completing the modification requires visiting the second handle to adjust its location in the sliding text. 424 The \CFA string module has two mechanisms to handle the case when string handles share a string of text. 425 426 The first type of sharing is the user requests both string handles be views of the same logical, modifiable string. 427 This state is typically produced by the substring operation. 428 \begin{cfa} 429 string s = "abcde"; 430 string s1 = s( 1, 2 )@`share@; $\C[2.25in]{// explicit sharing}$ 431 s[1] = 'x'; $\C{// change s and s1}\CRT$ 432 sout | s | s1; 433 $\texttt{\small axcde xc}$ 434 \end{cfa} 435 In a typical substring call, the source string-handle is referencing an entire string, and the resulting, newly made, string handle is referencing a portion of the original. 436 In this state, a subsequent modification made by either is visible in both. 437 438 The second type of sharing happens when the system implicitly delays the physical execution of a logical \emph{copy} operation, as part of its copy-on-write optimization. 439 This state is typically produced by constructing a new string, using an original string as its initialization source. 
440 \begin{cfa} 441 string s = "abcde"; 442 string s1 = s( 1, 2 )@@; $\C[2.25in]{// no sharing}$ 443 s[1] = 'x'; $\C{// copy-on-write s1}\CRT$ 444 sout | s | s1; 445 $\texttt{\small axcde bc}$ 446 \end{cfa} 447 In this state, a subsequent modification done on one handle triggers the deferred copy action, leaving the handles referencing different text within the buffer, holding distinct values. 448 449 A further abstraction, in the string module's implementation, helps distinguish the two senses of sharing. 450 A share-edit set (SES) is an equivalence class over string handles, being the reflexive, symmetric and transitive closure of the relationship of one string being constructed from another, with the ``share'' opt-in given. 451 The SES is represented by a second linked list among the handles. 452 A string that shares edits with no other is in a SES by itself. 453 Inside a SES, a logical modification of one substring portion may change the logical value in another, depending on whether the two actually overlap. 454 Conversely, no logical value change can flow outside of a SES. 455 Even if a modification on one string handle does not reveal itself \emph{logically} to another handle in the same SES (because they do not overlap), if the modification is length-changing, completing the modification requires visiting the second handle to adjust its location in the sliding text. 429 456 430 457 431 458 \subsection{Avoiding implicit sharing} 432 433 There are tradeoffs associated with the copy-on-write mechanism. Several qualitative matters are detailed in the [xref: Performance Assessment] section and the qualitative issue of multi-threaded support is introduced here. The \CFA sting library provides a switch to disable the sharing mechanism for situations where it is inappropriate. 
434 435 Because of the inter-linked string handles, any participant managing one string is also managing, directly, the neighbouring strings, and from there, a data structure of the ``set of all strings.'' This data structure is intended for sequential access. A negative consequence of this decision is that multiple threads using strings need to be set up so that they avoid attempting to modify (concurrently) an instance of this structure. A positive consequence is that a single-threaded program, or a program with several independent threads, can use the sharing context without an overhead from locking. 436 437 The \CFA sting library provides the @string_sharectx@ type to control an ambient sharing context for the current thread. It allows two adjustments: to opt out of sharing entirely, or to begin sharing within a private context. Either way, the chosen mode applies to the current thread, for the duration of the lifetime of the created @string_sharectx@ object, up to being suspended by child lifetimes of different contexts. The indented use is with stack-managed lifetimes, in which the established context lasts until the current function returns, and affects all functions called that don't create their own contexts. 459 \label{s:AvoidingImplicitSharing} 460 461 There are tradeoffs associated with the copy-on-write mechanism. 462 Several quantitative matters are detailed in \VRef{s:PerformanceAssessment} and the qualitative issue of multi-threaded support is introduced here. 463 The \CFA string library provides a switch to disable the sharing mechanism for situations where it is inappropriate. 464 465 Because of the inter-linked string handles, any participant managing one string is also managing, directly, the neighbouring strings, and from there, a data structure of the ``set of all strings.'' This data structure is intended for sequential access. 
466 A negative consequence of this decision is that multiple threads using strings need to be set up so that they avoid attempting to modify (concurrently) an instance of this structure. 467 A positive consequence is that a single-threaded program, or a program with several independent threads, can use the sharing context without locking overhead. 468 469 When the string library is running with sharing disabled, it runs without implicit thread-safety challenges, which is the same as the \CC STL, and with performance goals similar to the STL. 470 Running with sharing disabled can be thought of as an STL-emulation mode. 471 Hence, concurrent users of string objects must still bring their own mutual exclusion, but the string library does not add any cross-thread uses that are not apparent in a user's code. 472 473 \PAB{I could not get this to do anything.} 474 The \CFA string library does provide a @string_sharectx@ type to control an ambient sharing context for the current thread. 475 It allows two adjustments: to opt out of sharing entirely or to begin sharing within a private context. 476 Either way, the chosen mode applies only to the current thread, for the duration of the lifetime of the created @string_sharectx@ object, up to being suspended by child lifetimes of different contexts. 477 The intended use is with stack-managed lifetimes, in which the established context lasts until the current function returns, and affects all functions called that do not create their own contexts. 438 478 \lstinputlisting[language=CFA, firstline=20, lastline=34]{sharectx.run.cfa} 439 In this example, the single-letter functions are called in alphabetic order. The functions @a@ and @d@ share string character ranges within themselves, but not with each other. The functions @b@, @c@ and @e@ never share anything. 479 In this example, the single-letter functions are called in alphabetic order. 
480 The functions @a@ and @d@ share string character ranges within themselves, but not with each other. 481 The functions @b@, @c@ and @e@ never share anything. 440 482 441 483 [ TODO: true up with ``is thread local'' (implement that and expand this discussion to give a concurrent example, or adjust this wording) ] 442 443 When the string library is running with sharing disabled, it runs without implicit thread-safety challenges (which same as the STL) and with performance goals similar to the STL's. This thread-safety quality means concurrent users of one string object must still bring their own mutual exclusion, but the string library will not add any cross thread uses that were not apparent in the user's code.444 445 Running with sharing disabled can be thought of as STL-emulation mode.446 447 484 448 485 … … 457 494 \label{s:PerformanceAssessment} 458 495 459 I assessed the \CFA string library's speed and memory usage. I present these results in even equivalent cases, due to either micro-optimizations foregone, or fundamental costs of the added functionality. They also show the benefits and tradeoffs, as >100\% effects, of switching to \CFA, with the tradeoff points quantified. The final test shows the overall win of the \CFA text-sharing mechanism. It exercises several operations together, showing \CFA enabling clean user code to achieve performance that STL requires less-clean user code to achieve. 460 461 To discuss: general goal of ... while STL makes you think about memory management, all the time, and if you do your performance can be great ... \CFA sacrifices this advantage modestly in exchange for big wins when you're not thinking about memory management. [Does this position cover all of it?] 496 I assessed the \CFA string library's speed and memory usage against strings in \CC STL. 497 The results are presented in even equivalent cases, due to either micro-optimizations foregone, or fundamental costs of the added functionality. 
498 They also show the benefits and tradeoffs, as >100\% effects, of switching to \CFA, with the tradeoff points quantified. 499 The final test shows the overall win of the \CFA text-sharing mechanism. 500 It exercises several operations together, showing \CFA enabling clean user code to achieve performance that STL requires less-clean user code to achieve. 501 502 To discuss: general goal of ... 503 while STL makes you think about memory management, all the time, and if you do your performance can be great ... 504 \CFA sacrifices this advantage modestly in exchange for big wins when you're not thinking about memory management. 505 [Does this position cover all of it?] 462 506 463 507 To discuss: revisit HL v LL APIs … … 465 509 To discuss: revisit no-sharing as STL emulation modes 466 510 467 These tests use randomly generated text fragments of varying lengths. A collection of such fragments is a \emph{corpus}. The mean length of a fragment from corpus is a typical explanatory variable. Such a length is used in one of three modes: 511 512 \subsection{Methodology} 513 514 These tests use randomly generated text fragments of varying lengths. 515 A collection of such fragments is a \emph{corpus}. 516 The mean length of a fragment from a corpus is a typical explanatory variable. 517 Such a length is used in one of three modes: 468 518 \begin{description} 469 \item [Fixed-size] means all string fragments are of the stated size 470 \item [Varying from 1] means string lengths are drawn from a geometric distribution with the stated mean, and all lengths occur 471 \item [Varying from 16] means string lengths are drawn from a geometric distribution with the stated mean, but only lengths 16 and above occur; thus, the stated mean will beabove 16.519 \item [Fixed-size] means all string fragments are of the stated size. 520 \item [Varying from 1] means string lengths are drawn from a geometric distribution with the stated mean, and all lengths occur. 
521 \item [Varying from 16] means string lengths are drawn from a geometric distribution with the stated mean, but only lengths 16 and above occur; thus, the stated mean is above 16. 472 522 \end{description} 473 The geometric distribution implies that lengths much longer than the mean occur frequently. The special treatment of length 16 deals with comparison to STL, given that STL has short-string optimization (see [TODO: write and cross-ref future-work SSO]), currently not implemented in \CFA. When success notwithstanding SSO is illustrated, a fixed-size or from-16 distribution ensures that extra-optimized cases are not part of the mix on the STL side. In all experiments that use a corpus, its text is generated and loaded into the SUT before the timed phase begins. 523 The geometric distribution implies that lengths much longer than the mean occur frequently. 524 The special treatment of length 16 deals with comparison to STL, given that STL has short-string optimization (see [TODO: write and cross-ref future-work SSO]), currently not implemented in \CFA. 525 When success, notwithstanding SSO, is illustrated, a fixed-size or from-16 distribution ensures that extra-optimized cases are not part of the mix on the STL side. 526 In all experiments that use a corpus, its text is generated and loaded into the SUT before the timed phase begins. 474 527 475 528 To discuss: vocabulary for reused case variables … … 482 535 483 536 484 \subsubsection{Test: Append} 485 486 This test measures the speed of appending fragments of text onto a growing string. Its subcases include both \CFA being similar to STL, and their designs offering a tradeoff. 487 488 One experimental variable is the user's operation being @a = a + b@ vs. @a += b@. 
While experienced programmers expect the latter to be ``what you obviously should do,'' controlling the penalty of the former both helps the API be accessible to beginners and also helps offer confidence that when a user tries to compose operations, the forms that are most natural to the user's composition are viable. 489 490 Another experimental variable is whether the user's logical allocation is fresh vs reused. Here, \emph{reusing a logical allocation}, means that the program variable, into which the user is concatenating, previously held a long string:\\ 491 \begin{tabular}{ll} 492 Logical allocation fresh & Logical allocation reused \\ 493 & @ string x; @ \\ 494 @ for( ... ) { @ & @ for( ... ) { @ \\ 495 @ string x; @ & @ x = ""; @ \\ 496 @ for( ... ) @ & @ for( ... ) @ \\ 497 @ x += ... @ & @ x += ... @ \\ 498 @ } @ & @ } @ 499 \end{tabular}\\ 500 These benchmark drivers have an outer loop for ``until a sample-worthy amount of execution has happened'' and an inner loop for ``build up the desired-length string.'' It is sensible to doubt that a user should have to care about this difference, yet the STL performs differently in these cases. Concretely, both cases incur the cost of copying characters into the target string, but only the allocation-fresh case incurs a further reallocation cost, which is generally paid at points of doubling the length. For the STL, this cost includes obtaining a fresh buffer from the memory allocator and copying older characters into the new buffer, while \CFA-sharing hides such a cost entirely. The reuse-vs-fresh distinction is only relevant in the current \emph{append} tests. 501 502 The \emph{append} tests use the varying-from-1 corpus construction; that is they do not assume away the STL's advantage from small-string optimization. 
503 504 To discuss: any other case variables introduced in the performance intro 537 \subsection{Test: Append} 538 539 This test measures the speed of appending fragments of text onto a growing string. 540 Its subcases include both \CFA being similar to STL and their designs offering a tradeoff. 541 542 One experimental variable is logically equivalent operations such as @a = a + b@ \vs @a += b@. 543 For numeric types, the generated code is equivalent, giving identical performance. 544 However, for string types there can be a significant difference in performance, especially if this code appears in a loop iterating a large number of times. 545 This difference might not be intuitive to beginners. 546 547 Another experimental variable is whether the user's logical allocation is fresh \vs reused. 548 Here, \emph{reusing a logical allocation}, means that the program variable, into which the user is concatenating, previously held a long string:\\ 549 \begin{cquote} 550 \setlength{\tabcolsep}{20pt} 551 \begin{tabular}{@{}ll@{}} 552 \begin{cfa} 553 554 for ( ... ) { 555 @string x;@ // fresh 556 for ( ... ) 557 x += ... 558 } 559 \end{cfa} 560 & 561 \begin{cfa} 562 string x; 563 for ( ... ) { 564 @x = "";@ // reused 565 for ( ... ) 566 x += ... 567 } 568 \end{cfa} 569 \end{tabular} 570 \end{cquote} 571 These benchmark drivers have an outer loop for ``until a sample-worthy amount of execution has happened'' and an inner loop for ``build up the desired-length string.'' 572 In general, a user should not have to care about this difference, yet the STL performs differently in these cases. 573 Furthermore, if a routine takes a string by reference, it cannot use the fresh approach. 574 Concretely, both cases incur the cost of copying characters into the target string, but only the allocation-fresh case incurs a further reallocation cost, which is generally paid at points of doubling the length. 
575 For the STL, this cost includes obtaining a fresh buffer from the memory allocator and copying older characters into the new buffer, while \CFA-sharing hides such a cost entirely. 576 The fresh \vs reuse distinction is only relevant in the \emph{append} tests. 577 578 The \emph{append} tests use the varying-from-1 corpus construction, \ie they do not assume away the STL's advantage for small-string optimization. 579 \PAB{To discuss: any other case variables introduced in the performance intro} 580 Figure \ref{fig:string-graph-peq-cppemu} shows this behaviour, by the STL and by \CFA in STL emulation mode. 581 \CFA reproduces STL's performance, up to a 15\% penalty averaged over the cases shown, diminishing with larger strings, and 50\% in the worst case. 582 This penalty characterizes the amount of implementation fine-tuning done with STL and not done with \CFA in its present state. 583 The larger inherent penalty, for a user mismanaging reuse, is 40\% averaged over the cases shown (minimally 24\%), shows up consistently between the STL and \CFA implementations, and increases with larger strings. 584 585 \PAB{Most of your graphs are unreadable. gnuplot is a good tool for generating high quality graphs.} 505 586 506 587 \begin{figure} 507 508 509 588 \includegraphics[width=\textwidth]{string-graph-peq-cppemu.png} 589 \caption{Average time per iteration with one \lstinline{x += y} invocation, comparing \CFA with STL implementations (given \CFA running in STL emulation mode), and comparing the ``fresh'' with ``reused'' reset styles, at various string sizes.} 590 \label{fig:string-graph-peq-cppemu} 510 591 \end{figure} 
\begin{figure}
\includegraphics[width=\textwidth]{string-graph-peq-sharing.png}
\caption{Average time per iteration with one \lstinline{x += y} invocation, comparing \CFA (having implicit sharing activated) with STL, and comparing the ``fresh'' with ``reused'' reset styles, at various string sizes.}
\label{fig:string-graph-peq-sharing}
\end{figure}

In sharing mode, \CFA makes the fresh/reuse difference disappear, as shown in Figure \ref{fig:string-graph-peq-sharing}.
At append lengths 5 and above, \CFA not only splits the two baseline STL cases, but its slowdown of 16\% over STL with user-managed reuse is close to the \CFA-v-STL implementation difference seen with \CFA in STL-emulation mode.

\begin{figure}
\includegraphics[width=\textwidth]{string-graph-pta-sharing.png}
\caption{Average time per iteration with one \lstinline{x = x + y} invocation (new, purple bands), comparing \CFA (having implicit sharing activated) with STL.
For context, the results from Figure \ref{fig:string-graph-peq-sharing} are repeated as the bottom bands.
While not a design goal, and not graphed out, \CFA in STL-emulation mode outperformed STL in this case; user-managed allocation reuse did not affect any of the implementations in this case.}
\label{fig:string-graph-pta-sharing}
\end{figure}

When the user takes a further step beyond the STL's optimal zone, by running @x = x + y@, as in Figure \ref{fig:string-graph-pta-sharing}, the STL's penalty is above $15 \times$ while \CFA's (with sharing) is under $2 \times$, averaged across the cases shown here.
Moreover, the STL's gap increases with string size, while \CFA's converges.

\subsubsection{Test: Pass argument}

\PAB{To have introduced: the STL string library forces users to think about memory management when communicating values across a function call.}

The STL charges a prohibitive penalty for passing a string by value.
With implicit sharing active, \CFA treats this operation as normal and supported.
This test illustrates a main advantage of the \CFA sharing algorithm.
It also has a case in which the STL's small-string optimization provides a successful mitigation.

\begin{figure}
\includegraphics[width=\textwidth]{string-graph-pbv.png}
\caption{Average time per iteration with one call to a function that takes a by-value string argument, comparing \CFA (having implicit sharing activated) with STL.
(a) With \emph{Varying-from-1} corpus construction, in which the STL-only benefit of small-string optimization occurs, in varying degrees, at all string sizes.
(b) With \emph{Fixed-size} corpus construction, in which this benefit applies exactly to strings with length below 16.
[TODO: show version (b)]}
\label{fig:string-graph-pbv}
\end{figure}

Figure \ref{fig:string-graph-pbv} shows the costs for calling a function that receives a string argument by value.
The STL's performance worsens as string length increases, while \CFA has the same performance at all sizes.

The \CFA cost to pass is nontrivial.
The contributor is adding and removing the callee's string handle from the global list.
This cost is $1.5 \times$ to $2 \times$ over the STL's when small-string optimization applies, though this cost should be avoidable in the same case, given a \CFA realization of this optimization.
At the larger sizes, when the STL has to manage storage for the string, the STL runs more than $3 \times$ slower, mainly due to time spent in the general-purpose memory allocator.


\subsubsection{Test: Allocate}

This test directly compares the allocation scheme of the \CFA string (with sharing) against that of the STL string.
It treats the \CFA scheme as a form of garbage collection and the STL scheme as an application of malloc-free.
The test shows that \CFA achieves faster speed at a cost in memory usage.

A garbage collector, afforded the freedom of managed memory, often runs faster than malloc-free (in an amortized analysis, even though it must occasionally stop to collect) because it is able to use its collection time to move objects.
(In the case of the mini-allocator powering the \CFA string library, objects are runs of text.)
Moving objects lets fresh allocations consume from a large contiguous store of available memory; the ``bump pointer'' book-keeping for such a scheme is very light.
A malloc-free implementation without the freedom to move objects must, in the general case, allocate in the spaces between existing objects; doing so entails the heavier book-keeping to navigate and maintain a linked structure.

A garbage collector keeps allocations around for longer than the using program can reach them.
By contrast, a program using malloc-free (correctly) releases allocations exactly when they are no longer reachable.
Therefore, the same harness will use more memory while running under garbage collection.
A garbage collector can minimize the memory overhead by searching for these dead allocations aggressively, that is, by collecting more often.
Tuned in this way, it spends a lot of time collecting, easily so much as to overwhelm its speed advantage from bump-pointer allocation.
If it is tuned to collect rarely, then it leaves a lot of garbage allocated (waiting to be collected) but gains the advantage of little time spent doing collection.
[TODO: find citations for the above knowledge]

The speed-for-memory tradeoff is, therefore, standard for comparisons like \CFA--STL string allocations.
The test verifies that it is so and quantifies the returns available.

These tests manipulate a tuning knob that controls how much extra space to use.
Specific values of this knob are not user-visible and are not presented in the results here.
Instead, its two effects (amount of space used and time per operation) are shown.
The independent variable is the liveness target, which is the fraction of the text buffer that is in use at the end of a collection.
The allocator expands its text buffer during a collection if the actual live fraction exceeds this target.

This experiment's driver allocates strings by constructing a string handle as a local variable and then looping over recursive calls.
The time measurement is of nanoseconds per such allocating call.
The arrangement of recursive calls and their fan-out (iterations per recursion level) makes some of the strings long-lived and some of them short-lived.
String lifetime (measured in number of subsequent string allocations) is ?? distributed, because each node in the call tree survives as long as its descendant calls.
The run presented in this section used a call depth of 1000 and a fan-out of 1.006, which means that approximately one call in 167 makes two recursive calls, while the rest make one.
This sizing was chosen to keep the amount of consumed memory within the machine's last-level cache.

\begin{figure}
\includegraphics[width=\textwidth]{string-graph-allocn.png}
\caption{Space and time performance, under varying fraction-live targets, for the five string lengths shown, with \emph{Fixed-size} corpus construction.
[MISSING] The identified clusters are for the default fraction-live target, which is 30\%.
MISSING: STL results, typically just below the 0.5--0.9 \CFA segment.
All runs keep an average of 836 strings live, and the median string lifetime is ?? allocations.}
\label{fig:string-graph-allocn}
\end{figure}

Figure \ref{fig:string-graph-allocn} shows the results of this experiment.
At all string sizes, varying the liveness threshold offers speed-for-space tradeoffs relative to the STL.
At the default liveness threshold, all measured string sizes see a ??\%--??\% speedup for a ??\%--??\% increase in memory footprint.


\subsubsection{Test: Normalize}

This test is more applied than the earlier ones.
It combines the effects of several operations.
It also demonstrates a case where the \CFA API allows user code to perform well, written without overt memory management, while achieving similar performance in the STL requires adding memory-management complexity.

\PAB{To motivate: edits being rare.}

The program performs a specialized find-replace operation on a large body of text.
In the program under test, the replacement just erases a magic character.
But in the larger software problem represented, the rewrite logic belongs to a module that was originally intended to operate on simple, modest-length strings.
The challenge is to apply this packaged function across chunks taken from the large body.
Using the \CFA string library, the most natural way to write the helper module's function also works well in the adapted context.
Using the STL string, the most natural ways to write the helper module's function, given its requirements in isolation, slow down when it is driven in the adapted context.

\begin{lstlisting}
void processItem( string & item ) {
	// find issues in item and fix them
}
\end{lstlisting}