Timestamp:
Mar 17, 2025, 10:52:58 AM (3 days ago)
Author:
Peter A. Buhr <pabuhr@…>
Branches:
master
Children:
71624f8
Parents:
8617ee90
Message:

continue proofreading string chapter

File:
1 edited

Legend:

Unmodified
Added
Removed
  • TabularUnified doc/theses/mike_brooks_MMath/string.tex

    r8617ee90 r8f250e0  
    5757s1.replace( 2, 3, "xy" );  $\C[2.25in]{// replace by position (zero origin) and length, mutable}\CRT$
    5858cout << s1 << endl;
    59 $\texttt{abxy}$
     59$\texttt{\small abxy}$
    6060\end{cfa}
    6161while Java allocates and returns a new string with the result, leaving the receiver unmodified.
     
    6565String r = s.replace( "cde", "xy" );  $\C[2.25in]{// replace by text, immutable}$
    6666System.out.println( s + ' ' + r );
    67 $\texttt{abcde abxy}$
     67$\texttt{\small abcde abxy}$
    6868\end{java}
    6969% Generally, Java's @String@ type is immutable.
     
    7373sb.replace( 2, 5, "xy" );  $\C[2.25in]{// replace by position, mutable}\CRT$
    7474System.out.println( sb );
    75 $\texttt{abxy}$
     75$\texttt{\small abxy}$
    7676\end{java}
    7777However, there are significant differences;
     
    278278\input{sharing10.tex}
    279279Strings @s1_crs@ and @s1_mid@ overlap at character 4, @j@, because the substrings are 3,2 and 4,2.
    280 When @s1_crs@'s size increases by 1, @s1_mid@'s starting location moves from 4 to 5, but the overlaping character remains, changing to @'+'@.
     280When @s1_crs@'s size increases by 1, @s1_mid@'s starting location moves from 4 to 5, but the overlapping character remains, changing to @'+'@.
    281281
    282282\PAB{TODO: finish typesetting the demo}
     
    298298
    299299In general, a lifecycle function has access to an object by location, \ie constructors and destructors receive a @this@ parameter providing an object's memory address.
     300\begin{cfa}
     301struct S { int * ip; };
     302void ?{}( S & @this@ ) { this.ip = new(); } $\C[3in]{// default constructor}$
     303void ?{}( S & @this@, int i ) { ?{}(this); *this.ip = i; } $\C{// initializing constructor}$
     304void ?{}( S & @this@, S s ) { this = s; } $\C{// copy constructor}$
     305void ^?{}( S & @this@ ) { delete( this.ip ); } $\C{// destructor}\CRT$
     306\end{cfa}
    300307The lifecycle implementation can then add this object to a collection at creation and remove it at destruction.
    301308A module providing lifecycle semantics can traverse the collection at relevant times to keep the objects ``good.''
     
    307314To provide this knowledge, languages differentiate between initialization and assignment to a left-hand side.
    308315\begin{cfa}
    309 Obj obj2 = obj1;  // initialization, obj2 is uninitialized
    310 obj2 = obj1;        // assignment, obj2 must be initialized for management to work
     316Obj obj2 = obj1;  $\C[1.5in]{// initialization, obj2 is initialized}$
     317obj2 = obj1;      $\C{// assignment, obj2 must be initialized for management to work}\CRT$
    311318\end{cfa}
    312319Initialization occurs at declaration by value, parameter by argument, return temporary by function call.
    313320Hence, it is necessary to have two kinds of constructors: by value or object.
    314321\begin{cfa}
    315 Obj obj1{ 1, 2, 3 };  // by value, management is initialized
    316 Obj obj2 = obj1;     // by obj, management is updated
     322Obj obj1{ 1, 2, 3 }; $\C[1.5in]{// by value, management is initialized}$
     323Obj obj2 = obj1;     $\C{// by obj, management is updated}\CRT$
    317324\end{cfa}
    318325When no object management is required, initialization copies the right-hand value.
     
    321328The \CFA RAII system supports lifecycle functions, except for returning a value from a function to a temporary.
    322329For example, in \CC:
    323 \begin{cfa}
     330\begin{c++}
    324331struct S {...};
    325332S identity( S s ) { return s; }
    326333S s;
    327334s = identity( s ); // S temp = identity( s ); s = temp;
    328 \end{cfa}
    329 the generated code for the function call created a temporary with initialization from the function call, and then assigns the temporary to the receiver.
    330 This two step approach means extra storage for the temporary and two copies to get the result into the receiver variable.
     335\end{c++}
     336the generated code for the function call created a temporary with initialization from the function call, and then assigns the temporary to the object.
     337This two step approach means extra storage for the temporary and two copies to get the result into the object variable.
    331338\CC{17} introduced return value-optimization (RVO)~\cite{RVO20} to ``avoid copying an object that a function returns as its value, including avoiding creation of a temporary object''.
    332339\CFA uses C semantics for function return giving direct value-assignment, which eliminates unnecessary code, but skips an essential feature needed by lifetime management.
     
    342349Then, programs written with the HL API will simply run faster.
    343350In the meantime, performance-critical sections of applications use LL.
    344 Subsequent performance experiments~\VRef{s:PerformanceAssessment} with other string libraries has \CFA strings using the LL API.
     351Subsequent performance experiments \see{\VRef{s:PerformanceAssessment}} with other string libraries have \CFA strings using the LL API.
    345352These measurements give a fair estimate of the goal state for \CFA.
    346353
     
    356363This cycle of frequent cheap allocations, interspersed with infrequent expensive compactions, has obvious similarities to a general-purpose memory manager based on garbage collection (GC).
    357364A few differences are noteworthy.
    358 First, in a general purpose manager, the objects of allocation contain pointers to other such objects, making the transitive reachability of these objects be a critical property.
     365First, in a general purpose manager, the allocated objects may contain pointers to other objects, making the transitive reachability of these objects a crucial property.
    359366Here, the allocations are text, so one allocation never keeps another alive.
    360367Second, in a general purpose manager, the handle that keeps an allocation alive is just a lean pointer.
     
    368375
    369376\VRef[Figure]{f:memmgr-basic} shows the representation.
    370 A heap header and its text buffer, defines a sharing context.
     377The heap header and text buffer define a sharing context.
    371378Normally, one global sharing context is appropriate for an entire program;
    372 exceptions are discussed in [xref TBD].
     379concurrent exceptions are discussed in \VRef{s:AvoidingImplicitSharing}.
    373380A string is a handle into the buffer and linked into a list.
    374381The list is doubly linked for $O(1)$ insertion and removal at any location.
    375 Strings are orders n the list by text-buffer address, where there is no overlapping, and approximately, where there is.
    376 The header maintains a next-allocation pointer, @alloc@, pointing to the last live allocation of the buffer.
     382Strings are ordered in the list by string-text address: exactly, where strings do not overlap, and approximately, where they do.
     383The header maintains a next-allocation pointer, @alloc@, pointing to the last live allocation in the buffer.
    377384No external references point into the buffer and the management procedure relocates the text allocations as needed.
    378 A string handle contains an explicit string, while its string is contiguous and not null terminated.
     385A string handle references a containing string, while its string is contiguous and not null terminated.
    379386The length sets an upper limit on the string size, but is large (4 or 8 bytes).
    380 String handles can be allocated in the stack or heap, while the text buffer is large enough with good management so that only one dynamic allocation is necessary for it during program execution.
    381 During this period strings can vary in size dynamically.
     387String handles can be allocated in the stack or heap, and represent the string variables in a program.
     388Normal C life-time rules apply to guarantee correctness of the string linked-list.
     389With good management, the text buffer is large enough that often only one dynamic allocation is necessary for it during program execution.
     390% During this period, strings can vary in size dynamically.
    382391
    383392When the text buffer fills, \ie the next new string allocation causes @alloc@ to point beyond the end of the buffer, the strings are compacted.
     
    386395After compaction, if the amount of free storage is still less than the new string allocation, a larger text buffer is heap allocated, the current buffer is copied into the new buffer, and the original buffer is freed.
    387396Note, the list of string handles is unaffected during a compaction;
    388 only the string pointers are modified to new buffer locations.
    389 
    390 Object lifecycle events are the subscription-management triggers in such a service.
     397only the string pointers in the handles are modified to new buffer locations.
     398
     399Object lifecycle events are the \emph{subscription-management} triggers in such a service.
    391400There are two fundamental string-creation routines: importing external text like a C-string or reading a string, and initialization from an existing \CFA string.
    392401When importing, storage comes from the end of the buffer, into which the text is copied.
    393 The resultant handle is inserted at the end of the handle list to maintain ordering.
     402The new string handle is inserted at the end of the handle list because the new text is at the end of the buffer.
    394403When initializing from text already in the text buffer, the new handle is a second reference into the original run of characters.
    395404In this case, the new handle's linked-list position is after the original handle.
     
    398407
    399408Certain string operations can result in a subset (substring) of another string.
    400 The resulting handle is then place in the correct sorted position in the list, possible with a short linear search to locate the position.
     409The resulting handle is then placed in the correct sorted position in the list, possibly with a short linear search to locate the position.
    401410For string operations resulting in a new string, that string is allocated at the end of the buffer.
    402411For shared-edit strings, handles that originally referenced containing locations need to see the new value at the new buffer location.
    403 These strings are moved to appropriate locations at the end of the list (addressed further in [xref: TBD].
     412These strings are moved to appropriate locations at the end of the list \see{[xref: TBD]}.
    404413For nonshared-edit strings, a containing string can be moved and the nonshared strings can remain in the same position.
    405 String assignment words similarly to string initialization, maintain the invariant of linked list order matching buffer order.
    406 
    407 At the level of the memory manager, these modifications can always be explained as assignments; for example, an append is an assignment into the empty substring at the end.
    408 
    409 While favourable conditions allow for in-place editing, the general case requires a fresh buffer run.
    410 For example, if the new value does not fit in the old place, or if other handles are still using the old value, then the new value will use a fresh buffer run.
    411 
    412 where there is room for the resulting value in the original buffer location, and where all handles referring to the original buffer location should see the new value,
    413 
    414 always boiled down to assignment and appendment.
    415 Assignment has special cases that happen in-place, but in the general case, it is implemented as a sequence of appends onto a fresh allocation at the end of the buffer.
    416 (The sequence has multiple steps when the assignment target is a substring: old before, new middle, old after.)
    417 Similarly, an append request can be serviced in-place when there is room, or as a pair of appends.
     414String assignment works similarly to string initialization, maintaining the invariant of linked-list order matching buffer order.
     415
     416At the level of the memory manager, these modifications can always be explained as assignments and appends;
     417for example, an append is an assignment into the empty substring at the end of the buffer.
     418Favourable conditions allow for in-place editing: where there is room for the resulting value in the original buffer location, and where all handles referring to the original buffer location see the new value.
     419However, the general case requires a new buffer allocation: where the new value does not fit in the old place, or where other handles are still using the old value.
    418420
    419421
    420422\subsection{Sharing implementation}
    421423
    422 The \CFA string module has two manners in which several string handles can share an underlying run of characters. 
    423 
    424 The first type of sharing is user-requested, following the [xref Logical Overlap].  Here, the user requests, explicitly, that both handles be views of the same logical, modifiable string.  This state is typically produced by the substring operation.  In a typical substring call, the source string-handle is referencing an entire string, and the resulting, newly made, string handle is referencing a portion of the original.  In this state, a subsequent modification made by either is visible in both.
    425 
    426 The second type of sharing happens when the system implicitly delays the physical execution of a logical \emph{copy} operation, as part of its copy-on-write optimization.  This state is typically produced by constructing a new string, using an original string as its initialization source.  In this state, a subsequent modification done on one handle triggers the deferred copy action, leaving the handles referencing different runs within the buffer, holding distinct values.
    427 
    428 A further abstraction, in the string module's implementation, helps distinguish the two senses of sharing.  A share-edit set (SES) is an equivalence class over string handles, being the reflexive, symmetric and transitive closure of the relationship of one being constructed from the other, with the ``share edits'' opt-in given.  It is represented by a second linked list among the handles.  A string that shares edits with no other is in a SES by itself.  Inside a SES, a logical modification of one substring portion may change the logical value in another, depending on whether the two actually overlap.  Conversely, no logical value change can flow outside of a SES.  Even if a modification on one string handle does not reveal itself \emph{logically} to anther handle in the same SES (because they don't overlap), if the modification is length-changing, completing the modification requires visiting the second handle to adjust its location in the sliding text.
     424The \CFA string module has two mechanisms to handle the case when string handles share a string of text. 
     425
     426The first type of sharing is the user requests both string handles be views of the same logical, modifiable string.
     427This state is typically produced by the substring operation.
     428\begin{cfa}
     429string s = "abcde";
     430string s1 = s( 1, 2 )@`share@; $\C[2.25in]{// explicit sharing}$
     431s[1] = 'x'; $\C{// change s and s1}\CRT$
     432sout | s | s1;
     433$\texttt{\small axcde xc}$
     434\end{cfa}
     435In a typical substring call, the source string handle references an entire string, and the resulting new handle references a portion of the original.
     436In this state, a subsequent modification made by either is visible in both.
     437
     438The second type of sharing happens when the system implicitly delays the physical execution of a logical \emph{copy} operation, as part of its copy-on-write optimization.
     439This state is typically produced by constructing a new string, using an original string as its initialization source.
     440\begin{cfa}
     441string s = "abcde";
     442string s1 = s( 1, 2 )@@; $\C[2.25in]{// no sharing}$
     443s[1] = 'x'; $\C{// copy-on-write s1}\CRT$
     444sout | s | s1;
     445$\texttt{\small axcde bc}$
     446\end{cfa}
     447In this state, a subsequent modification done on one handle triggers the deferred copy action, leaving the handles referencing different text within the buffer, holding distinct values.
     448
     449A further abstraction, in the string module's implementation, helps distinguish the two senses of sharing.
     450A share-edit set (SES) is an equivalence class over string handles, being the reflexive, symmetric and transitive closure of the relationship of one string being constructed from another, with the ``share'' opt-in given.
     451The SES is represented by a second linked list among the handles.
     452A string that shares edits with no other is in a SES by itself.
     453Inside a SES, a logical modification of one substring portion may change the logical value in another, depending on whether the two actually overlap.
     454Conversely, no logical value change can flow outside of a SES.
     455Even if a modification on one string handle does not reveal itself \emph{logically} to another handle in the same SES (because they do not overlap), if the modification is length-changing, completing it requires visiting the second handle to adjust its location in the sliding text.
    429456
    430457
    431458\subsection{Avoiding implicit sharing}
    432 
    433 There are tradeoffs associated with the copy-on-write mechanism.  Several qualitative matters are detailed in the [xref: Performance Assessment] section and the qualitative issue of multi-threaded support is introduced here.  The \CFA sting library provides a switch to disable the sharing mechanism for situations where it is inappropriate.
    434 
    435 Because of the inter-linked string handles, any participant managing one string is also managing, directly, the neighbouring strings, and from there, a data structure of the ``set of all strings.''  This data structure is intended for sequential access.  A negative consequence of this decision is that multiple threads using strings need to be set up so that they avoid attempting to modify (concurrently) an instance of this structure.  A positive consequence is that a single-threaded program, or a program with several independent threads, can use the sharing context without an overhead from locking.
    436 
    437 The \CFA sting library provides the @string_sharectx@ type to control an ambient sharing context for the current thread.  It allows two adjustments: to opt out of sharing entirely, or to begin sharing within a private context.  Either way, the chosen mode applies to the current thread, for the duration of the lifetime of the created  @string_sharectx@ object, up to being suspended by child lifetimes of different contexts.  The indented use is with stack-managed lifetimes, in which the established context lasts until the current function returns, and affects all functions called that don't create their own contexts.
     459\label{s:AvoidingImplicitSharing}
     460
     461There are tradeoffs associated with the copy-on-write mechanism.
     462Several quantitative matters are detailed in \VRef{s:PerformanceAssessment}, and the qualitative issue of multi-threaded support is introduced here.
     463The \CFA string library provides a switch to disable the sharing mechanism for situations where it is inappropriate.
     464
     465Because of the inter-linked string handles, any participant managing one string is also managing, directly, the neighbouring strings, and from there, a data structure of the ``set of all strings.''  This data structure is intended for sequential access.
     466A negative consequence of this decision is that multiple threads using strings need to be set up so that they avoid attempting to modify (concurrently) an instance of this structure.
     467A positive consequence is that a single-threaded program, or a program with several independent threads, can use the sharing context without locking overhead.
     468
     469When the string library is running with sharing disabled, it runs without implicit thread-safety challenges, which is the same as the \CC STL, and with performance goals similar to the STL.
     470Running with sharing disabled can be thought of as a STL-emulation mode.
     471Hence, concurrent users of string objects must still bring their own mutual exclusion, but the string library does not add any cross-thread uses that are not apparent in a user's code.
     472
     473\PAB{I could not get this to do anything.}
     474The \CFA string library does provide a @string_sharectx@ type to control an ambient sharing context for the current thread.
     475It allows two adjustments: to opt out of sharing entirely or to begin sharing within a private context.
     476Either way, the chosen mode applies only to the current thread, for the lifetime of the created @string_sharectx@ object, up to being suspended by child lifetimes of different contexts.
     477The intended use is with stack-managed lifetimes, in which the established context lasts until the current function returns, and affects all functions called that do not create their own contexts.
    438478\lstinputlisting[language=CFA, firstline=20, lastline=34]{sharectx.run.cfa}
    439 In this example, the single-letter functions are called in alphabetic order.  The functions @a@ and @d@ share string character ranges within themselves, but not with each other.  The functions @b@, @c@ and @e@ never share anything.
     479In this example, the single-letter functions are called in alphabetic order.
     480The functions @a@ and @d@ share string character ranges within themselves, but not with each other.
     481The functions @b@, @c@ and @e@ never share anything.
    440482
    441483[ TODO: true up with ``is thread local'' (implement that and expand this discussion to give a concurrent example, or adjust this wording) ]
    442 
    443 When the string library is running with sharing disabled, it runs without implicit thread-safety challenges (which same as the STL) and with performance goals similar to the STL's.  This thread-safety quality means concurrent users of one string object must still bring their own mutual exclusion, but the string library will not add any cross thread uses that were not apparent in the user's code.
    444 
    445 Running with sharing disabled can be thought of as STL-emulation mode.
    446 
    447484
    448485
     
    457494\label{s:PerformanceAssessment}
    458495
    459 I assessed the \CFA string library's speed and memory usage.  I present these results in even equivalent cases, due to either micro-optimizations foregone, or fundamental costs of the added functionality.  They also show the benefits and tradeoffs, as >100\% effects, of switching to \CFA, with the tradeoff points quantified.  The final test shows the overall win of the \CFA text-sharing mechanism.  It exercises several operations together, showing \CFA enabling clean user code to achieve performance that STL requires less-clean user code to achieve.
    460 
    461 To discuss: general goal of ... while STL makes you think about memory management, all the time, and if you do your performance can be great ... \CFA sacrifices this advantage modestly in exchange for big wins when you're not thinking about memory management.  [Does this position cover all of it?]
     496I assessed the \CFA string library's speed and memory usage against strings in \CC STL.
     497The results show penalties even in cases that should be equivalent, due either to micro-optimizations foregone or to fundamental costs of the added functionality.
     498They also show the benefits and tradeoffs, as >100\% effects, of switching to \CFA, with the tradeoff points quantified.
     499The final test shows the overall win of the \CFA text-sharing mechanism.
     500It exercises several operations together, showing \CFA enabling clean user code to achieve performance that STL requires less-clean user code to achieve.
     501
     502To discuss: general goal of ...
     503while STL makes you think about memory management, all the time, and if you do your performance can be great ...
     504\CFA sacrifices this advantage modestly in exchange for big wins when you're not thinking about memory management.
     505[Does this position cover all of it?]
    462506
    463507To discuss: revisit HL v LL APIs
     
    465509To discuss: revisit no-sharing as STL emulation modes
    466510
    467 These tests use randomly generated text fragments of varying lengths.  A collection of such fragments is a \emph{corpus}.  The mean length of a fragment from corpus is a typical explanatory variable.  Such a length is used in one of three modes:
     511
     512\subsection{Methodology}
     513
     514These tests use randomly generated text fragments of varying lengths.
     515A collection of such fragments is a \emph{corpus}.
     516The mean length of a fragment from a corpus is a typical explanatory variable.
     517Such a length is used in one of three modes:
    468518\begin{description}
    469     \item [Fixed-size] means all string fragments are of the stated size
    470     \item [Varying from 1] means string lengths are drawn from a geometric distribution with the stated mean, and all lengths occur
    471     \item [Varying from 16] means string lengths are drawn from a geometric distribution with the stated mean, but only lengths 16 and above occur; thus, the stated mean will be above 16.
     519        \item [Fixed-size] means all string fragments are of the stated size.
     520        \item [Varying from 1] means string lengths are drawn from a geometric distribution with the stated mean, and all lengths occur.
     521        \item [Varying from 16] means string lengths are drawn from a geometric distribution with the stated mean, but only lengths 16 and above occur; thus, the stated mean is above 16.
    472522\end{description}
    473 The geometric distribution implies that lengths much longer than the mean occur frequently.  The special treatment of length 16 deals with comparison to STL, given that STL has short-string optimization (see [TODO: write and cross-ref future-work SSO]), currently not implemented in \CFA.  When success notwithstanding SSO is illustrated, a fixed-size or from-16 distribution ensures that extra-optimized cases are not part of the mix on the STL side.  In all experiments that use a corpus, its text is generated and loaded into the SUT before the timed phase begins.
     523The geometric distribution implies that lengths much longer than the mean occur frequently.
     524The special treatment of length 16 deals with comparison to STL, given that STL has short-string optimization (see [TODO: write and cross-ref future-work SSO]), currently not implemented in \CFA.
     525When success independent of SSO is being illustrated, a fixed-size or from-16 distribution ensures that extra-optimized cases are not part of the mix on the STL side.
     526In all experiments that use a corpus, its text is generated and loaded into the SUT before the timed phase begins.
    474527
    475528To discuss: vocabulary for reused case variables
     
    482535
    483536
    484 \subsubsection{Test: Append}
    485 
    486 This test measures the speed of appending fragments of text onto a growing string.  Its subcases include both \CFA being similar to STL, and their designs offering a tradeoff.
    487 
    488 One experimental variable is the user's operation being @a = a + b@ vs. @a += b@.  While experienced programmers expect the latter to be ``what you obviously should do,'' controlling the penalty of the former both helps the API be accessible to beginners and also helps offer confidence that when a user tries to compose operations, the forms that are most natural to the user's composition are viable.
    489 
    490 Another experimental variable is whether the user's logical allocation is fresh vs reused.  Here, \emph{reusing a logical allocation}, means that the program variable, into which the user is concatenating, previously held a long string:\\
    491 \begin{tabular}{ll}
    492     Logical allocation fresh                   & Logical allocation reused                  \\
    493                                                & @ string x; @                              \\
    494     @ for( ... ) { @                           & @ for( ... ) { @                           \\
    495     @     string x; @                          & @     x = ""; @                            \\
    496     @     for( ... ) @                         & @     for( ... ) @                         \\
    497     @        x += ... @                        & @        x += ... @                        \\
    498     @ } @                                      & @ } @
    499 \end{tabular}\\
    500 These benchmark drivers have an outer loop for ``until a sample-worthy amount of execution has happened'' and an inner loop for ``build up the desired-length string.''  It is sensible to doubt that a user should have to care about this difference, yet the STL performs differently in these cases.  Concretely, both cases incur the cost of copying characters into the target string, but only the allocation-fresh case incurs a further reallocation cost, which is generally paid at points of doubling the length.  For the STL, this cost includes obtaining a fresh buffer from the memory allocator and copying older characters into the new buffer, while \CFA-sharing hides such a cost entirely.  The reuse-vs-fresh distinction is only relevant in the current \emph{append} tests.
    501 
    502 The \emph{append} tests use the varying-from-1 corpus construction; that is they do not assume away the STL's advantage from small-string optimization.
    503 
    504 To discuss: any other case variables introduced in the performance intro
     537\subsection{Test: Append}
     538
     539This test measures the speed of appending fragments of text onto a growing string.
     540Its subcases include both \CFA being similar to STL and their designs offering a tradeoff.
     541
     542One experimental variable is logically equivalent operations such as @a = a + b@ \vs @a += b@.
     543For numeric types, the generated code is equivalent, giving identical performance.
     544However, for string types there can be a significant difference in performance, especially if this code appears in a loop iterating a large number of times.
     545This difference might not be intuitive to beginners.
     546
Another experimental variable is whether the user's logical allocation is fresh \vs reused.
Here, \emph{reusing a logical allocation} means that the program variable, into which the user is concatenating, previously held a long string:\\
\begin{cquote}
\setlength{\tabcolsep}{20pt}
\begin{tabular}{@{}ll@{}}
\begin{cfa}

for ( ... ) {
	@string x;@   // fresh
	for ( ... )
		x += ...
}
\end{cfa}
&
\begin{cfa}
string x;
for ( ... ) {
	@x = "";@  // reused
	for ( ... )
		x += ...
}
\end{cfa}
\end{tabular}
\end{cquote}
These benchmark drivers have an outer loop for ``until a sample-worthy amount of execution has happened'' and an inner loop for ``build up the desired-length string.''
In general, a user should not have to care about this difference, yet the STL performs differently in these cases.
Furthermore, if a routine takes a string by reference, it cannot use the fresh approach.
Concretely, both cases incur the cost of copying characters into the target string, but only the allocation-fresh case incurs a further reallocation cost, which is generally paid at points of doubling the length.
For the STL, this cost includes obtaining a fresh buffer from the memory allocator and copying older characters into the new buffer, while \CFA-sharing hides such a cost entirely.
The fresh \vs reuse distinction is only relevant in the \emph{append} tests.

The \emph{append} tests use the varying-from-1 corpus construction, \ie they do not assume away the STL's advantage for small-string optimization.
\PAB{To discuss: any other case variables introduced in the performance intro}
Figure \ref{fig:string-graph-peq-cppemu} shows this behaviour, by the STL and by \CFA in STL emulation mode.
\CFA reproduces STL's performance, up to a 15\% penalty averaged over the cases shown, diminishing with larger strings, and 50\% in the worst case.
This penalty characterizes the amount of implementation fine tuning done with STL and not done with \CFA in its present state.
The larger inherent penalty, for a user mismanaging reuse, averages 40\% over the cases shown, is at least 24\%, shows up consistently between the STL and \CFA implementations, and increases with larger strings.

\PAB{Most of your graphs are unreadable. gnuplot is a good tool for generating high quality graphs.}

\begin{figure}
	\includegraphics[width=\textwidth]{string-graph-peq-cppemu.png}
	\caption{Average time per iteration with one \lstinline{x += y} invocation, comparing \CFA with STL implementations (given \CFA running in STL emulation mode), and comparing the ``fresh'' with ``reused'' reset styles, at various string sizes.}
	\label{fig:string-graph-peq-cppemu}
\end{figure}

\begin{figure}
	\includegraphics[width=\textwidth]{string-graph-peq-sharing.png}
	\caption{Average time per iteration with one \lstinline{x += y} invocation, comparing \CFA (having implicit sharing activated) with STL, and comparing the ``fresh'' with ``reused'' reset styles, at various string sizes.}
	\label{fig:string-graph-peq-sharing}
\end{figure}

In sharing mode, \CFA makes the fresh/reuse difference disappear, as shown in Figure \ref{fig:string-graph-peq-sharing}.
At append lengths 5 and above, \CFA not only splits the two baseline STL cases, but its slowdown of 16\% over STL with user-managed reuse is close to the \CFA \vs STL implementation difference seen with \CFA in STL-emulation mode.
\begin{figure}
	\includegraphics[width=\textwidth]{string-graph-pta-sharing.png}
	\caption{Average time per iteration with one \lstinline{x = x + y} invocation (new, purple bands), comparing \CFA (having implicit sharing activated) with STL.
For context, the results from Figure \ref{fig:string-graph-peq-sharing} are repeated as the bottom bands.
While not a design goal, and not graphed out, \CFA in STL-emulation mode outperformed STL in this case; user-managed allocation reuse did not affect any of the implementations in this case.}
	\label{fig:string-graph-pta-sharing}
\end{figure}

When the user takes a further step beyond the STL's optimal zone, by running @x = x + y@, as in Figure \ref{fig:string-graph-pta-sharing}, the STL's penalty is above $15 \times$ while \CFA's (with sharing) is under $2 \times$, averaged across the cases shown here.
Moreover, the STL's gap increases with string size, while \CFA's converges.
\subsubsection{Test: Pass argument}

To have introduced: STL string library forces users to think about memory management when communicating values across a function call.

STL charges a prohibitive penalty for passing a string by value.
With implicit sharing active, \CFA treats this operation as normal and supported.
This test illustrates a main advantage of the \CFA sharing algorithm.
It also has a case in which STL's small-string optimization provides a successful mitigation.

\begin{figure}
	\includegraphics[width=\textwidth]{string-graph-pbv.png}
	\caption{Average time per iteration with one call to a function that takes a by-value string argument, comparing \CFA (having implicit sharing activated) with STL.
(a) With \emph{Varying-from-1} corpus construction, in which the STL-only benefit of small-string optimization occurs, in varying degrees, at all string sizes.
(b) With \emph{Fixed-size} corpus construction, in which this benefit applies exactly to strings with length below 16.
[TODO: show version (b)]}
	\label{fig:string-graph-pbv}
\end{figure}

Figure \ref{fig:string-graph-pbv} shows the costs for calling a function that receives a string argument by value.
STL's performance worsens as string length increases, while \CFA has the same performance at all sizes.

The \CFA cost to pass is nontrivial.
The contributor is adding and removing the callee's string handle from the global list.
This cost is $1.5 \times$ to $2 \times$ over STL's when small-string optimization applies, though this cost should be avoidable in the same case, given a \CFA realization of this optimization.
At the larger sizes, when STL has to manage storage for the string, STL runs more than $3 \times$ slower, mainly due to time spent in the general-purpose memory allocator.

\subsubsection{Test: Allocate}

This test directly compares the allocation scheme of the \CFA string with sharing against that of the STL string.
It treats the \CFA scheme as a form of garbage collection, and the STL scheme as an application of malloc-free.
The test shows that \CFA enables faster speed at a cost in memory usage.

A garbage collector, afforded the freedom of managed memory, often runs faster than malloc-free (in an amortized analysis, even though it must occasionally stop to collect) because it is able to use its collection time to move objects.
(In the case of the mini-allocator powering the \CFA string library, objects are runs of text.)
Moving objects lets fresh allocations consume from a large contiguous store of available memory; the ``bump pointer'' book-keeping for such a scheme is very light.
A malloc-free implementation without the freedom to move objects must, in the general case, allocate in the spaces between existing objects; doing so entails the heavier book-keeping to navigate and maintain a linked structure.

A garbage collector keeps allocations around for longer than the using program can reach them.
By contrast, a program using malloc-free (correctly) releases allocations exactly when they are no longer reachable.
Therefore, the same harness will use more memory while running under garbage collection.
A garbage collector can minimize the memory overhead by searching for these dead allocations aggressively, that is, by collecting more often.
Tuned in this way, it spends a lot of time collecting, easily so much as to overwhelm its speed advantage from bump-pointer allocation.
If it is tuned to collect rarely, then it leaves a lot of garbage allocated (waiting to be collected) but gains the advantage of little time spent doing collection.

[TODO: find citations for the above knowledge]

The speed-for-memory tradeoff is, therefore, standard for comparisons like \CFA--STL string allocations.
The test verifies that it is so and quantifies the returns available.

These tests manipulate a tuning knob that controls how much extra space to use.
Specific values of this knob are not user-visible and are not presented in the results here.
Instead, its two effects (amount of space used and time per operation) are shown.
The independent variable is the liveness target, which is the fraction of the text buffer that is in use at the end of a collection.
The allocator will expand its text buffer during a collection if the actual fraction live exceeds this target.

This experiment's driver allocates strings by constructing a string handle as a local variable then looping over recursive calls.
The time measurement is of nanoseconds per such allocating call.
The arrangement of recursive calls and their fan-out (iterations per recursion level) makes some of the strings long-lived and some of them short-lived.
String lifetime (measured in number of subsequent string allocations) is ?? distributed, because each node in the call tree survives as long as its descendent calls.
The run presented in this section used a call depth of 1000 and a fan-out of 1.006, which means that approximately one call in 167 makes two recursive calls, while the rest make one.
This sizing was chosen to keep the amounts of consumed memory within the machine's last-level cache.

\begin{figure}
	\includegraphics[width=\textwidth]{string-graph-allocn.png}
	\caption{Space and time performance, under varying fraction-live targets, for the five string lengths shown, with \emph{Fixed-size} corpus construction.
[MISSING] The identified clusters are for the default fraction-live target, which is 30\%.
MISSING: STL results, typically just below the 0.5--0.9 \CFA segment.
All runs keep an average of 836 strings live, and the median string lifetime is ?? allocations.}
	\label{fig:string-graph-allocn}
\end{figure}

Figure \ref{fig:string-graph-allocn} shows the results of this experiment.
At all string sizes, varying the liveness threshold offers speed-for-space tradeoffs relative to STL.
At the default liveness threshold, all measured string sizes see a ??\%--??\% speedup for a ??\%--??\% increase in memory footprint.

\subsubsection{Test: Normalize}

This test is more applied than the earlier ones.
It combines the effects of several operations.
It also demonstrates a case where the \CFA API allows user code to perform well without overt memory management, while achieving similar performance with the STL requires adding memory-management complexity.

To motivate: edits being rare

The program is doing a specialized find-replace operation on a large body of text.
In the program under test, the replacement is just to erase a magic character.
But in the larger software problem represented, the rewrite logic belongs to a module that was originally intended to operate on simple, modest-length strings.
The challenge is to apply this packaged function across chunks taken from the large body.
Using the \CFA string library, the most natural way to write the helper module's function also works well in the adapted context.
Using the STL string, the most natural ways to write the helper module's function, given its requirements in isolation, slow down when it is driven in the adapted context.

\begin{lstlisting}
void processItem( string & item ) {
	// find issues in item and fix them
}
\end{lstlisting}