Changeset f85de47 for doc/theses/mike_brooks_MMath/string.tex
- Timestamp: Apr 25, 2025, 4:03:53 AM (5 months ago)
- Branches: master
- Children: 65bd3c2
- Parents: b195498
- Files: 1 edited
doc/theses/mike_brooks_MMath/string.tex
rb195498 rf85de47 20 20 \hline 21 21 @strcpy@, @strncpy@ & @=@ & @=@ & @=@ \\ 22 @strcat@, @strncat@ & @+@ & @+@ & @+@ \\22 @strcat@, @strncat@ & @+@, @+=@ & @+@, @+=@ & @+@, @+=@ \\ 23 23 @strcmp@, @strncmp@ & @==@, @!=@, @<@, @<=@, @>@, @>=@ 24 & @equals@, @compareTo@ & @==@, @!=@, @<@, @<=@, @>@, @>=@ \\ 24 & @equals@, @compareTo@ 25 & @==@, @!=@, @<@, @<=@, @>@, @>=@ \\ 25 26 @strlen@ & @length@, @size@ & @length@ & @size@ \\ 26 27 @[ ]@ & @[ ]@ & @charAt@ & @[ ]@ \\ 27 @strncpy@ & @substr@ & @substring@ & @( )@ RHS@=@ \\28 @strncpy@ & @replace@ & @replace@ & @( )@ LHS@=@ \\28 @strncpy@ & @substr@ & @substring@ & @( )@, on RHS of @=@ \\ 29 @strncpy@ & @replace@ & @replace@ & @( )@, on LHS of @=@ \\ 29 30 @strstr@ & @find@ & @indexOf@ & @find@ \\ 30 31 @strcspn@ & @find_first_of@ & @matches@ & @include@ \\ … … 834 835 & \multirow{2}{2in} 835 836 {Fast Initialize: The target receives the characters of the source without copying the characters, resulting in an Alias or Snapshot.} 836 & Alias: The target name is withinthe source text; changes made in either variable are visible in both.837 & Alias: The target variable names the source text; changes made in either variable are visible in both. 837 838 & yes & yes & no & yes \\ 838 839 \cline{3-7} 839 840 & 840 & Snapshot: The target is an alias within the source until the target changes (copy on write).841 & Snapshot: Subsequent operations on either the source or target variable do not affect the other. 841 842 & no & no & yes & yes \\ 842 843 \hline … … 844 845 & Laxed: The target's type is anything string-like; it may have a different status concerning ownership. 845 846 & Strict: The target's type is the same as the source; both strings are equivalent peers concerning ownership. 846 & N/A& no & yes & yes \\847 & n/a & no & yes & yes \\ 847 848 \hline 848 849 Referent … … 860 861 A language gets ``Supports Helpful'' in one criterion if it can do so without sacrificing the Required achievement on all other criteria. 861 862 \item 862 The C ``string'' is actually @char []@, under the conventions that @<string.h>@ requires. Hence, there is no actual string type in C, sosymmetry does not apply.863 The C ``string'' is @char *@, under the conventions of @<string.h>@. Because this type does not manage a text allocation, symmetry does not apply. 863 864 \item 864 865 The Java @String@ class is analyzed; its @StringBuffer@ class behaves similarly to @C++@. … … 868 869 \end{figure} 869 870 870 In C, the declaration 871 \begin{cfa} 872 char s[$\,$] = "abcde"; 873 \end{cfa} 874 creates a second-class fixed-sized string-variable, as it can only be used in its lexical context, \ie it cannot be passed by value to string operations or user functions. 875 The reason is that there is no implicit mechanism to pass the string-length information to the function. 876 Therefore, only pointers to strings are first-class, and discussed further. 877 \begin{cfa} 878 (const) char * s = "abcde"; $\C[2.25in]{// alias state, n/a symmetry, variable-constrained referent}$ 871 In C, these declarations give very different things. 872 \begin{cfa} 873 char x[$\,$] = "abcde"; 874 char * y = "abcde"; 875 \end{cfa} 876 Both associate the declared name with fixed-six contiguous bytes, filled as @{'a', 'b', 'c', 'd', 'e', 0}@. 877 But @x@ gets them allocated in the active stack frame (with values filled in as control passes the declaration), while @y@ refers into the executable's read-only data section. 
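One practical consequence of this storage difference is writability, sketched below (modern \CC requires the @const@ qualifier on @y@, while C also accepts the unqualified form shown above):
\begin{c++}
char x[] = "abcde";          // stack allocation, filled with a copy of the literal's characters
const char * y = "abcde";    // points into the executable's read-only data section
x[0] = 'z';                  // fine: the stack copy is writable
// y[0] = 'z';               // rejected by const; without const (as C allows), undefined behaviour
\end{c++}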
878 With @x@ representing an allocation, it offers information in @sizeof(x)@ that @y@ does not. 879 But this extra information is second-class, as it can only be used in the immediate lexical context, \ie it cannot be passed on to string operations or user functions. 880 Only pointers to text buffers are first-class, and discussed further. 881 882 \begin{cfa} 883 char * s = "abcde"; 879 884 char * s1 = s; $\C{// alias state, n/a symmetry, variable-constrained referent}$ 880 char * s2 = s; $\C{// alias state, n/a symmetry, variable-constrained referent}$ 881 char * s3 = &s[1]; $\C{// alias state, n/a symmetry, variable-constrained referent}$ 882 char * s4 = &s3[1]; $\C{// alias state, n/a symmetry, variable-constrained referent}\CRT$ 883 printf( "%s %s %s %s %s\n", s, s1, s2, s3, s4 ); 884 $\texttt{\small abcde abcde abcde bcde cde}$ 885 char * s2 = &s[1]; $\C{// alias state, n/a symmetry, variable-constrained referent}\CRT$ 886 char * s3 = &s2[1]; $\C{// alias state, n/a symmetry, variable-constrained referent} 887 printf( "%s %s %s %s\n", s, s1, s2, s3 ); 888 $\texttt{\small abcde abcde bcde cde}$ 885 889 \end{cfa} 886 890 Note, all of these aliased strings rely on the single null termination character at the end of @s@. 887 The issue of symmetry does not apply to C strings because the value and pointer strings are essentially different types, and so this feature is scored as not applicable for C.888 With the type not managing the text storage, there is no ownership question, \ie operations on @s1@ or @s2@ never leadsto their memory becoming reusable.891 The issue of symmetry does not apply to a low-level API because no management of a text buffer is provided. 892 With the @char *@ type not managing the text storage, there is no ownership question, \ie operations on @s1@ or @s2@ never lead to their memory becoming reusable. 889 893 While @s3@ is a valid C-string that contains a proper substring of @s1@, the @s3@ technique does not constitute having a fragment referent because null termination implies the substring cannot be chosen arbitrarily; the technique works only for suffixes. 890 894 … … 893 897 string s = "abcde"; 894 898 string & s1 = s; $\C[2.25in]{// alias state, lax symmetry, variable-constrained referent}$ 895 string s2 = s; $\C{// copy(strict symmetry, variable-constrained referent)}$896 string s3 = s.substr( 1, 2 ); $\C{// copy(strict symmetry, fragment referent)}$897 string s4 = s3.substr( 1, 1 ); $\C{// copy(strict symmetry, fragment referent)}$899 string s2 = s; $\C{// no fast-initialize (strict symmetry, variable-constrained referent)}$ 900 string s3 = s.substr( 1, 2 ); $\C{// no fast-initialize (strict symmetry, fragment referent)}$ 901 string s4 = s3.substr( 1, 1 ); $\C{// no fast-initialize (strict symmetry, fragment referent)}$ 898 902 cout << s << ' ' << s1 << ' ' << s2 << ' ' << s3 << ' ' << s4 << endl; 899 903 $\texttt{\small abcde abcde abcde bc c}$ 900 904 string & s5 = s.substr(2,4); $\C{// error: cannot point to temporary}\CRT$ 901 905 \end{cfa} 902 The lax symmetry reflects how the validity of @s1@ depends on the content and lifetime of @s@. 903 It is common practice in \CC to use the @s1@-style pass by reference, with the understanding that the callee only uses the referenced string for the duration of the call, \ie no side-effect using the parameter. 904 So, when the called function is a constructor, it is typical to use an @s2@-style copy-initialization. 906 The @s1@ lax symmetry reflects how its validity of depends on the lifetime of @s@. 
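A small illustrative sketch of that lifetime dependence:
\begin{c++}
#include <string>
using namespace std;
int main() {
	string * p;
	{
		string s = "abcde";
		string & s1 = s;     // alias: valid only while s is alive
		p = &s1;
	}                        // s destroyed here; any s1-style alias now dangles
	// using *p here is undefined behaviour
}
\end{c++}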
907 It is common practice in \CC to use the @s1@-style for a by-reference function parameter. 908 Doing so assumes that the callee only uses the referenced string for the duration of the call, \ie no storing the parameter (as a reference) for later. 909 So, when the called function is a constructor, its definition typically uses an @s2@-style copy-initialization. 905 910 Exceptions to this pattern are possible, but require the programmer to assure safety where the type system does not. 906 The @s3@ initialization is constrained to copy the substring because @c_str@ always provides a null-terminated character, which may be different from the source string.907 @s 3@ assignment could be fast by referencecounting the text area and using copy-on-write, but would require an implementation upgrade.911 The @s3@ initialization must copy the substring because it must support a subsequent @c_str@ call, which provides a null-termination, generally at a different position than the source string's. 912 @s2@ assignment could be made fast, by reference-counting the text area and using copy-on-write, but would require an implementation upgrade. 908 913 909 914 In Java, @String@ also offers a high-level abstraction: … … 918 923 $\texttt{\small true false false}$ 919 924 \end{java} 920 Note, @substring@ takes a start and end position, rather than a start position and length.925 Note, Java's @substring@ takes a start and end position, rather than a start position and length. 921 926 Here, facts about Java's implicit pointers and pointer equality can over complicate the picture, and so are ignored. 922 927 Furthermore, Java's string immutability means string variables behave as simple values. … … 924 929 With @s2@, the case for fast-copy is more subtle. 925 930 Certainly, its value is not pointer-equal to @s@, implying at least a further allocation. 926 \PAB{TODO: finish the fast-copy case.} 927 Java does not meet the aliasing requirement because immutability make it impossible to modify.931 But because Java is not constrained to use a null-terminated representation, a standard-library implementation is free to refer to the source characters in-place. 932 Java does not meet the aliasing requirement because immutability makes it impossible to modify. 928 933 Java's @StringBuffer@ provides aliasing (see @replace@ example on \VPageref{p:JavaReplace}), though without supporting symmetric treatment of a fragment referent, \eg @substring@ of a @StringBuffer@ is a @String@; 929 934 as a result, @StringBuffer@ scores as \CC. 930 The easy symmetry that the Java string enjoys is aided by Java's garbage collection; Java's @s2@ is doing effectively the operation of \CC's @s3@, though without the consequence of complicating memory management. 931 \PAB{What complex storage management is going on here?} 935 The easy symmetry that the Java string enjoys is aided by Java's garbage collection; Java's @s1@ is doing effectively the operation of \CC's @s1@, though without the consequence of complicating memory management. 
936 937 % former \PAB{What complex storage management is going on here?} 938 % answer: the variable names in the sentence were out of date; fixed 932 939 933 940 Finally, in \CFA, @string@ also offers a high-level abstraction: 934 941 \begin{cfa} 935 942 string s = "abcde"; 936 string & s1 = s; $\C[2.25in]{// alias state, strictsymmetry, variable-constrained referent}$943 string & s1 = s; $\C[2.25in]{// alias state, lax symmetry, variable-constrained referent}$ 937 944 string s2 = s; $\C{// snapshot state, strict symmetry, variable-constrained referent}$ 938 string s3 = s`share; $\C{// alias state, strict symmetry, variable-constrained referent} \CRT$939 string s4 = s( 1, 2 ); 940 string s5 = s4( 1, 1 ) ;945 string s3 = s`share; $\C{// alias state, strict symmetry, variable-constrained referent}$ 946 string s4 = s( 1, 2 ); $\C{// snapshot state, strict symmetry, fragment referent}$ 947 string s5 = s4( 1, 1 )`share'; $\C{// alias state, strict symmetry, fragment referent}\CRT$ 941 948 sout | s | s1 | s2 | s3 | s4 | s5; 942 949 $\texttt{\small abcde abcde abcde abcde bc c}$ … … 944 951 % all helpful criteria of \VRef[Figure]{f:StrSemanticCompare} are satisfied. 945 952 The \CFA string manages storage, handles all assignments, including those of fragment referents with fast initialization, provides the choice between snapshot and alias semantics, and does so symmetrically with one type (which assures text validity according to the lifecycles of the string variables). 946 The intended metaphor for \CFA stings is similar to a GUI text-editor or web browser. 947 Select a consecutive block of text using the mouse generates an aliased substring in the file/dialog-box. 948 Typing into the selected area is like assigning to an aliased substring, where the highlighted text is replaced with more or less text; 949 depending on the text entered, the file/dialog-box content grows or shrinks. 950 \PAB{Need to discuss the example, as for the other languages.} 951 952 The remainder of this chapter explains how the \CFA string achieves this usage style. 953 954 955 \section{Storage Management} 956 957 This section discusses issues related to storage management of strings. 958 Specifically, it is common for strings to logically overlap partially or completely. 959 \begin{cfa} 960 string s1 = "abcdef"; 961 string s2 = s1; $\C{// complete overlap, s2 == "abcdef"}$ 962 string s3 = s1.substr( 0, 3 ); $\C{// partial overlap, s3 == "abc"}$ 963 \end{cfa} 964 This raises the question of how strings behave when an overlapping component is changed, 965 \begin{cfa} 966 s3[1] = 'w'; $\C{// what happens to s1 and s2?}$ 967 \end{cfa} 968 which is restricted by a string's mutable or immutable property. 969 For example, Java's immutable strings require copy-on-write when any overlapping string changes. 970 Note, the notion of underlying string mutability is not specified by @const@; \eg in \CC: 971 \begin{cfa} 972 const string s1 = "abc"; 973 \end{cfa} 974 the @const@ applies to the @s1@ pointer to @"abc"@, and @"abc"@ is an immutable constant that is \emph{copied} into the string's storage. 975 Hence, @s1@ is not pointing at an immutable constant, meaning its underlying string can be mutable, unless some other designation is specified, such as Java's global immutable rule. 953 The @s1@ case is the same in \CFA as in \CC. 954 But the \CFA @s2@ delivers a fast snapshot, while the \CC @s2@ does not. 955 \CFA offers the snapshot-vs-alias choice in @s2@-vs-@s3@ and @s4@-vs-@s5@. 
956 Regardless of snapshot-vs-alias and fragment-vs-whole, the target is always of the same @string@ type. 957 958 % former \PAB{Need to discuss the example, as for the other languages.} 959 % answer: done 960 976 961 977 962 978 963 \subsection{Logical overlap} 979 964 980 \CFA provides a dynamic mechanism to indicate mutable or immutable using the attribute @`share@. 981 This aliasing relationship is a sticky-property established at initialization. 965 It may be unfamiliar to combine \VRef[Figure]{f:StrSemanticCompare}'s alias state and fragment referent in one API, or at the same time. 966 This section shows the capability in action. 967 968 In summary, the metaphor of a GUI text editor is intended. 969 Selecting a consecutive block of text using the mouse defines an aliased substring within the file. 970 Typing in this state overwrites what was there before, replacing the originally selected text with more or less text. 971 But the \emph{whole file} grows or shrinks as a result, not just the selection. 972 This action models assigning to an aliased substring when the two strings overlap by total containment: one string is the selection, the other is the whole file. 973 974 Now extend the metaphor to a multi-user online editor. 975 If Alice selects a range of text at the bottom of the file, wile Bob is rewriting a paragraph at the top, Alice's selection holds onto the logical characters initially selected, unaffected by Bob making the total file grow/shrink, and unaffectd by Bob causing the start index of Alice's selction to vary. 976 This action models assigning to an aliased substring when the two strings do not overlap at all: one string is Alice's selection, the other is Bob's. 977 978 If a third office worker were also watching Alice's and Bob's actions on the whole file (a string with ``all the text'' is kept around), then two further single-user-edit cases give the semantics of the individual edits flowing into the whole. 979 But, departing from the document analogy, it is not necessary to keep a such a third string: 980 no one has to resource-manage ``the document.'' 981 When an original string, from which both the Alice- and Bob-parts came, ceases to exist, Alice and Bob are left with two independent strings. 982 They are independent because Alice and Bob have no API for growing the bounds of a string to subsume text that may once have been around it. 983 984 Edge cases, notably ``Venn-diagram overlap,'' had to have handlings chosen. 985 The intent in fleshing out these details was to achieve the above story, with a single API, while keeping the rest as simple as possible. 986 The remainder of this section shows the resulting decisions, played out at the API level. 987 988 \CFA uses the marker @`share@ as a dynamic mechanism to indicate alias (mutations shared) \vs snapshot (not quite an immutable result, but one with subsequent mutations isolated). 989 This aliasing relationship is a sticky property established at initialization. 982 990 For example, here strings @s1@ and @s1a@ are in an aliasing relationship, while @s2@ is in a copy relationship. 983 991 \input{sharing1.tex} … … 1003 1011 The following rules explain aliasing substrings that flow in the opposite direction, large to small. 
1004 1012 1005 Growth and shrinkage are natural extensions, as for the text-editor example mentioned earlier, where an empty substring is as real as an empty string.1013 Growth and shrinkage are natural extensions, as for the text-editor analogy, where an empty substring is as real as an empty string. 1006 1014 \input{sharing8.tex} 1007 1015 … … 1071 1079 1072 1080 1073 \subsection{RAII limitations} 1074 1075 Earlier work on \CFA~\cite[ch.~2]{Schluntz17} implemented object constructors and destructors for all types (basic and user defined). 1076 A constructor is a user-defined function run implicitly \emph{after} an object's declaration-storage is created, and a destructor is a user-defined function run \emph{before} an object's declaration-storage is deleted. 1077 This feature, called RAII~\cite[p.~389]{Stroustrup94}, guarantees pre-invariants for users before accessing an object and post invariants for the programming environment after an object terminates. 1078 1079 The purposes of these invariants goes beyond ensuring authentic values inside an object. 1080 Invariants can also track occurrences of managed objects in other data structures. 1081 For example, reference counting is a typical application of an invariant outside of the data values. 1082 With a reference-counting smart-pointer, the constructor and destructor \emph{of a pointer type} tracks the life cycle of the object it points to. 1083 Both \CC and \CFA RAII systems are powerful enough to achieve reference counting. 1084 1085 In general, a lifecycle function has access to an object by location, \ie constructors and destructors receive a @this@ parameter providing an object's memory address. 1086 \begin{cfa} 1087 struct S { int * ip; }; 1088 void ?{}( S & @this@ ) { this.ip = new(); } $\C[3in]{// default constructor}$ 1089 void ?{}( S & @this@, int i ) { ?{}(this); *this.ip = i; } $\C{// initializing constructor}$ 1090 void ?{}( S & @this@, S s ) { this = s; } $\C{// copy constructor}$ 1091 void ^?{}( S & @this@ ) { delete( this.ip ); } $\C{// destructor}\CRT$ 1092 \end{cfa} 1093 The lifecycle implementation can then add this object to a collection at creation and remove it at destruction. 1094 A module providing lifecycle semantics can traverse the collection at relevant times to keep the objects ``good.'' 1095 Hence, declaring such an object not only ensures ``good'' authentic values, but also an implicit subscription to a service that keeps the value ``good'' across its lifetime. 1096 1097 In many cases, the relationship between memory location and lifecycle is straightforward. 1098 For example, stack-allocated objects being used as parameters and returns, with a sender version in one stack frame and a receiver version in another, as opposed to assignment where sender and receiver are in the same stack frame. 1099 What is crucial for lifecycle management is knowing if the receiver is initialized or uninitialized, \ie an object is or is not currently associated with management. 1100 To provide this knowledge, languages differentiate between initialization and assignment to a left-hand side. 1101 \begin{cfa} 1102 Obj obj2 = obj1; $\C[1.5in]{// initialization, obj2 is initialized}$ 1103 obj2 = obj1; $\C{// assignment, obj2 must be initialized for management to work}\CRT$ 1104 \end{cfa} 1105 Initialization occurs at declaration by value, parameter by argument, return temporary by function call. 1106 Hence, it is necessary to have two kinds of constructors: by value or object. 
1107 \begin{cfa} 1108 Obj obj1{ 1, 2, 3 }; $\C[1.5in]{// by value, management is initialized}$ 1109 Obj obj2 = obj1; $\C{// by obj, management is updated}\CRT$ 1110 \end{cfa} 1111 When no object management is required, initialization copies the right-hand value. 1112 Hence, the calling convention remains uniform, where the unmanaged case uses @memcpy@ as the initialization constructor and managed uses the specified initialization constructor. 1113 1114 The \CFA RAII system supports lifecycle functions, except for returning a value from a function to a temporary. 1115 For example, in \CC: 1116 \begin{c++} 1117 struct S {...}; 1118 S identity( S s ) { return s; } 1119 S s; 1120 s = identity( s ); // S temp = identity( s ); s = temp; 1121 \end{c++} 1122 the generated code for the function call created a temporary with initialization from the function call, and then assigns the temporary to the object. 1123 This two step approach means extra storage for the temporary and two copies to get the result into the object variable. 1124 \CC{17} introduced return value-optimization (RVO)~\cite{RVO20} to ``avoid copying an object that a function returns as its value, including avoiding creation of a temporary object''. 1125 \CFA uses C semantics for function return giving direct value-assignment, which eliminates unnecessary code, but skips an essential feature needed by lifetime management. 1126 The following discusses the consequences of this semantics with respect to lifetime management of \CFA strings. 1127 1128 The present string-API contribution provides lifetime management with initialization semantics on function return. 1129 The workaround to achieve the full lifetime semantics does have a runtime performance penalty. 1130 An alternative API sacrifices return initialization semantics to recover full runtime performance. 1131 These APIs are layered, with the slower, friendlier High Level API (HL) wrapping the faster, more primitive Low Level API (LL). 1132 Both API present the same features, up to lifecycle management, with return initialization being disabled in LL and implemented with the workaround in HL. 1133 The intention is for most future code to target HL. 1134 When \CFA becomes a full compiler, it can provide return initialization with RVO optimizations. 1135 Then, programs written with the HL API will simply run faster. 1136 In the meantime, performance-critical sections of applications use LL. 1137 Subsequent performance experiments \see{\VRef{s:PerformanceAssessment}} with other string libraries has \CFA strings using the LL API. 1138 These measurement gives a fair estimate of the goal state for \CFA. 1139 1140 1141 \subsection{Memory management} 1081 1082 \section{Storage management} 1083 1084 This section discusses issues related to storage management of strings. 1085 Specifically, it is common for strings to logically overlap partially or completely. 1086 \begin{cfa} 1087 string s1 = "abcdef"; 1088 string s2 = s1; $\C{// complete overlap, s2 == "abcdef"}$ 1089 string s3 = s1.substr( 0, 3 ); $\C{// partial overlap, s3 == "abc"}$ 1090 \end{cfa} 1091 This raises the question of how strings behave when an overlapping component is changed, 1092 \begin{cfa} 1093 s3[1] = 'w'; $\C{// what happens to s1 and s2?}$ 1094 \end{cfa} 1095 which is restricted by a string's mutable or immutable property. 1096 For example, Java's immutable strings require copy-on-write when any overlapping string changes. 
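For \CC, the answer to the question above is that nothing happens to @s1@ or @s2@: copy-initialization and @substr@ both produce independent copies, as this small check illustrates:
\begin{c++}
#include <string>
#include <cassert>
using namespace std;
int main() {
	string s1 = "abcdef";
	string s2 = s1;                  // complete overlap in value, independent storage
	string s3 = s1.substr( 0, 3 );   // partial overlap in value, independent storage
	s3[1] = 'w';
	assert( s1 == "abcdef" && s2 == "abcdef" && s3 == "awc" );  // only s3 changed
}
\end{c++}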
1097 Note, the notion of underlying string mutability is not specified by @const@; \eg in \CC: 1098 \begin{cfa} 1099 const string s1 = "abc"; 1100 \end{cfa} 1101 the @const@ applies to the @s1@ pointer to @"abc"@, and @"abc"@ is an immutable constant that is \emph{copied} into the string's storage. 1102 Hence, @s1@ is not pointing at an immutable constant, meaning its underlying string can be mutable, unless some other designation is specified, such as Java's global immutable rule. 1103 1104 1105 1106 \subsection{General implementation} 1107 \label{string-general-impl} 1142 1108 1143 1109 A centrepiece of the string module is its memory manager. 1144 1110 The management scheme defines a shared buffer for string text. 1145 Allocation in this buffer is via a bump-pointer;1146 the buffer is compacted and/or relocated with growthwhen it fills.1111 Allocation in this buffer is always via a bump-pointer; 1112 the buffer is compacted and/or relocated (to grow) when it fills. 1147 1113 A string is a smart pointer into this buffer. 1148 1114 … … 1151 1117 First, in a general purpose manager, the allocated objects may contain pointers to other objects, making the transitive reachability of these objects a crucial property. 1152 1118 Here, the allocations are text, so one allocation never keeps another alive. 1153 Second, in a general purpose manager, the handle that keeps an allocation alive is just a leanpointer.1154 For strings, a fatter representation is acceptable because th ere are fewer string head pointers versus chained pointers within nodes as for linked containers.1119 Second, in a general purpose manager, the handle that keeps an allocation alive is a bare pointer. 1120 For strings, a fatter representation is acceptable because this pseudo-pointer is only used for enty into the string-heap, not for general data-sub-structure linking around the general heap. 1155 1121 1156 1122 \begin{figure} … … 1164 1130 Normally, one global sharing context is appropriate for an entire program; 1165 1131 concurrent exceptions are discussed in \VRef{s:ControllingImplicitSharing}. 1166 A string is a handle into the buffer and linked into alist.1132 A string is a handle into the buffer and node within a linked list. 1167 1133 The list is doubly linked for $O(1)$ insertion and removal at any location. 1168 Strings are order s in the list by string-text address, where there is no overlapping, and approximately, where there is.1134 Strings are ordered in the list by text start address. 1169 1135 The header maintains a next-allocation pointer, @alloc@, pointing to the last live allocation in the buffer. 1170 1136 No external references point into the buffer and the management procedure relocates the text allocations as needed. … … 1178 1144 When the text buffer fills, \ie the next new string allocation causes @alloc@ to point beyond the end of the buffer, the strings are compacted. 1179 1145 The linked handles define all live strings in the buffer, which indirectly defines the allocated and free space in the buffer. 
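In outline, the manager's two structures can be pictured as follows (a schematic sketch only; the type and field names are illustrative, not the actual \CFA definitions):
\begin{c++}
#include <cstddef>
struct handle {                    // string handle: a fat pointer into the shared buffer
	char * text;                   // start of this string's characters within the buffer
	std::size_t length;            // character count (no null terminator required)
	handle * prev, * next;         // doubly-linked list of live handles, ordered by text address
};
struct sharing_context {           // normally one per program
	char * buffer;                 // the shared text buffer
	std::size_t capacity;          // bytes currently reserved for the buffer
	char * alloc;                  // bump pointer: where the next allocation is placed
	handle * handles;              // head of the handle list
};
\end{c++}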
1180 Since the string handles are in (roughly) sorted order, the handle list can be traversed copying the first text to the start of the buffer and subsequent strings after each over.1181 After compaction, if the amount of free storage is still less than the new string allocation, a larger text buffer is heap allocated, the current buffer is copiesinto the new buffer, and the original buffer is freed.1182 Note, the list of string handles is unaffected during a compaction;1183 only the stringpointers in the handles are modified to new buffer locations.1146 Since the string handles are in sorted order, the handle list can be traversed, copying the first live text to the start of the buffer, and subsequent strings after each other. 1147 If, upon compaction, the amount of free storage would still be less than the new string allocation, a larger text buffer is heap-allocated, the current buffer is copied into the new buffer, and the original buffer is freed. 1148 Note, the list of string handles is structurally unaffected during a compaction; 1149 only the text pointers in the handles are modified to new buffer locations. 1184 1150 1185 1151 Object lifecycle events are the \emph{subscription-management} triggers in such a service. … … 1187 1153 When importing, storage comes from the end of the buffer, into which the text is copied. 1188 1154 The new string handle is inserted at the end of the handle list because the new text is at the end of the buffer. 1189 When initializing from text already in the textbuffer, the new handle is a second reference into the original run of characters.1155 When initializing from text already in the buffer, the new handle is a second reference into the original run of characters. 1190 1156 In this case, the new handle's linked-list position is after the original handle. 1191 1157 Both string initialization styles preserve the string module's internal invariant that the linked-list order matches the buffer order. 1192 1158 For string destruction, handles are removed from the list. 1193 1194 Certain string operations can results in a subset (substring) of another string. 1159 As a result, once a last handle using a run of buffer characters is destroyed, that buffer space gets excluded from the next compaction, making its character-count available in the compacted buffer. 1160 1161 Certain string operations can result in a substring of another string. 1195 1162 The resulting handle is then placed in the correct sorted position in the list, possible with a short linear search to locate the position. 1196 1163 For string operations resulting in a new string, that string is allocated at the end of the buffer. 1197 1164 For shared-edit strings, handles that originally referenced containing locations need to see the new value at the new buffer location. 1198 These strings are moved to appropriate locations at the end of the list \see{ [xref: TBD]}.1199 For nonshared-edit strings, a containing string can be moved and the nonsharedstrings can remain in the same position.1200 String assignment wor ds similarly to string initialization, maintainthe invariant of linked-list order matching buffer order.1201 1202 At the level of the memory manager, these modifications can always be explained as assignment s and appendment;1203 for example, an append is an assignment into the empty substring at the end of the buffer.1165 These strings are moved to appropriate locations at the end of the list \see{details in \VRef{sharing-impl}}. 
1166 For nonshared-edit strings, a containing string can be moved and the other strings can remain in the same position. 1167 String assignment works similarly to string initialization, maintaining the invariant of linked-list order matching buffer order. 1168 1169 At the level of the memory manager, these modifications can always be explained as assignment and appendment. 1170 For example, an append is an assignment into the empty substring at the end of the buffer. 1204 1171 Favourable conditions allow for in-place editing: where there is room for the resulting value in the original buffer location, and where all handles referring to the original buffer location see the new value. 1172 One notable example of favourable conditions occurs because the most recently written string is often the last in the buffer, after which a large amount of free space occurs. 1173 So, repeated appends often occur without copying previously accumulated characters. 1205 1174 However, the general case requires a new buffer allocation: where the new value does not fit in the old place, or if other handles are still using the old value. 1206 1175 1207 1176 1177 \subsection{RAII limitations} 1178 \label{string-raii-limit} 1179 1180 Earlier work on \CFA~\cite[ch.~2]{Schluntz17} implemented object constructors and destructors for all types (basic and user defined). 1181 A constructor is a user-defined function run implicitly \emph{after} an object's storage is allocated, and a destructor is a user-defined function run \emph{before} an object's storage is deallcated. 1182 This feature, called Resource Acquisition Is Initialization (RAII)~\cite[p.~389]{Stroustrup94}, helps guarantee invariants for users before accessing an object and for the programming environment after an object terminates. 1183 1184 The purposes of these invariants goes beyond ensuring authentic values inside an object. 1185 Invariants can also track data about managed objects in other data structures. 1186 Reference counting is an example application of an invariant outside of the immediate data value. 1187 With a reference-counting smart-pointer, the constructor and destructor \emph{of a pointer type} track the lifecycle of the object it points to. 1188 Both \CC and \CFA RAII systems are powerful enough to achieve reference counting. 1189 1190 In general, a lifecycle function has access to an object by location, \ie constructors and destructors receive a @this@ parameter providing an object's memory address. 1191 \begin{cfa} 1192 struct S { int * ip; }; 1193 void ?{}( S & @this@ ) { this.ip = new(); } $\C[3in]{// default constructor}$ 1194 void ?{}( S & @this@, int i ) { (this){}; *this.ip = i; } $\C{// initializing constructor}$ 1195 void ?{}( S & @this@, S s ) { (this){*s.ip}; } $\C{// copy constructor}$ 1196 void ^?{}( S & @this@ ) { delete( this.ip ); } $\C{// destructor}\CRT$ 1197 \end{cfa} 1198 Such basic examples use the @this@ address only to gain access to the values being managed. 1199 But the lifecycle logic can use the pointer generally, too. 1200 For example, they can add @this@ object to a collection at creation and remove it at destruction. 
1201 \begin{cfa} 1202 // header 1203 struct T { 1204 // private 1205 inline dlink(T); 1206 }; 1207 void ?{}( T & ); $\C[3in]{// default constructor}$ 1208 void ^?{}( T & ); $\C{// destructor}\CRT$ 1209 // implementation 1210 static dlist(T) @all_T@; 1211 void ?{}( T & this ) { insert_last(all_T, @this@) } 1212 void ^?{}( T & this ) { remove(this); } 1213 \end{cfa} 1214 A module providing the @T@ type can traverse @all_T@ at relevant times, to keep the objects ``good.'' 1215 Hence, declaring a @T@ not only ensures that it begins with an initially ``good'' value, but it also provides an implicit subscription to a service that keeps the value ``good'' in the future. 1216 Again, both \CFA and \CC support this usage style. 1217 1218 A third capability concerns \emph{implicitly} requested copies. 1219 When stack-allocated objects are used as parameter and return values, a sender's version exists in one stack frame and a receiver's version exists in another. 1220 In the parameter direction, the language's function-call handling must arrange for a copy-constructor call to happen\footnote{ 1221 \CC also offers move constructors and return-value optimization~\cite{RVO20}. 1222 These features help reduce unhelpful copy-constructor calls, which, for types like the example \lstinline{S}, would lead to extra memory allocations. 1223 \CFA does not currently have these features; adding similarly-intended features to \CFA is desirable. 1224 However, this section is about a problem in the realization of features that \CFA already supports. 1225 To understand the problem presented, the appropriate comparison is with classic versions of \CC that treated such copy-constructor calls as necessary.} 1226 at a time near the control transfer into the callee, with the source as the caller's (sender's) version and the target as the callee's (receiver's) version. 1227 (In the return direction, the roles are reversed and the copy-constructor call happens near the return of control.) 1228 \CC supports this capability without qualification. 1229 \CFA offers limited support here; simple examples work, but implicit copying does not combine successfully with the other RAII capabilities discussed. 1230 1231 To summarize the unsupported combinations, the relevant features are: 1232 \begin{enumerate} 1233 \item 1234 Object provider implements lifecycle functions to manage a resource outside of the object. 1235 \item 1236 Object provider implements lifecycle functions to store references back to the object, often originating from outside of it. 1237 \item 1238 Object user expects to pass (in either direction) an object by value for function calls. 1239 \end{enumerate} 1240 \CC supports all three simultaneously. \CFA does not currently support \#2 and \#3 on the same object, though \#1 works along with either one of \#2 or \#3. \CFA needs to be fixed to support all three simultaneously. 1241 1242 The reason that \CFA does not support \#2 with \#3 is a holdover from how \CFA function calls lowered to C, before \CFA got references and RAII. 1243 At that time, adhering to a principal of minimal intervention, this code could always be treated as passthrough: 1244 \begin{cfa} 1245 struct U {...}; 1246 // RAII to go here 1247 void f( U u ) { F_BODY(u) } 1248 U x; 1249 f( x ); 1250 \end{cfa} 1251 But adding custom RAII (at ``...here'') changes things. 1252 The common C++ lowering~\cite[Sec. 3.1.2.3]{cxx:raii-abi} proceeds differently than the present CFA lowering. 
1253 1254 \noindent 1255 \begin{tabular}{l|l} 1256 \begin{cfa} 1257 // C++, likely CFA to be 1258 struct U {...}; 1259 // RAII elided 1260 void f( U * __u_orig ) { 1261 U u = * __u_orig; // call copy ctor 1262 F_BODY(u) 1263 // call dtor, u 1264 } 1265 U x; // call default ctor 1266 f( & x ) ; 1267 // call dtor, x 1268 \end{cfa} 1269 & 1270 \begin{cfa} 1271 // CFA today 1272 struct U {...}; 1273 // RAII elided 1274 void f( U u ) { 1275 F_BODY(u) 1276 } 1277 U x; // call default ctor 1278 { 1279 U __u_for_f = x; // call copy ctor 1280 f( __u_for_f ); 1281 // call dtor, __u_for_f 1282 } 1283 // call dtor, x 1284 \end{cfa} 1285 \end{tabular} 1286 1287 In the CFA-today scheme, the lowered form is still using a by-value C call. 1288 C does a @memcpy@ on structs passed by value. 1289 And so, @F_BDY@ sees the bits of @__u_for_f@ occurring at an address that has never been presented to the @U@ lifecycle functions. 1290 If @U@ is trying to have a style-\#2 invariant, it shows up broken in @F_BDY@: references that are supposed to be to @u@ are actually to the different location @__u_for_f@. 1291 The \CC scheme does not have this problem because it constructs the for-@f@ copy in the correct location. 1292 1293 Yet, the \CFA-today scheme is sufficient to deliver style-\#1 invariants (in this style-\#3 use case) because this scheme still does the correct number of lifecycle calls, using correct values, at correct times. So, reference-counting or simple ownership applications get their invariants respected under call/return-by-value. 1294 1295 % [Mike is not currently seeing how distinguishing initialization from assignment is relevant] 1296 % as opposed to assignment where sender and receiver are in the same stack frame. 1297 % What is crucial for lifecycle management is knowing if the receiver is initialized or uninitialized, \ie an object is or is not currently associated with management. 1298 % To provide this knowledge, languages differentiate between initialization and assignment to a left-hand side. 1299 % \begin{cfa} 1300 % Obj obj2 = obj1; $\C[1.5in]{// initialization, obj2 is initialized}$ 1301 % obj2 = obj1; $\C{// assignment, obj2 must be initialized for management to work}\CRT$ 1302 % \end{cfa} 1303 % Initialization occurs at declaration by value, parameter by argument, return temporary by function call. 1304 % Hence, it is necessary to have two kinds of constructors: by value or object. 1305 % \begin{cfa} 1306 % Obj obj1{ 1, 2, 3 }; $\C[1.5in]{// by value, management is initialized}$ 1307 % Obj obj2 = obj1; $\C{// by obj, management is updated}\CRT$ 1308 % \end{cfa} 1309 % When no object management is required, initialization copies the right-hand value. 1310 % Hence, the calling convention remains uniform, where the unmanaged case uses @memcpy@ as the initialization constructor and managed uses the specified initialization constructor. 1311 1312 % [Mike is currently seeing RVO as orthogonal, addressed with the footnote] 1313 % to a temporary. 1314 % For example, in \CC: 1315 % \begin{c++} 1316 % struct S {...}; 1317 % S identity( S s ) { return s; } 1318 % S s; 1319 % s = identity( s ); // S temp = identity( s ); s = temp; 1320 % \end{c++} 1321 % the generated code for the function call created a temporary with initialization from the function call, and then assigns the temporary to the object. 1322 % This two step approach means extra storage for the temporary and two copies to get the result into the object variable. 
1323 % \CC{17} introduced return value-optimization (RVO)~\cite{RVO20} to ``avoid copying an object that a function returns as its value, including avoiding creation of a temporary object''. 1324 % \CFA uses C semantics for function return giving direct value-assignment, which eliminates unnecessary code, but skips an essential feature needed by lifetime management. 1325 % The following discusses the consequences of this semantics with respect to lifetime management of \CFA strings. 1326 1327 1328 The string API offers style \#3's pass-by-value in, for example, in the return of @"a" + "b"@. 1329 Its implementation uses the style-\#2 invariant of the string handles being linked to each other, helping to achieve high performance. 1330 Since these two RAII styles cannont coexist, a workaround splits the API into two layers: one that provides pass-by-value, built upon the other with inter-linked handles. 1331 The layer with pass-by-value incurs a performance penalty, while the layer without delivers the desired runtime performance. 1332 The slower, friendlier High Level API (HL, type @string@) wrapps the faster, more primitive Low Level API (LL, type @string_res@, abbreviating ``resource''). 1333 Both APIs present the same features, up to return-by-value operations being unavailable in LL and implemented via the workaround in HL. 1334 The intention is for most future code to target HL. 1335 When the RAII issue is fixed, the full HL feature set will be acheivable using the LL-style lifetime management. 1336 So then, there will be no need for two API levels; HL will be removed; LL's type will be renamed to @string@; programs written for current HL will run faster. 1337 In the meantime, performance-critical sections of applications must use LL. 1338 Subsequent performance experiments \see{\VRef{s:PerformanceAssessment}} use the LL API when comparing \CFA to other languages. 1339 This measurement gives a fair estimate of the goal state for \CFA. 1340 A separate measure of the HL overhead is also included. 1341 1342 \VRef[Section]{string-general-impl} described the goal state for \CFA. In present state, the type @string_res@ replaces its mention of @string@ as inter-linked handle. 1343 1344 To use LL, a programmer rewrites invocations that used pass-by-value APIs into invocations where the resourcing is more explicit. 1345 Many invocations are unaffected, notably including assignment and comparison. 1346 Of the capabilities listed in \VRef[Figure]{f:StrApiCompare}, only the following three cases have revisions. 1347 1348 \noindent 1349 \begin{tabular}{ll} 1350 HL & LL \\ 1351 \hline 1352 \begin{cfa} 1353 string s = "a" + "b"; 1354 \end{cfa} 1355 & 1356 \begin{cfa} 1357 string_res sr = "a"; 1358 sr += b; 1359 \end{cfa} 1360 \\ 1361 \hline 1362 \begin{cfa} 1363 string s = "abcde"; 1364 string s2 = s(2, 3); // s2 == "cde" 1365 s(2,3) = "x"; // s == "abx" && s2 == "cde" 1366 \end{cfa} 1367 & 1368 \begin{cfa} 1369 string_res sr = "abcde"; 1370 string_res sr2 = {sr, 2, 3}; // sr2 == "cde" 1371 string_res sr_mid = { sr, 2, 3, SHARE }; 1372 sr_mid = "x"; // sr == "abx" && sr2 == "cde" 1373 \end{cfa} 1374 \\ 1375 \hline 1376 \begin{cfa} 1377 string s = "abcde"; 1378 s[2] = "xxx"; // s == "abxxxde" 1379 \end{cfa} 1380 & 1381 \begin{cfa} 1382 string_res sr = "abcde"; 1383 string_res sr_mid = { sr, 2, 1, SHARE }; 1384 mid = "xxx"; // sr == "abxxxde" 1385 \end{cfa} 1386 \end{tabular} 1387 1388 The actual HL workaround is having @string@ wrap a pointer to a uniquely owned, heap-allocated @string_res@. 
This arrangement has @string@ being style-\#1 RAII, which is compatible with pass-by-value. 1389 1390 1391 1208 1392 \subsection{Sharing implementation} 1209 1210 The \CFA string module has two mechanisms to handle the case when string handles share a string of text. 1211 1212 The first type of sharing is the user requests both string handles be views of the same logical, modifiable string. 1393 \label{sharing-impl} 1394 1395 The \CFA string module has two mechanisms to handle the case when string handles share a run of text. 1396 1397 In the first type of sharing, the user requests that both string handles be views of the same logical, modifiable string. 1213 1398 This state is typically produced by the substring operation. 1214 1399 \begin{cfa} … … 1234 1419 1235 1420 A further abstraction, in the string module's implementation, helps distinguish the two senses of sharing. 1236 A share-edit set (SES) is an equivalence class over string handles, being the reflexive, symmetric and transitive closure of the relationship of one string being constructed from another, with the ``share'' opt -in given.1421 A share-edit set (SES) is an equivalence class over string handles, being the reflexive, symmetric and transitive closure of the relationship of one string being constructed from another, with the ``share'' option given. 1237 1422 The SES is represented by a second linked list among the handles. 1238 1423 A string that shares edits with no other is in a SES by itself. 1239 Inside a SES, a logical modification of one substring portion may change the logical value in another , depending on whether the two actually overlap.1424 Inside a SES, a logical modification of one substring portion may change the logical value in another substring portion, depending on whether the two actually overlap. 1240 1425 Conversely, no logical value change can flow outside of a SES. 1241 1426 Even if a modification on one string handle does not reveal itself \emph{logically} to anther handle in the same SES (because they do not overlap), if the modification is length-changing, completing the modification requires visiting the second handle to adjust its location in the sliding text. … … 1305 1490 1306 1491 I assessed the \CFA string library's speed and memory usage against strings in \CC STL. 1307 The results are presented in even equivalent cases, due to either micro-optimizations foregone, or fundamental costs of the added functionality. 1308 They also show the benefits and tradeoffs, as >100\% effects, of switching to \CFA, with the tradeoff points quantified. 1309 The final test shows the overall win of the \CFA text-sharing mechanism. 1310 It exercises several operations together, showing \CFA enabling clean user code to achieve performance that STL requires less-clean user code to achieve. 1311 1312 To discuss: general goal of ... 1313 while STL makes you think about memory management, all the time, and if you do, your performance can be great ... 1314 \CFA sacrifices this advantage modestly in exchange for big wins when you're not thinking about memory management. 1315 [Does this position cover all of it?] 1316 1317 To discuss: revisit HL v LL APIs 1318 1319 To discuss: revisit no-sharing as STL emulation modes 1492 1493 Overall, this analysis shows that adding support for the features shown earlier in the chapter comes at no substantial cost in the performance of featrues common to both APIs. 
1494 1495 Moreover, the results support the \CFA string's position as a high-level enabler of simplified text processing. 1496 STL makes its user think about memory management. 1497 When the user does, and is successful, STL's performance can be very good. 1498 But when the user fails to think through the consequences of the STL representation, performance becomes poor. 1499 The \CFA string lets the user work at the level of just putting the right text into right variables, with corresponding performance degradations reduced or eliminated. 1500 1501 % The final test shows the overall win of the \CFA text-sharing mechanism. 1502 % It exercises several operations together, showing \CFA enabling clean user code to achieve performance that STL requires less-clean user code to achieve. 1320 1503 1321 1504 1322 1505 \subsection{Methodology} 1323 1506 1324 These tests use a \emph{corpus} of strings (string content is immaterial). 1325 For varying-length strings, the mean length comes from a geometric distribution, which implies that lengths much longer than the mean occur frequently. 1326 The string sizes are: 1507 These tests use a \emph{corpus} of strings. 1508 Their lengths are important; the specific characters occurring in them are immaterial. 1509 In a result graph, a corpus's mean string length is often the independent variable shown on the X axis. 1510 1511 When a corpus contains strings of different lenghths, the lengths are drawn from a geometric distribution. 1512 Therefore, strings much longer than the mean occur nontrivially and strings slightly shorter than the mean occur most often. 1513 A corpus's string sizes are one of: 1327 1514 \begin{description} 1328 1515 \item [Fixed-size] all string lengths are of the stated size. 1329 \item [Varying from 1 to N] meansthe string lengths are drawn from the geometric distribution with a stated mean and all lengths occur.1330 \item [Varying from 16 to N] means string lengths are drawn from the geometric distribution with the stated mean, but only lengths 16 and above occur; thus, the stated mean is above 16.1516 \item [Varying 1 and up] the string lengths are drawn from the geometric distribution with a stated mean and all lengths occur. 1517 \item [Varying 16 and up] string lengths are drawn from the geometric distribution with the stated mean, but only lengths 16 and above occur; thus, the stated mean is above 16. \PAB{Is this one unused? May have just been for ``normalize.''} 1331 1518 \end{description} 1332 The means for the geometric distribution are the X-axis values in experiments.1333 The special treatment of length 16 deals with the short-string optimization (SSO) in STL @string@, currently not implemented in \CFA.1334 When an STL string can fit into a heap pointer, the optimization uses the pointerstorage to eliminate using the heap.1519 The special treatment of length 16 deals with the short-string optimization (SSO) in STL @string@, currently not implemented in \CFA, though a fine future improvement to \CFA. 1520 In the general case, an STL string handle is a pointer (to separately allocated text) and a length. 1521 But when the text is shorter than this representation, the optimization repurposes the handle's storage to eliminate using the heap. 
1335 1522 \begin{c++} 1336 1523 class string { 1337 1524 union { 1338 struct { $\C{// long string, string storagein heap}$1525 struct { $\C{// long string, text in heap}$ 1339 1526 size_t size; 1340 1527 char * strptr; 1341 1528 } lstr; 1342 char sstr[sizeof(lstr)]; $\C{// short string 8-16 characters,in situ}$1529 char sstr[sizeof(lstr)]; $\C{// short string <16 characters, text in situ}$ 1343 1530 }; 1344 bool tag; $\C{// string kind, short or long}$ 1345 ... $\C{// other storage}$ 1531 $\C{// tagging for kind (short or long) elided}$ 1346 1532 }; 1347 1533 \end{c++} 1348 1534 1349 When success is illustrated, notwithstanding SSO, a fixed-size or from-16 distribution ensures that extra-optimized cases are not part of the mix on the STL side. 1535 A fixed-size or from-16 distribution ensures that \CC's extra-optimized cases are isolated within, or removed from, the comparison. 1536 1350 1537 In all experiments that use a corpus, its text is generated and loaded into the system under test before the timed phase begins. 1351 1538 1352 To discuss: vocabulary for reused case variables1353 1354 To discuss: common approach to iteration and quoted rates1355 1356 To discuss: hardware and such1357 1358 1539 To ensure comparable results, a common memory allocator is used for \CFA and \CC. 1359 The llheap allocator~\cite{Zulfiqar22} is embedded into \CFA and is used standalone with \CC. 1540 \CFA runs the llheap allocator~\cite{Zulfiqar22}; the test rig plugs this same allocator into \CC. 1541 1542 The operations being measured take dozens of nanoseconds, so a succession of many invocations is run and timed as a group. 1543 The experiments run with fixed duration (targeting approximately 5 seconds), stopping upon passing a goal time, as determined by re-checking @clock()@ every 10,000 invocations, which is never more often than once per 80 ms. 1544 Timing outcomes reprt mean nanoseconds per invocation, which includes harness overhead and the targeted string API execution. 1545 1546 \PAB{To discuss: hardware and such} 1547 1548 As discussed in \VRef[Section]{string-raii-limit}, general performance comparisons are made using \CFA's faster, low-level string API, whose string type is named @string_res@. 1549 1550 \VRef{s:ControllingImplicitSharing} presents an operational mode where \CFA string sharing is turned off. In this mode, the \CFA string operates similarly to \CC's, by using a distinct heap allocation for each string's text. 1551 Some experiments include measurements in this mode for baselining purposes. 1552 It is called ``\CC emulation mode'' or ``nosharing'' here. 1553 1360 1554 1361 1555 1362 1556 \subsection{Test: Append} 1363 1557 1364 This test measures the speed of appending randomly-size text onto a growing string. 1558 These tests measure the speed of appending strings from the corpus onto a larger, growing string. They show \CFA performing comparably to \CC overall, though with reduced penalties for simple API misuses for which \CC programmers may not know to watch out. 1559 1560 The basic harness is: 1561 \begin{cquote} 1562 \setlength{\tabcolsep}{20pt} 1563 \begin{cfa} 1564 START_TIMER 1565 for ( ... ) { 1566 string_res accum; 1567 for ( i; 100 ) { 1568 accum += corpus[ f(i) ]; // importing from char * here 1569 COUNT_ONE_OP_DONE 1570 } 1571 } 1572 STOP_TIMER 1573 \end{cfa} 1574 \end{cquote} 1575 The harness's outer loop executes until a sample-worthy amount of execution has happened. 
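The timing macros in the harness realize the fixed-duration control described under Methodology; the following is an illustrative expansion of that control, not the actual test rig:
\begin{c++}
#include <ctime>
#include <cstdio>
int main() {
	const double GOAL = 5.0;                        // target roughly 5 s of timed work
	clock_t start = clock();                        // START_TIMER
	long long ops = 0;
	bool done = false;
	while ( ! done ) {                              // outer loop: until enough execution has happened
		// string work elided: reset the accumulator here
		for ( int i = 0; i < 100 && ! done; i += 1 ) {
			// one timed append onto the accumulator goes here
			ops += 1;                               // COUNT_ONE_OP_DONE
			if ( ops % 10000 == 0                   // re-check the clock once per 10,000 operations
					&& clock() - start >= GOAL * CLOCKS_PER_SEC ) done = true;
		}
	}
	double secs = ( clock() - start ) / (double)CLOCKS_PER_SEC;  // STOP_TIMER
	printf( "%g ns per operation\n", 1e9 * secs / ops );         // mean includes harness overhead
}
\end{c++}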
1576 The inner loop builds up the desired-length string with successive appends, before the outer makes it start over from a blank accumulator. 1577 Each harness run targets a specific (mean) corpus string length and produces one data point on the result graph. 1578 1579 Three specific comparisons are made with this harness. 1580 Each picks its own independent-variable basis of comparison. 1581 1582 All three comparisons use the varying-from-1 corpus construction, \ie they allow the STL to show its advantage from small-string optimization. 1583 1584 1585 \subsubsection{Fresh vs Reuse in \CC, Emulation Baseline} 1586 1587 The first experiment compares \CFA with \CC, with \CFA operating in nosharing mode (and \CC having no other mode). 1588 This experiment simply baselines how \CFA modestly lags \CC's optimization/tuning level generally, yet reproduces a coarser phenomenon. 1589 1590 This experiment also introduces the first \CC coding pitfall, which the next experiment will show is helped by turning on \CFA sharing. By this pitfall, a \CC programmer must pay attention to string variable reuse. 1591 1365 1592 \begin{cquote} 1366 1593 \setlength{\tabcolsep}{20pt} … … 1370 1597 1371 1598 for ( ... ) { 1372 @string x;@ // fresh1599 @string_res accum;@ // fresh 1373 1600 for ( ... ) 1374 x@+=@ ...1375 } 1376 \end{cfa} 1377 & 1378 \begin{cfa} 1379 string x;1601 accum @+=@ ... 1602 } 1603 \end{cfa} 1604 & 1605 \begin{cfa} 1606 string_res accum; 1380 1607 for ( ... ) { 1381 @ x = "";@ $\C[1in]{// reuse}$1608 @accum = "";@ $\C[1in]{// reuse\CRT}$ 1382 1609 for ( ... ) 1383 x @+=@ ... $\C{// append, alternative x = x + ...}\CRT$ 1384 } 1385 \end{cfa} 1386 \end{tabular} 1387 \end{cquote} 1388 The benchmark's outer loop executes ``until a sample-worthy amount of execution has happened'' and an inner loop for ``building up the desired-length string.'' 1389 Its subcases include, 1390 \begin{enumerate}[leftmargin=*] 1391 \item 1392 \CFA nosharing/sharing \vs \CC nosharing. 1393 \item 1394 Difference between the logically equivalent operations @x += ...@ \vs @x = x + ...@. 1395 For numeric types, the generated code is equivalence, giving identical performance. 1396 However, for string types there can be a significant difference in performance, especially if this code appears in a loop iterating a large number of times. 1397 This difference might not be intuitive to beginners. 1398 \item 1399 Coding practice where the user's logical allocation is fresh \vs reused. 1400 Here, \emph{reusing a logical allocation}, means that the program variable, into which the user is concatenating, previously held a long string. 1401 In general, a user should not have to care about this difference, yet the STL performs differently in these cases. 1402 Furthermore, if a function takes a string by reference, if cannot use the fresh approach. 1403 Concretely, both cases incur the cost of copying characters into the target string, but only the allocation-fresh case incurs a further reallocation cost, which is generally paid at points of doubling the length. 1404 For the STL, this cost includes obtaining a fresh buffer from the memory allocator and copying older characters into the new buffer, while \CFA-sharing hides such a cost entirely. 1405 %The fresh \vs reuse distinction is only relevant in the \emph{append} tests. 1406 \end{enumerate} 1610 accum @+=@ ... 
1611 } 1612 \end{cfa} 1613 \end{tabular} 1614 \end{cquote} 1615 1616 Both programs are doing the same thing: start with @x@ empty and build it up by appending the same chunks. 1617 A programmer should not have to consider this difference. 1618 But from under the covers, each string being an individual allocation leaks through. 1619 While the inner loop is appending text to an @x@ that had not yet grown to have a large capacity, the program is, naturally, paying to extend the variable-length allocation, occasionally. 1620 This capacity stretching is a sticky property that survives assigning a (short, empty-string) value into an existing initialization. 1621 So, the ``reuse'' version benefits from not growing the allocation on subsequent runs of the inner loop. 1622 Yet, the ``fresh'' version is constantly restarting from a small buffer. 1407 1623 1408 1624 \begin{figure} 1409 1625 \centering 1410 \includegraphics{ string-graph-peq-cppemu.pdf}1626 \includegraphics{plot-string-peq-cppemu.pdf} 1411 1627 % \includegraphics[width=\textwidth]{string-graph-peq-cppemu.png} 1412 \caption{ Average time per iteration (lower is better) with one \lstinline{x += y} invocation, comparing \CFA with STL implementations (given \CFA running in STL emulation mode), and comparing the ``fresh'' with ``reused'' reset styles, at various string sizes.}1628 \caption{Fresh vs Reuse in \CC, Emulation Baseline. Average time per iteration with one \lstinline{x += y} invocation (lower is better). Comparing \CFA's STL emulation mode with STL implementations, and comparing the ``fresh'' with ``reused'' reset styles.} 1413 1629 \label{fig:string-graph-peq-cppemu} 1414 1630 \end{figure} 1415 1631 1416 This tests use the varying-from-1 corpus construction, \ie it assumes the STL's advantage of small-string optimization. 1417 \PAB{To discuss: any other case variables introduced in the performance intro} 1418 \VRef[Figure]{fig:string-graph-peq-cppemu} shows this behaviour, by the STL and by \CFA in STL emulation mode. 1419 \CFA reproduces STL's performance, up to a 15\% penalty averaged over the cases shown, diminishing with larger strings, and 50\% in the worst case. 1420 This penalty characterizes the amount of implementation fine tuning done with STL and not done with \CFA in present state. 1421 There is a larger penalty for redeclaring the string each loop iteration (fresh) versus hosting it out of the loop and reseting it to the null string (reuse). 1422 The cost is 40\% averaged over the cases shown and minimally 24\%, and shows up consistently between the \CFA and STL implementations, and increases with larger strings. 1632 \VRef[Figure]{fig:string-graph-peq-cppemu} shows the resulting performance. 1633 The fresh \vs reuse penalty is the dominant difference. 1634 The cost is 40\% averaged over the cases shown and minimally 24\%. 1635 It shows up consistently on both the \CFA and STL implementations, and this cost is more prominent with larger strings. 1636 1637 The lesser \CFA \vs STL difference shows \CFA reproducing STL's performance, up to a 15\% penalty averaged over the cases shown, diminishing with larger strings, and 50\% in the worst case. 1638 This penalty characterizes implementation fine tuning done with STL and not done yet done with \CFA. 1639 1640 1641 \subsubsection{\CFA's Fresh-Reuse Compromise} 1642 1643 This comparison has the same setup as the last one, except that the \CFA implementation is switched to use its sharing mode. 
The outcome is that the fresh/reuse difference vanishes in \CFA, with \CFA consistently delivering performance that compromises between the two \CC cases. 1423 1644 1424 1645 \begin{figure} … … 1426 1647 \includegraphics{string-graph-peq-sharing.pdf} 1427 1648 % \includegraphics[width=\textwidth]{string-graph-peq-sharing.png} 1428 \caption{ Average time per iteration (lower is better) with one \lstinline{x += y} invocation, comparing \CFA (having implicit sharing activated) with STL, and comparing the ``fresh'' with ``reused'' reset styles, at various string sizes.}1649 \caption{\CFA Compromise for Fresh \vs Reuse. Average time per iteration with one \lstinline{x += y} invocation (lower is better). Comparing \CFA's sharing mode with STL, and comparing the ``fresh'' with ``reused'' reset styles. The \CC results are repeated from \ref{fig:string-graph-peq-cppemu}.} 1429 1650 \label{fig:string-graph-peq-sharing} 1430 1651 \end{figure} 1431 1652 1432 In sharing mode, \CFA makes the fresh/reuse difference disappear, as shown in \VRef[Figure]{fig:string-graph-peq-sharing}. 1433 At append lengths 5 and above, \CFA not only splits the two baseline STL cases, but its slowdown of 16\% over (STL with user-managed reuse) is close to the \CFA-v-STL implementation difference seen with \CFA in STL-emulation mode. 1653 \VRef[Figure]{fig:string-graph-peq-sharing} has the result. 1654 At append lengths 5 and above, \CFA not only splits the two STL cases, but its slowdown of 16\% over STL with user-managed reuse is close to the baseline \CFA-v-STL implementation difference seen with \CFA in STL-emulation mode. 1655 1656 1657 \subsubsection{\CFA's low overhead for misusing \lstinline{+}} 1658 1659 A further pitfall occurs when the user writes @x = x + y@, rather than @x += y@. Again, they are logically equivalent. 1660 For numeric types, the generated code is equivalent, giving identical performance. 1661 However, for string types there can be a significant difference. 1662 This pitfall is a particularly likely hazard for beginners. 1663 1664 In earlier experiments, the choice of \CFA API among HL and LL had no impact on the functionality being tested. 1665 Here, however, the @+@ operation, which returns its result by value, is only available in HL. 1666 The \CFA @+@ number was obtained by inlining the HL implementation of @+@, which is done using LL's @+=@, into the test harness, while omitting the HL-inherent extra dynamic allocation. The HL-upon-LL @+@ implementation, is: 1667 \begin{cfa} 1668 struct string { 1669 string_res * inner; // RAII manages malloc/free, simple ownership 1670 }; 1671 void ?+=?( string & s, string s2 ) { 1672 (*s.inner) += (*s2.inner); 1673 } 1674 string @?+?@( string lhs, string rhs ) { 1675 string ret = lhs; 1676 ret @+=@ rhs; 1677 return ret; 1678 } 1679 \end{cfa} 1680 This @+@ implementation is also the goal implementation of @+@ once the HL/LL workaround is no longer needed. Inlining the induced LL steps into the test harness gives: 1681 \begin{cquote} 1682 \setlength{\tabcolsep}{20pt} 1683 \begin{tabular}{@{}ll@{}} 1684 \multicolumn{1}{c}{\textbf{Goal}} & \multicolumn{1}{c}{\textbf{Measured}} \\ 1685 \begin{cfa} 1686 1687 for ( ... ) { 1688 string accum; 1689 for ( ... ) { 1690 accum = @accum + ...@ 1691 1692 1693 } 1694 } 1695 \end{cfa} 1696 & 1697 \begin{cfa} 1698 string_res pta_ll_temp; 1699 for ( ... ) { 1700 string_res accum; 1701 for ( ... 
) {
 	1702 			pta_ll_temp = @accum@;
 	1703 			pta_ll_temp @+= ...@;
 	1704 			accum = pta_ll_temp;
 	1705 		}
 	1706 	}
 	1707 \end{cfa}
 	1708 \end{tabular}
 	1709 \end{cquote}
 	1710 Note that this ``Goal'' code functions today in HL.
1434 1711
1435 1712 \begin{figure}
… …
1437 1714 \includegraphics{string-graph-pta-sharing.pdf}
1438 1715 % \includegraphics[width=\textwidth]{string-graph-pta-sharing.png}
1439 	\caption{Average time per iteration (lower is better) with one \lstinline{x = x + y} invocation, comparing \CFA (having implicit sharing activated) with STL.
1440 	For context, the results from \VRef[Figure]{fig:string-graph-peq-sharing} are repeated as the bottom bands.
1441 	While not a design goal, and not graphed out, \CFA in STL-emulation mode outperformed STL in this case; user-managed allocation reuse did not affect any of the implementations in this case.}
 	1716 \caption{\CFA's low overhead for misusing \lstinline{+}. Average time per iteration with one \lstinline{x = x + y} invocation (lower is better). Comparing \CFA (having implicit sharing activated) with STL, and comparing the \lstinline{+}-then-\lstinline{=} with the \lstinline{+=} append styles. The \lstinline{+=} results are repeated from \VRef[Figure]{fig:string-graph-peq-sharing}.}
1442 1717 \label{fig:string-graph-pta-sharing}
1443 1718 \end{figure}
1444 1719
1445 	When the user takes a further step beyond the STL's optimal zone, by running @x = x + y@, as in \VRef[Figure]{fig:string-graph-pta-sharing}, the STL's penalty is above $15 \times$ while \CFA's (with sharing) is under $2 \times$, averaged across the cases shown here.
 	1720 \VRef[Figure]{fig:string-graph-pta-sharing} gives the outcome. The STL's penalty is $8 \times$ while \CFA's is only $2 \times$, averaged across the cases shown here.
1446 1721 Moreover, the STL's gap increases with string size, while \CFA's converges.
1447
1448
1449 	\subsubsection{Test: Pass argument}
1450
1451 	STL has a penalty for passing a string by value, which indirectly forces users to think about memory management when communicating values to a function.
 	1722 So again, \CFA helps users who just want to treat strings as values, without thinking about the resource management under the covers.
 	1723
 	1724 While not a design goal, and not graphed out, \CFA in STL-emulation mode happened to outperform STL in this case. User-managed allocation reuse did not affect either implementation in this case; only ``fresh'' results are shown.
 	1725
 	1726
 	1727 \subsection{Test: Pass argument}
 	1728
 	1729 STL has a penalty for passing a string by value, which forces users to think about memory management when communicating values with a function.
 	1730 The key \CFA value-add is that a user can think of a string simply as a value; this test shows that \CC charges a stiff penalty for thinking this way, while \CFA does not.
 	1731 This test illustrates a main advantage of the \CFA sharing algorithm (in one case).
 	1732 It shows STL's small-string optimization providing a successful mitigation (in the other case).
 	1733
 	1734 The basic operation considered is:
1452 1735 \begin{cfa}
1453 1736 void foo( string s );
… …
1456 1739 \end{cfa}
1457 1740 With implicit sharing active, \CFA treats this operation as normal and supported.
1458 1741
1459 	This test illustrates a main advantage of the \CFA sharing algorithm.
1460 	It also has a case in which STL's small-string optimization provides a successful mitigation.
 	1742
 	1743 Again, an HL-LL difference requires an LL mockup.
This time, the fact to integrate into the test harness is that LL does not directly support pass-by-value. 1743 \begin{cquote} 1744 \setlength{\tabcolsep}{20pt} 1745 \begin{tabular}{@{}ll@{}} 1746 \multicolumn{1}{c}{\textbf{Goal}} & \multicolumn{1}{c}{\textbf{Measured}} \\ 1747 \begin{cfa} 1748 void helper( string q ) { 1749 1750 } 1751 START_TIMER 1752 for ( i; ... ) { 1753 helper( corpus[ f(i) ] ); // imported from char * previously 1754 COUNT_ONE_OP_DONE 1755 } 1756 STOP_TIMER 1757 \end{cfa} 1758 & 1759 \begin{cfa} 1760 void helper( string_res & qref ) { 1761 string_res q = { qref, COPY_VALUE }; 1762 } 1763 // rest same, elided 1764 1765 1766 1767 1768 1769 \end{cfa} 1770 \end{tabular} 1771 \end{cquote} 1772 The Goal (HL) version gives the simplest sketch of the test harness. 1773 It uses a single level of looping. 1774 Each iteration uses a corpus item as the argument to a function call. 1775 These corpus items were imported to the string heap before beginning the timed run. 1776 1460 1777 1461 1778 \begin{figure} … … 1470 1787 \end{figure} 1471 1788 1789 1472 1790 \VRef[Figure]{fig:string-graph-pbv} shows the costs for calling a function that receives a string argument by value. 1473 1791 STL's performance worsens as string length increases, while \CFA has the same performance at all sizes. 1474 1792 1475 The \CFA cost to pass a string isnontrivial.1793 While improved, the \CFA cost to pass a string is still nontrivial. 1476 1794 The contributor is adding and removing the callee's string handle from the global list. 1477 This cost is $1.5 \times$ to $2 \times$ over STL's when small-string optimization applies, though this cost should be avoidable in the same case, given a \CFA realization of this optimization.1795 This cost is $1.5 \times$ to $2 \times$ over STL's when small-string optimization applies, though this cost should be avoidable in the same case, upon a \CFA realization of this optimization. 1478 1796 At the larger sizes, when STL has to manage storage for the string, STL runs more than $3 \times$ slower, mainly due to time spent in the general-purpose memory allocator. 1479 1480 1481 \subsubsection{Test: Allocate} 1482 1483 This test directly compares the allocation schemes of the \CFA string with sharing, compared with the STL string. 1484 It treats the \CFA scheme as a form of garbage collection, and the STL scheme as an application of malloc-free. 1485 The test shows that \CFA enables faster speed at a cost in memory usage. 1486 1487 A garbage collector, afforded the freedom of managed memory, often runs faster than malloc-free (in an amortized analysis, even though it must occasionally stop to collect) because it is able to use its collection time to move objects. 1488 (In the case of the mini-allocator powering the \CFA string library, objects are runs of text.) Moving objects lets fresh allocations consume from a large contiguous store of available memory; the ``bump pointer'' book-keeping for such a scheme is very light. 1489 A malloc-free implementation without the freedom to move objects must, in the general case, allocate in the spaces between existing objects; doing so entails the heavier book-keeping to navigate and maintain a linked structure. 1797 \PAB{Need to check that. Expecting copying to dominate.} 1798 1799 1800 \subsection{Test: Allocate} 1801 1802 This test directly compares the allocation schemes of the \CFA string (sharing active), with the STL string. 
1803 It treats the \CFA scheme as a form of garbage collection (GC), and the STL scheme as an application of malloc-free.
 	1804 The test shows that \CFA achieves greater speed in exchange for greater memory usage.
 	1805
 	1806 A garbage collector, afforded the freedom of managed memory (where it knows about all the pointers and is allowed to modify them), often runs faster than malloc-free in an amortized analysis, even though it must occasionally stop to collect.
 	1807 The speedup happens because GC is able to use its collection time to move objects.
 	1808 (In the case of the mini-allocator powering the \CFA string library, objects are runs of text.) Moving objects lets fresh allocations consume from a large contiguous store of available memory; the ``bump pointer'' bookkeeping for such a scheme is very light.
 	1809 A malloc-free implementation without the freedom to move objects must, in the general case, allocate in the spaces between existing objects; doing so entails the heavier bookkeeping of maintaining a linked structure of freed allocations and/or coalescing freed allocations.
1491 1810
1492 1811 A garbage collector keeps allocations around for longer than the using program can reach them.
1493 	By contrast, a program using malloc-free (correctly) releases allocations exactly when they are no longer reachable.
 	1812 By contrast, a program (correctly) using malloc-free releases allocations exactly when they are no longer reachable.
1494 1813 Therefore, the same harness will use more memory while running under garbage collection.
1495 1814 A garbage collector can minimize the memory overhead by searching for these dead allocations aggressively, that is, by collecting more often.
… …
1507 1827 The allocator will expand its text buffer during a collection if the actual fraction live exceeds this target.
1508 1828
1509 	This experiment's driver allocates strings by constructing a string handle as a local variable then looping over recursive calls.
1510 	The time measurement is of nanoseconds per such allocating call.
1511 	The arrangement of recursive calls and their fan-out (iterations per recursion level) makes some of the strings long-lived and some of them short-lived.
1512 	String lifetime (measured in number of subsequent string allocations) is ?? distributed, because each node in the call tree survives as long as its descendent calls.
1513 	The run presented in this section used a call depth of 1000 and a fan-out of 1.006, which means that approximately one call in 167 makes two recursive calls, while the rest make one.
 	1829 \begin{figure}
 	1830 \centering
 	1831 \includegraphics{string-perf-alloc.pdf}
 	1832 \caption{The memory-allocation pattern driven by the allocation test harness: a few long-lived strings and many short-lived strings.}
 	1833 \label{fig:string-perf-alloc}
 	1834 \end{figure}
 	1835
 	1836 \VRef[Figure]{fig:string-perf-alloc} shows the memory-allocation pattern that the test harness drives.
 	1837 Note how this pattern creates few long-lived strings and many short-lived strings.
 	1838 To extend this pattern to the scale of a performance test, the harness is actually recursive, to achieve the nesting (three levels in the figure), and iterative, to achieve the fan-out (factor two / binary tree in the figure).
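Before presenting the harness, the two allocation disciplines contrasted above can be sketched in code: bump-pointer allocation from a contiguous text area, and the fraction-live rule that decides whether the area grows during a collection.
The names (@TextArena@, @bump_alloc@, @after_collection@, @liveBytes@, @liveTarget@) and structure are hypothetical illustrations, not the \CFA string library's actual interface.
\begin{cfa}
struct TextArena {						// hypothetical mini-allocator state
	char * buffer;						// contiguous store for string text
	size_t capacity;					// total bytes in the store
	size_t next;						// bump pointer: next free byte
};
char * bump_alloc( TextArena & a, size_t n ) {
	if ( a.next + n > a.capacity ) return 0p;	// exhausted: a collection must run before retrying
	char * p = a.buffer + a.next;		// allocating is only pointer arithmetic
	a.next += n;
	return p;
}
void after_collection( TextArena & a, size_t liveBytes, double liveTarget ) {
	// live text has been compacted to the front of the buffer, with string handles re-pointed
	a.next = liveBytes;
	if ( (double)liveBytes / a.capacity > liveTarget )	// target fraction live, e.g. 0.30 by default
		a.capacity *= 2;				// expand so the live fraction falls back under the target (reallocation elided)
}
\end{cfa}
The recursive, iterative harness that drives this pattern is: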
1839 \begin{cfa} 1840 void f( int depth ) { 1841 if (depth == 0) return; 1842 string_res x = corpus[...]; // importing from char * here 1843 COUNT_ONE_OP_DONE 1844 for ( fanOut ) { 1845 // elided: terminate fixed-duration experiment 1846 f( depth - 1 ); 1847 } 1848 } 1849 START_TIMER 1850 f( launchDepth ); 1851 STOP_TIMER 1852 \end{cfa} 1853 The runs presented use a call depth of 1000 and a fan-out of 1.006, which means that approximately one call in 167 makes two recursive calls, while the rest make one. 1854 This sizing keeps an average of 836 strings live. 1514 1855 This sizing was chosen to keep the amounts of consumed memory within the machine's last-level cache. 1856 1857 The time measurement is of nanoseconds per such @f@-call, be it internal or leaf. 1858 1859 % String lifetime (measured in number of subsequent string allocations) is ?? distributed, because each node in the call tree survives as long as its descendent calls. 1860 1515 1861 1516 1862 \begin{figure} … … 1518 1864 \includegraphics{string-graph-allocn.pdf} 1519 1865 % \includegraphics[width=\textwidth]{string-graph-allocn.png} 1520 \caption{Space and time performance, under varying fraction-live targets, for the five string lengths shown, at (\emph{Fixed-size} corpus construction.1866 \caption{Space and time performance, under varying fraction-live targets, for the five string lengths shown, at \emph{Fixed-size} corpus construction. 1521 1867 [MISSING] The identified clusters are for the default fraction-live target, which is 30\%. 1522 1868 MISSING: STL results, typically just below the 0.5--0.9 \CFA segment. 1523 All runs keep an average of 836 strings live, and the median string lifetime is ?? allocations.}1869 } 1524 1870 \label{fig:string-graph-allocn} 1525 1871 \end{figure} 1526 1872 1527 1873 \VRef[Figure]{fig:string-graph-allocn} shows the results of this experiment. 1528 At all string sizes, varying the liveness threshold gives offersspeed-for-space tradeoffs relative to STL.1874 At all string sizes, varying the liveness threshold gives speed-for-space tradeoffs relative to STL. 1529 1875 At the default liveness threshold, all measured string sizes see a ??\%--??\% speedup for a ??\%--??\% increase in memory footprint. 1530 1876 1531 1877 1532 \subsubsection{Test: Normalize}1533 1534 This test is more applied than the earlier ones.1535 It combines the effects of several operations.1536 It also demonstrates a case of the \CFA API allowing user code to perform well, while being written without overt memory management, while achieving similar performance in STL requires adding memory-management complexity.1537 1538 To motivate: edits being rare1539 1540 The program is doing a specialized find-replace operation on a large body of text.1541 In the program under test, the replacement is just to erase a magic character.1542 But in the larger software problem represented, the rewrite logic belongs to a module that was originally intended to operate on simple, modest-length strings.1543 The challenge is to apply this packaged function across chunks taken from the large body.1544 Using the \CFA string library, the most natural way to write the helper module's function also works well in the adapted context.1545 Using the STL string, the most natural ways to write the helper module's function, given its requirements in isolation, slow down when it is driven in the adapted context.1878 % \subsection{Test: Normalize} 1879 1880 % This test is more applied than the earlier ones. 
1881 % It combines the effects of several operations. 1882 % It also demonstrates a case of the \CFA API allowing user code to perform well, while being written without overt memory management, while achieving similar performance in STL requires adding memory-management complexity. 1883 1884 % To motivate: edits being rare 1885 1886 % The program is doing a specialized find-replace operation on a large body of text. 1887 % In the program under test, the replacement is just to erase a magic character. 1888 % But in the larger software problem represented, the rewrite logic belongs to a module that was originally intended to operate on simple, modest-length strings. 1889 % The challenge is to apply this packaged function across chunks taken from the large body. 1890 % Using the \CFA string library, the most natural way to write the helper module's function also works well in the adapted context. 1891 % Using the STL string, the most natural ways to write the helper module's function, given its requirements in isolation, slow down when it is driven in the adapted context. 1546 1892 1547 1893 \begin{lstlisting}