Index: doc/theses/mubeen_zulfiqar_MMath/conclusion.tex
===================================================================
--- doc/theses/mubeen_zulfiqar_MMath/conclusion.tex	(revision 7b9391a15fdd03c3d6c22b6c4f47a1e1696316eb)
+++ doc/theses/mubeen_zulfiqar_MMath/conclusion.tex	(revision 45200b6785aedaf2e7031c942b400d40453ebdbe)
@@ -36,5 +36,5 @@
 
 Starting a micro-benchmark test-suite for comparing allocators, rather than relying on a suite of arbitrary programs, has been an interesting challenge.
-The current micro-benchmark allows some understand of allocator implementation properties without actually looking at the implementation.
+The current micro-benchmarks allow some understanding of allocator implementation properties without actually looking at the implementation.
 For example, the memory micro-benchmark quickly identified how several of the allocators work at the global level.
 It was not possible to show how the micro-benchmarks adjustment knobs were used to tune to an interesting test point.
@@ -52,2 +52,3 @@
 
 After llheap is made available on GitHub, interacting with its users to locate problems and improvements will make llheap a more robust memory allocator.
+As well, the \uC and \CFA projects, which have adopted llheap for their memory allocator, will provide additional feedback.
Index: doc/theses/mubeen_zulfiqar_MMath/performance.tex
===================================================================
--- doc/theses/mubeen_zulfiqar_MMath/performance.tex	(revision 7b9391a15fdd03c3d6c22b6c4f47a1e1696316eb)
+++ doc/theses/mubeen_zulfiqar_MMath/performance.tex	(revision 45200b6785aedaf2e7031c942b400d40453ebdbe)
@@ -92,4 +92,5 @@
 Each micro-benchmark is configured and run with each of the allocators.
 The less time an allocator takes to complete a benchmark, the better, so lower in the graphs is better.
+All graphs use log scale on the Y-axis, except for the Memory micro-benchmark (see \VRef{s:MemoryMicroBenchmark}).
 
 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
@@ -139,10 +140,10 @@
 \end{figure}
 
+\paragraph{Assessment}
 All allocators did well in this micro-benchmark, except for \textsf{dl} on the ARM.
-\textsf{dl}'s performace decreases and the difference with the other allocators starts increases as the number of worker threads increase.
-\textsf{je} was the fastest, although there is not much difference between \textsf{je} and rest of the allocators.
-
-llheap is slightly slower because it uses ownership, where many of the allocations have remote frees, which requires locking.
-When llheap is compiled without ownership, its performance is the same as the other allocators (not shown).
+\textsf{dl} is the slowest, indicating a small bottleneck relative to the other allocators.
+\textsf{je} is the fastest, with only a small benefit over the other allocators.
+% llheap is slightly slower because it uses ownership, where many of the allocations have remote frees, which requires locking.
+% When llheap is compiled without ownership, its performance is the same as the other allocators (not shown).
 
 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
@@ -182,10 +183,10 @@
 \end{figure}
 
+\paragraph{Assessment}
 All allocators did well in this micro-benchmark, except for \textsf{dl} and \textsf{pt3}.
-\textsf{dl} uses a single heap for all threads so it is understable that it is generating so much active false-sharing.
-Requests from different threads will be dealt with sequientially by a single heap using locks which can allocate objects to different threads on the same cache line.
-\textsf{pt3} uses multiple heaps but it is not exactly per-thread heap.
-So, it is possible that multiple threads using one heap can get objects allocated on the same cache line which might be causing active false-sharing.
-Rest of the memory allocators generate little or no active false-sharing.
+\textsf{dl} uses a single heap for all threads so it is understandable that it generates so much active false-sharing.
+Requests from different threads are dealt with sequentially by the single heap (using a single lock), which can allocate objects to different threads on the same cache line.
+\textsf{pt3} uses the T:H model, so multiple threads can use one heap, but its active false-sharing is less than \textsf{dl}'s.
+The rest of the memory allocators generate little or no active false-sharing.
 
 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
@@ -224,14 +225,15 @@
 \end{figure}
 
-This micro-benchmark divided the allocators in 2 groups.
-First is the group of best performers \textsf{llh}, \textsf{je}, and \textsf{rp}.
-These memory alloctors generate little or no passive false-sharing and their performance difference is negligible.
-Second is the group of the low performers which includes rest of the memory allocators.
-These memory allocators seem to preserve program-induced passive false-sharing.
-\textsf{hrd}'s performance keeps getting worst as the number of threads increase.
-
-Interestingly, allocators such as \textsf{hrd} and \textsf{glc} were among the best performers in micro-benchmark cache thrash as described in section \ref{sec:cache-thrash-perf}.
-But, these allocators were among the low performers in this micro-benchmark.
-It tells us that these allocators do not actively produce false-sharing but they may preserve program-induced passive false sharing.
+\paragraph{Assessment}
+This micro-benchmark divides the allocators into two groups.
+First is the high-performer group: \textsf{llh}, \textsf{je}, and \textsf{rp}.
+These memory allocators generate little or no passive false-sharing and their performance difference is negligible.
+Second is the low-performer group, which includes the rest of the memory allocators.
+These memory allocators have significant program-induced passive false-sharing, where \textsf{hrd} is the worst-performing allocator.
+All of the allocators in this group share heaps among threads at some level.
+
+Interestingly, allocators such as \textsf{hrd} and \textsf{glc} performed well in the cache-thrash micro-benchmark (see \VRef{sec:cache-thrash-perf}).
+However, these allocators are among the low performers in cache scratch.
+This suggests these allocators do not actively produce false-sharing but do preserve program-induced passive false-sharing.
 
 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
@@ -288,15 +290,14 @@
 \end{itemize}
 
-All allocators did well in this micro-benchmark across all allocation chains, except for \textsf{dl} and \textsf{pt3}.
-\textsf{dl} performed the lowest overall and its performce kept getting worse with increasing number of threads.
-\textsf{dl} uses a single heap with a global lock that can become a bottleneck.
-Multiple threads doing memory allocation in parallel can create contention on \textsf{dl}'s single heap.
-\textsf{pt3} which is a modification of \textsf{dl} for multi-threaded applications does not use per-thread heaps and may also have similar bottlenecks.
-
-There's a sudden increase in program completion time of chains that include \textsf{calloc} and all allocators perform relatively slower in these chains including \textsf{calloc}.
-\textsf{calloc} uses \textsf{memset} to set the allocated memory to zero.
-\textsf{memset} is a slow routine which takes a long time compared to the actual memory allocation.
-So, a major part of the time is taken for \textsf{memset} in performance of chains that include \textsf{calloc}.
-But the relative difference among the different memory allocators running the same chain of memory allocation operations still gives us an idea of theor relative performance.
+\paragraph{Assessment}
+This micro-benchmark divides the allocators into two groups: with and without @calloc@.
+@calloc@ uses @memset@ to set the allocated memory to zero, which dominates the cost of the allocation chain (large performance increase) and levels performance across the allocators.
+But the difference among the allocators in a @calloc@ chain still gives an idea of their relative performance.
+
+All allocators did well in this micro-benchmark across all allocation chains, except for \textsf{dl}, \textsf{pt3}, and \textsf{hrd}.
+Again, the low-performing allocators share heaps among threads, so contention causes the run time to increase with the number of threads.
+Furthermore, chains with @free@ can trigger coalescing, which slows the fast path.
+The high-performing allocators all illustrate low latency across the allocation chains, \ie there are no performance spikes as the chain lengthens that might be caused by contention and/or coalescing.
+Low latency is important for applications that are sensitive to unknown execution delays.
 
 %speed-3-malloc.eps
@@ -414,4 +415,5 @@
 \newpage
 \subsection{Memory Micro-Benchmark}
+\label{s:MemoryMicroBenchmark}
 
 This experiment is run with the following two configurations for each allocator.
@@ -522,18 +524,19 @@
 The Y-axis shows the memory usage in bytes.
 
-For the experiment, at a certain time in the program's life, the difference between the memory requested by the benchmark (\textit{current\_req\_mem(B)}) and the memory that the process has received from system (\textit{heap}, \textit{mmap}) should be minimum.
+For this experiment, the difference between the memory requested by the benchmark (\textit{current\_req\_mem(B)}) and the memory that the process has received from the system (\textit{heap}, \textit{mmap}) should be minimal.
 This difference is the memory overhead caused by the allocator and shows the level of fragmentation in the allocator.
 
+\paragraph{Assessment}
 First, the differences in the shape of the curves between architectures (top ARM, bottom x64) are small; the main difference is the amount of memory used.
 Hence, it is possible to focus on either the top or bottom graph.
-The heap curve is remains zero for 4 memory allocators: \textsf{hrd}, \textsf{je}, \textsf{pt3}, and \textsf{rp}.
-These memory allocators are not using the sbrk area, instead they only use mmap to get memory from the system.
-
-\textsf{hrd}, and \textsf{tbb} have higher memory footprint than the others as they use more total dynamic memory.
-One reason for that can be the usage of superblocks as both of these memory allocators create superblocks where each block contains objects of the same size.
+
+Second, the heap curve is zero for four memory allocators: \textsf{hrd}, \textsf{je}, \textsf{pt3}, and \textsf{rp}, indicating these memory allocators only use @mmap@ to get memory from the system and ignore the @sbrk@ area.
+
+The total dynamic memory is higher for \textsf{hrd} and \textsf{tbb} than the other allocators.
+The main reason is the use of superblocks (see \VRef{s:ObjectContainers}) containing objects of the same size.
 These superblocks are maintained throughout the life of the program.
 
-\textsf{pt3} is the only memory allocator for which the total dynamic memory goes down in the second half of the program lifetime when the memory is freed by the benchmark program.
-It makes pt3 the only memory allocator that gives memory back to operating system as it is freed by the program.
+\textsf{pt3} is the only memory allocator where the total dynamic memory goes down in the second half of the program lifetime, when memory is freed by the benchmark program.
+This makes \textsf{pt3} the only memory allocator that gives memory back to the operating system as it is freed by the program.
 
 % FOR 1 THREAD
