Changeset c9136d9 for doc/theses/mubeen_zulfiqar_MMath
- Timestamp: Apr 26, 2022, 8:33:15 AM
- Branches: ADT, ast-experimental, master, pthread-emulation, qualifiedEnum
- Children: 1b2adec, 57af3f3
- Parents: cd1a5e8
- File: 1 edited
doc/theses/mubeen_zulfiqar_MMath/performance.tex
\label{c:Performance}

This chapter uses the micro-benchmarks from \VRef[Chapter]{s:Benchmarks} to test a number of current memory allocators, including llheap.
The goal is to see if llheap is competitive with the current best memory allocators.


\section{Machine Specification}

The performance experiments were run on two different multi-core architectures (x86 and ARM) to determine if there is consistency across platforms:
\begin{itemize}
\item
\textbf{Nasus} AMD EPYC 7662, 64-core socket $\times$ 2, 2.0 GHz, GCC version 9.3.0
\item
\textbf{Algol} Huawei ARM TaiShan 2280 V2 Kunpeng 920, 24-core socket $\times$ 4, 2.6 GHz, GCC version 9.4.0
\end{itemize}


\section{Existing Memory Allocators}
\label{sec:curAllocatorSec}

With dynamic allocation being an important feature of C, there are many stand-alone memory allocators that have been designed for different purposes.
For this thesis, 7 of the most popular and widely used memory allocators were selected for comparison.

\subsection{glibc}
glibc~\cite{glibc} is the default gcc thread-safe allocator.
\\
\textbf{Version:} Ubuntu GLIBC 2.31-0ubuntu9.7 2.31\\
\textbf{Configuration:} Compiled by Ubuntu 20.04.\\
\textbf{Compilation command:} N/A

\subsection{dlmalloc}
dlmalloc~\cite{dlmalloc} is a thread-safe allocator that is single-threaded and uses a single heap.
It maintains free-lists of different sizes to store freed dynamic memory.
\\
\textbf{Version:} 2.8.6\\
\textbf{Configuration:} Compiled with preprocessor @USE_LOCKS@.\\
\textbf{Compilation command:} @gcc -g3 -O3 -Wall -Wextra -fno-builtin-malloc -fno-builtin-calloc@ @-fno-builtin-realloc -fno-builtin-free -fPIC -shared -DUSE_LOCKS -o libdlmalloc.so malloc-2.8.6.c@

\subsection{hoard}
Hoard~\cite{hoard} is a thread-safe, multi-threaded allocator built on a heap-layer framework.
It has per-thread heaps with thread-local free-lists, and a global shared heap.
\\
\textbf{Version:} 3.13\\
\textbf{Configuration:} Compiled with hoard's default configurations and @Makefile@.\\
\textbf{Compilation command:} @make all@

\subsection{jemalloc}
jemalloc~\cite{jemalloc} is a thread-safe allocator that uses multiple arenas, and each thread is assigned an arena.
Each chunk within an arena contains contiguous memory regions of the same size, and an arena has multiple chunks covering regions of multiple sizes.
\\
\textbf{Version:} 5.2.1\\
\textbf{Configuration:} Compiled with jemalloc's default configurations and @Makefile@.\\
\textbf{Compilation command:} @autogen.sh; configure; make; make install@

\subsection{pt3malloc}
pt3malloc~\cite{pt3malloc} is a modification of dlmalloc.
It is a thread-safe, multi-threaded memory allocator that uses multiple heaps.
pt3malloc's heap has a similar design to dlmalloc's heap.
\\
\textbf{Version:} 1.8\\
\textbf{Configuration:} Compiled with pt3malloc's @Makefile@ using option ``linux-shared''.\\
\textbf{Compilation command:} @make linux-shared@

\subsection{rpmalloc}
rpmalloc~\cite{rpmalloc} is a thread-safe, multi-threaded allocator that uses a per-thread heap.
Each heap has multiple size-classes, and each size-class contains memory regions of the relevant size.
\\
\textbf{Version:} 1.4.1\\
\textbf{Configuration:} Compiled with rpmalloc's default configurations and ninja build system.\\
\textbf{Compilation command:} @python3 configure.py; ninja@

\subsection{tbb malloc}
tbb malloc~\cite{tbbmallocmail} is a thread-safe, multi-threaded allocator that uses a private heap for each thread.
Each private heap has multiple bins of different sizes, and each bin contains free regions of the same size.
\\
\textbf{Version:} intel tbb 2020 update 2, tbb\_interface\_version == 11102\\
\textbf{Configuration:} Compiled with tbbmalloc's default configurations and @Makefile@.\\
\textbf{Compilation command:} @make@

% \section{Experiment Environment}
% We used our micro benchmark suite (FIX ME: cite mbench) to evaluate these memory allocators \ref{sec:curAllocatorSec} and our own memory allocator uHeap \ref{sec:allocatorSec}.

\section{Experiments}
% FIX ME: add experiment, knobs, graphs, description+analysis

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
…
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\subsection{Churn Micro-Benchmark}

Churn tests allocators for speed under intensive dynamic memory usage (see \VRef{s:ChurnBenchmark}).
This experiment was run with the following configuration:
\begin{description}[itemsep=0pt,parsep=0pt]
\item[thread:]
1, 2, 4, 8, 16
\item[spots:]
16
\item[obj:]
100,000
\item[max:]
500
\item[min:]
50
\item[step:]
50
\item[distro:]
fisher
\end{description}

% -maxS : 500
% -minS : 50
% -stepS : 50
% -distroS : fisher
% -objN : 100000
% -cSpots : 16
% -threadN : 1, 2, 4, 8, 16

\VRef[Figure]{fig:churn} shows the results for algol and nasus.
The X-axis shows the number of threads.
Each allocator's performance at each thread count is shown in a different color.
The Y-axis shows the total experiment time.

\begin{figure}
…
\end{figure}

All allocators did well in this micro-benchmark, except for dlmalloc on the ARM.


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% THRASH
…

\subsection{Cache Thrash}

Thrash tests memory allocators for active false sharing (see \VRef{sec:benchThrashSec}).
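Active false sharing arises when objects allocated by different threads land on the same cache line, so each thread's writes repeatedly invalidate the line in the other thread's cache even though no data is shared.
The following is a minimal illustrative sketch of that access pattern (hypothetical code, not the benchmark source); the comments relate it to the configuration knobs listed below.
\begin{verbatim}
#include <pthread.h>
#include <stdlib.h>

enum { WRITES = 1000000 };                 // corresponds to the cacheRW knob

static void * worker( void * arg ) {
    (void)arg;
    volatile char * obj = malloc( 1 );     // tiny object, as in the size knob
    for ( int i = 0; i < WRITES; i += 1 ) obj[0] += 1;   // repeatedly write the same byte
    free( (void *)obj );
    return NULL;
}

int main() {
    pthread_t workers[2];
    for ( int i = 0; i < 2; i += 1 ) pthread_create( &workers[i], NULL, worker, NULL );
    for ( int i = 0; i < 2; i += 1 ) pthread_join( workers[i], NULL );
}
\end{verbatim}
An allocator that places the two tiny objects on the same cache line causes the line to ping-pong between cores; an allocator that avoids such placement shows no slowdown in this micro-benchmark.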
This experiment was run with the following configuration:
\begin{description}[itemsep=0pt,parsep=0pt]
\item[thread:]
1, 2, 4, 8, 16
\item[iterations:]
1,000
\item[cacheRW:]
1,000,000
\item[size:]
1
\end{description}

% * Each allocator was tested for its performance across different number of threads.
% Experiment was repeated for each allocator for 1, 2, 4, 8, and 16 threads by setting the configuration -threadN.

Results are shown in figure \ref{fig:cacheThrash} for both algol and nasus.
…

\subsection{Cache Scratch}
\label{s:CacheScratch}

Scratch tests memory allocators for program-induced allocator-preserved passive false-sharing.
This experiment was run with the following configuration:
\begin{description}[itemsep=0pt,parsep=0pt]
\item[threads:]
1, 2, 4, 8, 16
\item[iterations:]
1,000
\item[cacheRW:]
1,000,000
\item[size:]
1
\end{description}

% * Each allocator was tested for its performance across different number of threads.
% Experiment was repeated for each allocator for 1, 2, 4, 8, and 16 threads by setting the configuration -threadN.

Results are shown in figure \ref{fig:cacheScratch} for both algol and nasus.
…

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
…

\subsection{Speed Micro-Benchmark}

Speed tests memory allocators for runtime latency (see \VRef{s:SpeedMicroBenchmark}).
This experiment was run with the following configuration:
\begin{description}[itemsep=0pt,parsep=0pt]
\item[max:]
500
\item[min:]
50
\item[step:]
50
\item[distro:]
fisher
\item[objects:]
1,000,000
\item[workers:]
1, 2, 4, 8, 16
\end{description}

% -maxS : 500
% -minS : 50
% -stepS : 50
% -distroS : fisher
% -objN : 1000000
% -threadN : \{ 1, 2, 4, 8, 16 \} *

%* Each allocator was tested for its performance across different number of threads.
%Experiment was repeated for each allocator for 1, 2, 4, 8, and 16 threads by setting the configuration -threadN.
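To make the latency measurement concrete, the following hypothetical sketch times one allocation/deallocation chain (@malloc@ followed by @free@) over the object count and size range above; the actual benchmark exercises several different call chains and draws sizes from the fisher distribution rather than stepping through them.
\begin{verbatim}
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

enum { OBJECTS = 1000000, MIN = 50, MAX = 500, STEP = 50 };

int main() {
    struct timespec begin, end;
    clock_gettime( CLOCK_MONOTONIC, &begin );
    size_t size = MIN;
    for ( int i = 0; i < OBJECTS; i += 1 ) {
        char * p = malloc( size );              // one link of the malloc/free chain
        p[0] = 1;                               // touch the object so the pair is not optimized away
        free( p );
        size = size >= MAX ? MIN : size + STEP; // step through the size range
    }
    clock_gettime( CLOCK_MONOTONIC, &end );
    double ns = ( end.tv_sec - begin.tv_sec ) * 1e9 + ( end.tv_nsec - begin.tv_nsec );
    printf( "%.1f ns per malloc/free pair\n", ns / OBJECTS );
}
\end{verbatim}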
Results for the speed benchmark are shown in 12 figures, one figure for each chain of the speed benchmark.
…

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\subsection{Memory Micro-Benchmark}

This experiment is run with the following two configurations for each allocator.
The difference between the two configurations is the number of producers and consumers.
Configuration 1 has one producer and one consumer, and configuration 2 has 4 producers where each producer has 4 consumers.

\noindent
Configuration 1:
\begin{description}[itemsep=0pt,parsep=0pt]
\item[producer (K):]
1
\item[consumer (M):]
1
\item[round:]
100,000
\item[max:]
500
\item[min:]
50
\item[step:]
50
\item[distro:]
fisher
\item[objects (N):]
100,000
\end{description}

% -threadA : 1
% -threadF : 1
% -maxS : 500
% -minS : 50
% -stepS : 50
% -distroS : fisher
% -objN : 100000
% -consumeS: 100000

\noindent
Configuration 2:
\begin{description}[itemsep=0pt,parsep=0pt]
\item[producer (K):]
4
\item[consumer (M):]
4
\item[round:]
100,000
\item[max:]
500
\item[min:]
50
\item[step:]
50
\item[distro:]
fisher
\item[objects (N):]
100,000
\end{description}

% -threadA : 4
% -threadF : 4
% -maxS : 500
% -minS : 50
% -stepS : 50
% -distroS : fisher
% -objN : 100000
% -consumeS: 100000

\begin{table}[b]
\centering
\begin{tabular}{ |c|c|c| }
…

Results for the memory benchmark are shown in 16 figures, two figures for each of the 8 allocators, one for each configuration.
Table \ref{table:mem-benchmark-figs} shows the list of figures that contain the memory benchmark results.

Each figure has 2 graphs, one for each experiment environment.
…
* These statistics are gathered by monitoring the \textit{/proc/self/maps} file of the process in a linux system.

For each subgraph, the x-axis shows the time during the program lifetime at which the data point was generated.
The y-axis shows the memory usage in bytes.

For the experiment, at a certain time in the program's life, the difference between the memory requested by the benchmark (\textit{current\_req\_mem(B)})
and the memory that the process has received from the system (\textit{heap}, \textit{mmap}) should be minimal.
This difference is the memory overhead caused by the allocator and shows the level of fragmentation in the allocator.
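In other words, at any sample time $t$ the plotted overhead is essentially
\[
\textit{overhead}(t) = ( \textit{heap}(t) + \textit{mmap}(t) ) - \textit{current\_req\_mem}(t) .
\]
A minimal, hypothetical sketch of how such samples could be gathered on Linux, by summing the @[heap]@ segment and the writable anonymous mappings listed in \textit{/proc/self/maps}, is shown below; the benchmark's actual monitor may parse the file differently.
\begin{verbatim}
#include <stdio.h>
#include <string.h>

// Sum the sizes of the [heap] segment and writable anonymous mappings.
static void sample_footprint( size_t * heap, size_t * mmapped ) {
    *heap = 0;  *mmapped = 0;
    FILE * maps = fopen( "/proc/self/maps", "r" );
    if ( maps == NULL ) return;
    char line[512];
    while ( fgets( line, sizeof(line), maps ) ) {
        unsigned long start, end;
        char perms[8], path[256] = "";
        // line format: start-end perms offset dev inode [path]
        if ( sscanf( line, "%lx-%lx %7s %*s %*s %*s %255s",
                     &start, &end, perms, path ) < 3 ) continue;
        if ( strcmp( path, "[heap]" ) == 0 ) *heap += end - start;
        else if ( path[0] == '\0' && strchr( perms, 'w' ) ) *mmapped += end - start;
    }
    fclose( maps );
}

int main() {
    size_t heap, mmapped;
    sample_footprint( &heap, &mmapped );
    printf( "heap %zu bytes, mmap %zu bytes\n", heap, mmapped );
}
\end{verbatim}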
…
\subfigure[Algol]{ \includegraphics[width=0.9\textwidth]{evaluations/algol-perf-eps/mem-1-prod-1-cons-100-cfa} }
\subfigure[Nasus]{ \includegraphics[width=0.9\textwidth]{evaluations/nasus-perf-eps/mem-1-prod-1-cons-100-cfa} }
\caption{Memory benchmark results with 1 producer for cfa memory allocator}
\label{fig:mem-1-prod-1-cons-100-cfa}
\end{figure}
…
\subfigure[Algol]{ \includegraphics[width=0.9\textwidth]{evaluations/algol-perf-eps/mem-1-prod-1-cons-100-dl} }
\subfigure[Nasus]{ \includegraphics[width=0.9\textwidth]{evaluations/nasus-perf-eps/mem-1-prod-1-cons-100-dl} }
\caption{Memory benchmark results with 1 producer for dl memory allocator}
\label{fig:mem-1-prod-1-cons-100-dl}
\end{figure}
…
\subfigure[Algol]{ \includegraphics[width=0.9\textwidth]{evaluations/algol-perf-eps/mem-1-prod-1-cons-100-glc} }
\subfigure[Nasus]{ \includegraphics[width=0.9\textwidth]{evaluations/nasus-perf-eps/mem-1-prod-1-cons-100-glc} }
\caption{Memory benchmark results with 1 producer for glibc memory allocator}
\label{fig:mem-1-prod-1-cons-100-glc}
\end{figure}
…
\subfigure[Algol]{ \includegraphics[width=0.9\textwidth]{evaluations/algol-perf-eps/mem-1-prod-1-cons-100-hrd} }
\subfigure[Nasus]{ \includegraphics[width=0.9\textwidth]{evaluations/nasus-perf-eps/mem-1-prod-1-cons-100-hrd} }
\caption{Memory benchmark results with 1 producer for hoard memory allocator}
\label{fig:mem-1-prod-1-cons-100-hrd}
\end{figure}
…
\subfigure[Algol]{ \includegraphics[width=0.9\textwidth]{evaluations/algol-perf-eps/mem-1-prod-1-cons-100-je} }
\subfigure[Nasus]{ \includegraphics[width=0.9\textwidth]{evaluations/nasus-perf-eps/mem-1-prod-1-cons-100-je} }
\caption{Memory benchmark results with 1 producer for je memory allocator}
\label{fig:mem-1-prod-1-cons-100-je}
\end{figure}
…
\subfigure[Algol]{ \includegraphics[width=0.9\textwidth]{evaluations/algol-perf-eps/mem-1-prod-1-cons-100-pt3} }
\subfigure[Nasus]{ \includegraphics[width=0.9\textwidth]{evaluations/nasus-perf-eps/mem-1-prod-1-cons-100-pt3} }
\caption{Memory benchmark results with 1 producer for pt3 memory allocator}
\label{fig:mem-1-prod-1-cons-100-pt3}
\end{figure}
…
\subfigure[Algol]{ \includegraphics[width=0.9\textwidth]{evaluations/algol-perf-eps/mem-1-prod-1-cons-100-rp} }
\subfigure[Nasus]{ \includegraphics[width=0.9\textwidth]{evaluations/nasus-perf-eps/mem-1-prod-1-cons-100-rp} }
\caption{Memory benchmark results with 1 producer for rp memory allocator}
\label{fig:mem-1-prod-1-cons-100-rp}
\end{figure}
…
\subfigure[Algol]{ \includegraphics[width=0.9\textwidth]{evaluations/algol-perf-eps/mem-1-prod-1-cons-100-tbb} }
\subfigure[Nasus]{ \includegraphics[width=0.9\textwidth]{evaluations/nasus-perf-eps/mem-1-prod-1-cons-100-tbb} }
\caption{Memory benchmark results with 1 producer for tbb memory allocator}
\label{fig:mem-1-prod-1-cons-100-tbb}
\end{figure}
…
\subfigure[Algol]{ \includegraphics[width=0.9\textwidth]{evaluations/algol-perf-eps/mem-4-prod-4-cons-100-cfa} }
\subfigure[Nasus]{ \includegraphics[width=0.9\textwidth]{evaluations/nasus-perf-eps/mem-4-prod-4-cons-100-cfa} }
\caption{Memory benchmark results with 4 producers for cfa memory allocator}
\label{fig:mem-4-prod-4-cons-100-cfa}
\end{figure}
…
\subfigure[Algol]{ \includegraphics[width=0.9\textwidth]{evaluations/algol-perf-eps/mem-4-prod-4-cons-100-dl} }
\subfigure[Nasus]{ \includegraphics[width=0.9\textwidth]{evaluations/nasus-perf-eps/mem-4-prod-4-cons-100-dl} }
\caption{Memory benchmark results with 4 producers for dl memory allocator}
\label{fig:mem-4-prod-4-cons-100-dl}
\end{figure}
…
\subfigure[Algol]{ \includegraphics[width=0.9\textwidth]{evaluations/algol-perf-eps/mem-4-prod-4-cons-100-glc} }
\subfigure[Nasus]{ \includegraphics[width=0.9\textwidth]{evaluations/nasus-perf-eps/mem-4-prod-4-cons-100-glc} }
\caption{Memory benchmark results with 4 producers for glibc memory allocator}
\label{fig:mem-4-prod-4-cons-100-glc}
\end{figure}
…
\subfigure[Algol]{ \includegraphics[width=0.9\textwidth]{evaluations/algol-perf-eps/mem-4-prod-4-cons-100-hrd} }
\subfigure[Nasus]{ \includegraphics[width=0.9\textwidth]{evaluations/nasus-perf-eps/mem-4-prod-4-cons-100-hrd} }
\caption{Memory benchmark results with 4 producers for hoard memory allocator}
\label{fig:mem-4-prod-4-cons-100-hrd}
\end{figure}
…
\subfigure[Algol]{ \includegraphics[width=0.9\textwidth]{evaluations/algol-perf-eps/mem-4-prod-4-cons-100-je} }
\subfigure[Nasus]{ \includegraphics[width=0.9\textwidth]{evaluations/nasus-perf-eps/mem-4-prod-4-cons-100-je} }
\caption{Memory benchmark results with 4 producers for je memory allocator}
\label{fig:mem-4-prod-4-cons-100-je}
\end{figure}
…
\subfigure[Algol]{ \includegraphics[width=0.9\textwidth]{evaluations/algol-perf-eps/mem-4-prod-4-cons-100-pt3} }
\subfigure[Nasus]{ \includegraphics[width=0.9\textwidth]{evaluations/nasus-perf-eps/mem-4-prod-4-cons-100-pt3} }
\caption{Memory benchmark results with 4 producers for pt3 memory allocator}
\label{fig:mem-4-prod-4-cons-100-pt3}
\end{figure}
…
\subfigure[Algol]{ \includegraphics[width=0.9\textwidth]{evaluations/algol-perf-eps/mem-4-prod-4-cons-100-rp} }
\subfigure[Nasus]{ \includegraphics[width=0.9\textwidth]{evaluations/nasus-perf-eps/mem-4-prod-4-cons-100-rp} }
\caption{Memory benchmark results with 4 producers for rp memory allocator}
\label{fig:mem-4-prod-4-cons-100-rp}
\end{figure}
…
\subfigure[Algol]{ \includegraphics[width=0.9\textwidth]{evaluations/algol-perf-eps/mem-4-prod-4-cons-100-tbb} }
\subfigure[Nasus]{ \includegraphics[width=0.9\textwidth]{evaluations/nasus-perf-eps/mem-4-prod-4-cons-100-tbb} }
\caption{Memory benchmark results with 4 producers for tbb memory allocator}
\label{fig:mem-4-prod-4-cons-100-tbb}
\end{figure}