Changeset 2686bc7


Timestamp:
Apr 19, 2022, 3:54:18 PM (2 years ago)
Author:
JiadaL <j82liang@…>
Branches:
ADT, ast-experimental, master, pthread-emulation, qualifiedEnum
Children:
9e7236f4, f6e6a55
Parents:
374cb117 (diff), 5b84a321 (diff)
Note: this is a merge changeset, the changes displayed below correspond to the merge itself.
Use the (diff) links above to see all the changes relative to each parent.
Message:

Merge branch 'master' of plg.uwaterloo.ca:software/cfa/cfa-cc

Location:
doc/theses/mubeen_zulfiqar_MMath
Files:
203 added
4 edited

  • doc/theses/mubeen_zulfiqar_MMath/allocator.tex

    r374cb117 r2686bc7  
    175175More operating system support is required to make this model viable, but there is still the serially-reusable problem with user-level threading.
    176176Leaving the 1:1 model with no atomic actions along the fastpath and no special operating-system support required.
    177 The 1:1 model still has the serially-reusable problem with user-level threading, which is addressed in \VRef{}, and the greatest potential for heap blowup for certain allocation patterns.
     177The 1:1 model still has the serially-reusable problem with user-level threading, which is addressed in \VRef{s:UserlevelThreadingSupport}, and the greatest potential for heap blowup for certain allocation patterns.
    178178
    179179
     
    216216To obtain $O(1)$ external latency means obtaining one large storage area from the operating system and subdividing it across all program allocations, which requires a good guess at the program storage high-watermark and potential large external fragmentation.
    217217Excluding real-time operating-systems, operating-system operations are unbounded, and hence some external latency is unavoidable.
    218 The mitigating factor is that operating-system calls can often be reduced if a programmer has a sense of the storage high-watermark and the allocator is capable of using this information (see @malloc_expansion@ \VRef{}).
     218The mitigating factor is that operating-system calls can often be reduced if a programmer has a sense of the storage high-watermark and the allocator is capable of using this information (see @malloc_expansion@ \VPageref{p:malloc_expansion}).
    219219Furthermore, while operating-system calls are unbounded, many are now reasonably fast, so their latency is tolerable and infrequent.
    220220
     
    504504
    505505
    506 \section{Statistics and Debugging Modes}
     506\section{Statistics and Debugging}
    507507
    508508llheap can be built to accumulate fast and largely contention-free allocation statistics to help understand allocation behaviour.
     
    547547There is an unfortunate problem in detecting unfreed storage because some library routines assume their allocations have life-time duration, and hence, do not free their storage.
    548548For example, @printf@ allocates a 1024 buffer on first call and never deletes this buffer.
    549 To prevent a false positive for unfreed storage, it is possible to specify an amount of storage that is never freed (see \VRef{}), and it is subtracted from the total allocate/free difference.
     549To prevent a false positive for unfreed storage, it is possible to specify an amount of storage that is never freed (see @malloc_unfreed@ \VPageref{p:malloc_unfreed}), and it is subtracted from the total allocate/free difference.
    550550Determining the amount of never-freed storage is annoying, but once done, any warnings of unfreed storage are application related.
    551551
     
    554554
    555555\section{User-level Threading Support}
     556\label{s:UserlevelThreadingSupport}
    556557
    557558The serially-reusable problem (see \VRef{s:AllocationFastpath}) occurs for kernel threads in the ``T:H model, H = number of CPUs'' model and for user threads in the ``1:1'' model, where llheap uses the ``1:1'' model.
     
    670671It is possible to zero fill or align an allocation but not both.
    671672\item
    672 It is \emph{only} possible to zero fill and array allocation.
     673It is \emph{only} possible to zero fill an array allocation.
    673674\item
    674675It is not possible to resize a memory allocation without data copying.
     
    687688void free( void * ptr );
    688689void * memalign( size_t alignment, size_t size );
     690void * aligned_alloc( size_t alignment, size_t size );
     691int posix_memalign( void ** memptr, size_t alignment, size_t size );
    689692void * valloc( size_t size );
    690693void * pvalloc( size_t size );
     694
    691695struct mallinfo mallinfo( void );
    692696int mallopt( int param, int val );
     
    707711Most allocators use @nullptr@ to indicate an allocation failure, specifically out of memory;
    708712hence the need to return an alternate value for a zero-sized allocation.
    709 The alternative is to abort a program when out of memory.
    710 In theory, notifying the programmer allows recovery;
    711 in practice, it is almost impossible to gracefully recover when out of memory, so the cheaper approach of returning @nullptr@ for a zero-sized allocation is chosen for llheap.
     713A different approach allowed by the C API is to abort a program when out of memory and return @nullptr@ for a zero-sized allocation.
     714In theory, notifying the programmer of memory failure allows recovery;
     715in practice, it is almost impossible to gracefully recover when out of memory.
     716Hence, the cheaper approach of returning @nullptr@ for a zero-sized allocation is chosen because no pseudo allocation is necessary.
    712717
    713718
    714719\subsection{C Interface}
    715720
    716 Within the C type-system, it is still possible to increase orthogonality and functionality of the dynamic-memory API to make the allocator more usable for programmers.
     721For C, it is possible to increase the functionality and orthogonality of the dynamic-memory API, making allocation more convenient for programmers.
     722
     723For existing C allocation routines:
     724\begin{itemize}
     725\item
     726@calloc@ sets the sticky zero-fill property.
     727\item
     728@memalign@, @aligned_alloc@, @posix_memalign@, @valloc@ and @pvalloc@ set the sticky alignment property.
     729\item
     730@realloc@ and @reallocarray@ preserve sticky properties.
     731\end{itemize}
     732
     733The C dynamic-memory API is extended with the following routines:
    717734
    718735\paragraph{\lstinline{void * aalloc( size_t dim, size_t elemSize )}}
    719 @aalloc@ is an extension of malloc.
    720 It allows programmer to allocate a dynamic array of objects without calculating the total size of array explicitly.
    721 The only alternate of this routine in the other allocators is @calloc@ but @calloc@ also fills the dynamic memory with 0 which makes it slower for a programmer who only wants to dynamically allocate an array of objects without filling it with 0.
    722 \paragraph{Usage}
     736extends @calloc@ for allocating a dynamic array of objects without explicitly calculating the total array size, but \emph{without} zero-filling the memory.
     737@aalloc@ is significantly faster than @calloc@, which is the only alternative.
     738
     739\noindent\textbf{Usage}
    723740@aalloc@ takes two parameters.
    724 
    725 \begin{itemize}
    726 \item
    727 @dim@: number of objects in the array
    728 \item
    729 @elemSize@: size of the object in the array.
    730 \end{itemize}
    731 It returns address of dynamic object allocated on heap that can contain dim number of objects of the size elemSize.
    732 On failure, it returns a @NULL@ pointer.
     741\begin{itemize}
     742\item
     743@dim@: number of array objects
     744\item
     745@elemSize@: size of array object
     746\end{itemize}
     747It returns the address of the dynamic array or @NULL@ if either @dim@ or @elemSize@ is zero.
    733748
    734749\paragraph{\lstinline{void * resize( void * oaddr, size_t size )}}
    735 @resize@ is an extension of relloc.
    736 It allows programmer to reuse a currently allocated dynamic object with a new size requirement.
    737 Its alternate in the other allocators is @realloc@ but relloc also copy the data in old object to the new object which makes it slower for the programmer who only wants to reuse an old dynamic object for a new size requirement but does not want to preserve the data in the old object to the new object.
    738 \paragraph{Usage}
     750extends @realloc@ for resizing an existing allocation \emph{without} copying previous data into the new allocation or preserving sticky properties.
     751@resize@ is significantly faster than @realloc@, which is the only alternative.
     752
     753\noindent\textbf{Usage}
    739754@resize@ takes two parameters.
    740 
    741 \begin{itemize}
    742 \item
    743 @oaddr@: the address of the old object that needs to be resized.
    744 \item
    745 @size@: the new size requirement of the to which the old object needs to be resized.
    746 \end{itemize}
    747 It returns an object that is of the size given but it does not preserve the data in the old object.
    748 On failure, it returns a @NULL@ pointer.
     755\begin{itemize}
     756\item
     757@oaddr@: address to be resized
     758\item
     759@size@: new allocation size (smaller or larger than previous)
     760\end{itemize}
     761It returns the address of the old or new storage with the specified new size or @NULL@ if @size@ is zero.
     762
     763\paragraph{\lstinline{void * amemalign( size_t alignment, size_t dim, size_t elemSize )}}
     764extends @aalloc@ and @memalign@ for allocating an aligned dynamic array of objects.
     765Sets sticky alignment property.
     766
     767\noindent\textbf{Usage}
     768@amemalign@ takes three parameters.
     769\begin{itemize}
     770\item
     771@alignment@: alignment requirement
     772\item
     773@dim@: number of array objects
     774\item
     775@elemSize@: size of array object
     776\end{itemize}
     777It returns the address of the aligned dynamic array or @NULL@ if either @dim@ or @elemSize@ is zero.
     778
     779\paragraph{\lstinline{void * cmemalign( size_t alignment, size_t dim, size_t elemSize )}}
     780extends @amemalign@ with zero fill and has the same usage as @amemalign@.
     781Sets sticky zero-fill and alignment property.
     782It returns the address of the aligned, zero-filled dynamic array or @NULL@ if either @dim@ or @elemSize@ is zero.
     783
     784\paragraph{\lstinline{size_t malloc_alignment( void * addr )}}
     785returns the alignment of the dynamic object for use in aligning similar allocations.
     786
     787\noindent\textbf{Usage}
     788@malloc_alignment@ takes one parameter.
     789\begin{itemize}
     790\item
     791@addr@: address of an allocated object.
     792\end{itemize}
     793It returns the alignment of the given object, where objects not allocated with alignment return the minimal allocation alignment.
     794
     795\paragraph{\lstinline{bool malloc_zero_fill( void * addr )}}
     796returns true if the object has the zero-fill sticky property for use in zero filling similar allocations.
     797
     798\noindent\textbf{Usage}
     799@malloc_zero_fill@ takes one parameter.
     800
     801\begin{itemize}
     802\item
     803@addr@: address of an allocated object.
     804\end{itemize}
     805It returns true if the zero-fill sticky property is set and false otherwise.
     806
     807\paragraph{\lstinline{size_t malloc_size( void * addr )}}
     808returns the request size of the dynamic object (updated when an object is resized) for use in similar allocations.
     809See also @malloc_usable_size@.
     810
     811\noindent\textbf{Usage}
     812@malloc_size@ takes one parameter.
     813\begin{itemize}
     814\item
     815@addr@: address of an allocated object.
     816\end{itemize}
     817It returns the request size or zero if @addr@ is @NULL@.
     818
     819\paragraph{\lstinline{int malloc_stats_fd( int fd )}}
     820changes the file descriptor where @malloc_stats@ writes statistics (default @stdout@).
     821
     822\noindent\textbf{Usage}
     823@malloc_stats_fd@ takes one parameter.
     824\begin{itemize}
     825\item
     826@fd@: file descriptor.
     827\end{itemize}
     828It returns the previous file descriptor.
     829
     830\paragraph{\lstinline{size_t malloc_expansion()}}
     831\label{p:malloc_expansion}
     832sets the amount (in bytes) by which to extend the heap when there is insufficient free storage to service an allocation request.
     833It returns the heap extension size used throughout a program, \ie called once at heap initialization.
     834
     835\paragraph{\lstinline{size_t malloc_mmap_start()}}
     836sets the crossover between allocations occurring in the @sbrk@ area or separately mapped.
     837It returns the crossover point used throughout a program, \ie called once at heap initialization.
     838
     839\paragraph{\lstinline{size_t malloc_unfreed()}}
     840\label{p:malloc_unfreed}
     841sets the amount subtracted to adjust for unfreed program storage (debug only).
     842It returns the new subtraction amount; it is called by @malloc_stats@.
     843
     844
     845\subsection{\CC Interface}
     846
     847The following extensions take advantage of overload polymorphism in the \CC type-system.
    749848
    750849\paragraph{\lstinline{void * resize( void * oaddr, size_t nalign, size_t size )}}
    751 This @resize@ is an extension of the above @resize@ (FIX ME: cite above resize).
    752 In addition to resizing the size of of an old object, it can also realign the old object to a new alignment requirement.
    753 \paragraph{Usage}
    754 This resize takes three parameters.
    755 It takes an additional parameter of nalign as compared to the above resize (FIX ME: cite above resize).
    756 
    757 \begin{itemize}
    758 \item
    759 @oaddr@: the address of the old object that needs to be resized.
    760 \item
    761 @nalign@: the new alignment to which the old object needs to be realigned.
    762 \item
    763 @size@: the new size requirement of the to which the old object needs to be resized.
    764 \end{itemize}
    765 It returns an object with the size and alignment given in the parameters.
    766 On failure, it returns a @NULL@ pointer.
    767 
    768 \paragraph{\lstinline{void * amemalign( size_t alignment, size_t dim, size_t elemSize )}}
    769 amemalign is a hybrid of memalign and aalloc.
    770 It allows programmer to allocate an aligned dynamic array of objects without calculating the total size of the array explicitly.
    771 It frees the programmer from calculating the total size of the array.
    772 \paragraph{Usage}
    773 amemalign takes three parameters.
    774 
    775 \begin{itemize}
    776 \item
    777 @alignment@: the alignment to which the dynamic array needs to be aligned.
    778 \item
    779 @dim@: number of objects in the array
    780 \item
    781 @elemSize@: size of the object in the array.
    782 \end{itemize}
    783 It returns a dynamic array of objects that has the capacity to contain dim number of objects of the size of elemSize.
    784 The returned dynamic array is aligned to the given alignment.
    785 On failure, it returns a @NULL@ pointer.
    786 
    787 \paragraph{\lstinline{void * cmemalign( size_t alignment, size_t dim, size_t elemSize )}}
    788 cmemalign is a hybrid of amemalign and calloc.
    789 It allows programmer to allocate an aligned dynamic array of objects that is 0 filled.
    790 The current way to do this in other allocators is to allocate an aligned object with memalign and then fill it with 0 explicitly.
    791 This routine provides both features of aligning and 0 filling, implicitly.
    792 \paragraph{Usage}
    793 cmemalign takes three parameters.
    794 
    795 \begin{itemize}
    796 \item
    797 @alignment@: the alignment to which the dynamic array needs to be aligned.
    798 \item
    799 @dim@: number of objects in the array
    800 \item
    801 @elemSize@: size of the object in the array.
    802 \end{itemize}
    803 It returns a dynamic array of objects that has the capacity to contain dim number of objects of the size of elemSize.
    804 The returned dynamic array is aligned to the given alignment and is 0 filled.
    805 On failure, it returns a @NULL@ pointer.
    806 
    807 \paragraph{\lstinline{size_t malloc_alignment( void * addr )}}
    808 @malloc_alignment@ returns the alignment of a currently allocated dynamic object.
    809 It allows the programmer in memory management and personal bookkeeping.
    810 It helps the programmer in verifying the alignment of a dynamic object especially in a scenario similar to producer-consumer where a producer allocates a dynamic object and the consumer needs to assure that the dynamic object was allocated with the required alignment.
    811 \paragraph{Usage}
    812 @malloc_alignment@ takes one parameters.
    813 
    814 \begin{itemize}
    815 \item
    816 @addr@: the address of the currently allocated dynamic object.
    817 \end{itemize}
    818 @malloc_alignment@ returns the alignment of the given dynamic object.
    819 On failure, it return the value of default alignment of the llheap allocator.
    820 
    821 \paragraph{\lstinline{bool malloc_zero_fill( void * addr )}}
    822 @malloc_zero_fill@ returns whether a currently allocated dynamic object was initially zero filled at the time of allocation.
    823 It allows the programmer in memory management and personal bookkeeping.
    824 It helps the programmer in verifying the zero filled property of a dynamic object especially in a scenario similar to producer-consumer where a producer allocates a dynamic object and the consumer needs to assure that the dynamic object was zero filled at the time of allocation.
    825 \paragraph{Usage}
    826 @malloc_zero_fill@ takes one parameters.
    827 
    828 \begin{itemize}
    829 \item
    830 @addr@: the address of the currently allocated dynamic object.
    831 \end{itemize}
    832 @malloc_zero_fill@ returns true if the dynamic object was initially zero filled and return false otherwise.
    833 On failure, it returns false.
    834 
    835 \paragraph{\lstinline{size_t malloc_size( void * addr )}}
    836 @malloc_size@ returns the request size of a currently allocated dynamic object.
    837 It allows the programmer in memory management and personal bookkeeping.
    838 It helps the programmer in verifying the alignment of a dynamic object especially in a scenario similar to producer-consumer where a producer allocates a dynamic object and the consumer needs to assure that the dynamic object was allocated with the required size.
    839 Its current alternate in the other allocators is @malloc_usable_size@.
    840 But, @malloc_size@ is different from @malloc_usable_size@ as @malloc_usabe_size@ returns the total data capacity of dynamic object including the extra space at the end of the dynamic object.
    841 On the other hand, @malloc_size@ returns the size that was given to the allocator at the allocation of the dynamic object.
    842 This size is updated when an object is realloced, resized, or passed through a similar allocator routine.
    843 \paragraph{Usage}
    844 @malloc_size@ takes one parameters.
    845 
    846 \begin{itemize}
    847 \item
    848 @addr@: the address of the currently allocated dynamic object.
    849 \end{itemize}
    850 @malloc_size@ returns the request size of the given dynamic object.
    851 On failure, it return zero.
    852 
    853 
    854 \subsection{\CC Interface}
     850extends @resize@ with an alignment re\-quirement.
     851
     852\noindent\textbf{Usage}
     853This @resize@ takes three parameters.
     854\begin{itemize}
     855\item
     856@oaddr@: address to be resized
     857\item
     858@nalign@: alignment requirement
     859\item
     860@size@: new allocation size (smaller or larger than previous)
     861\end{itemize}
     862It returns the address of the old or new storage with the specified new size and alignment, or @NULL@ if @size@ is zero.
    855863
    856864\paragraph{\lstinline{void * realloc( void * oaddr, size_t nalign, size_t size )}}
    857 This @realloc@ is an extension of the default @realloc@ (FIX ME: cite default @realloc@).
    858 In addition to reallocating an old object and preserving the data in old object, it can also realign the old object to a new alignment requirement.
    859 \paragraph{Usage}
    860 This @realloc@ takes three parameters.
    861 It takes an additional parameter of nalign as compared to the default @realloc@.
    862 
    863 \begin{itemize}
    864 \item
    865 @oaddr@: the address of the old object that needs to be reallocated.
    866 \item
    867 @nalign@: the new alignment to which the old object needs to be realigned.
    868 \item
    869 @size@: the new size requirement of the to which the old object needs to be resized.
    870 \end{itemize}
    871 It returns an object with the size and alignment given in the parameters that preserves the data in the old object.
    872 On failure, it returns a @NULL@ pointer.
     865extends @realloc@ with an alignment re\-quirement and has the same usage as aligned @resize@.
    873866
    874867
    875868\subsection{\CFA Interface}
    876 We added some routines to the @malloc@ interface of \CFA.
    877 These routines can only be used in \CFA and not in our stand-alone llheap allocator as these routines use some features that are only provided by \CFA and not by C.
    878 It makes the allocator even more usable to the programmers.
    879 \CFA provides the liberty to know the returned type of a call to the allocator.
    880 So, mainly in these added routines, we removed the object size parameter from the routine as allocator can calculate the size of the object from the returned type.
    881 
    882 \subsection{\lstinline{T * malloc( void )}}
    883 This @malloc@ is a simplified polymorphic form of default @malloc@ (FIX ME: cite malloc).
    884 It does not take any parameter as compared to default @malloc@ that takes one parameter.
    885 \paragraph{Usage}
    886 This @malloc@ takes no parameters.
    887 It returns a dynamic object of the size of type @T@.
    888 On failure, it returns a @NULL@ pointer.
    889 
    890 \subsection{\lstinline{T * aalloc( size_t dim )}}
    891 This @aalloc@ is a simplified polymorphic form of above @aalloc@ (FIX ME: cite aalloc).
    892 It takes one parameter as compared to the above @aalloc@ that takes two parameters.
    893 \paragraph{Usage}
    894 aalloc takes one parameters.
    895 
    896 \begin{itemize}
    897 \item
    898 @dim@: required number of objects in the array.
    899 \end{itemize}
    900 It returns a dynamic object that has the capacity to contain dim number of objects, each of the size of type @T@.
    901 On failure, it returns a @NULL@ pointer.
    902 
    903 \subsection{\lstinline{T * calloc( size_t dim )}}
    904 This @calloc@ is a simplified polymorphic form of default @calloc@ (FIX ME: cite calloc).
    905 It takes one parameter as compared to the default @calloc@ that takes two parameters.
    906 \paragraph{Usage}
    907 This @calloc@ takes one parameter.
    908 
    909 \begin{itemize}
    910 \item
    911 @dim@: required number of objects in the array.
    912 \end{itemize}
    913 It returns a dynamic object that has the capacity to contain dim number of objects, each of the size of type @T@.
    914 On failure, it returns a @NULL@ pointer.
    915 
    916 \subsection{\lstinline{T * resize( T * ptr, size_t size )}}
    917 This resize is a simplified polymorphic form of above resize (FIX ME: cite resize with alignment).
    918 It takes two parameters as compared to the above resize that takes three parameters.
    919 It frees the programmer from explicitly mentioning the alignment of the allocation as \CFA provides gives allocator the liberty to get the alignment of the returned type.
    920 \paragraph{Usage}
    921 This resize takes two parameters.
    922 
    923 \begin{itemize}
    924 \item
    925 @ptr@: address of the old object.
    926 \item
    927 @size@: the required size of the new object.
    928 \end{itemize}
    929 It returns a dynamic object of the size given in parameters.
    930 The returned object is aligned to the alignment of type @T@.
    931 On failure, it returns a @NULL@ pointer.
    932 
    933 \subsection{\lstinline{T * realloc( T * ptr, size_t size )}}
    934 This @realloc@ is a simplified polymorphic form of default @realloc@ (FIX ME: cite @realloc@ with align).
    935 It takes two parameters as compared to the above @realloc@ that takes three parameters.
    936 It frees the programmer from explicitly mentioning the alignment of the allocation as \CFA provides gives allocator the liberty to get the alignment of the returned type.
    937 \paragraph{Usage}
    938 This @realloc@ takes two parameters.
    939 
    940 \begin{itemize}
    941 \item
    942 @ptr@: address of the old object.
    943 \item
    944 @size@: the required size of the new object.
    945 \end{itemize}
    946 It returns a dynamic object of the size given in parameters that preserves the data in the given object.
    947 The returned object is aligned to the alignment of type @T@.
    948 On failure, it returns a @NULL@ pointer.
    949 
    950 \subsection{\lstinline{T * memalign( size_t align )}}
    951 This memalign is a simplified polymorphic form of default memalign (FIX ME: cite memalign).
    952 It takes one parameters as compared to the default memalign that takes two parameters.
    953 \paragraph{Usage}
    954 memalign takes one parameters.
    955 
    956 \begin{itemize}
    957 \item
    958 @align@: the required alignment of the dynamic object.
    959 \end{itemize}
    960 It returns a dynamic object of the size of type @T@ that is aligned to given parameter align.
    961 On failure, it returns a @NULL@ pointer.
    962 
    963 \subsection{\lstinline{T * amemalign( size_t align, size_t dim )}}
    964 This amemalign is a simplified polymorphic form of above amemalign (FIX ME: cite amemalign).
    965 It takes two parameter as compared to the above amemalign that takes three parameters.
    966 \paragraph{Usage}
    967 amemalign takes two parameters.
    968 
    969 \begin{itemize}
    970 \item
    971 @align@: required alignment of the dynamic array.
    972 \item
    973 @dim@: required number of objects in the array.
    974 \end{itemize}
    975 It returns a dynamic object that has the capacity to contain dim number of objects, each of the size of type @T@.
    976 The returned object is aligned to the given parameter align.
    977 On failure, it returns a @NULL@ pointer.
    978 
    979 \subsection{\lstinline{T * cmemalign( size_t align, size_t dim  )}}
    980 This cmemalign is a simplified polymorphic form of above cmemalign (FIX ME: cite cmemalign).
    981 It takes two parameter as compared to the above cmemalign that takes three parameters.
    982 \paragraph{Usage}
    983 cmemalign takes two parameters.
    984 
    985 \begin{itemize}
    986 \item
    987 @align@: required alignment of the dynamic array.
    988 \item
    989 @dim@: required number of objects in the array.
    990 \end{itemize}
    991 It returns a dynamic object that has the capacity to contain dim number of objects, each of the size of type @T@.
    992 The returned object is aligned to the given parameter align and is zero filled.
    993 On failure, it returns a @NULL@ pointer.
    994 
    995 \subsection{\lstinline{T * aligned_alloc( size_t align )}}
    996 This @aligned_alloc@ is a simplified polymorphic form of default @aligned_alloc@ (FIX ME: cite @aligned_alloc@).
    997 It takes one parameter as compared to the default @aligned_alloc@ that takes two parameters.
    998 \paragraph{Usage}
    999 This @aligned_alloc@ takes one parameter.
    1000 
    1001 \begin{itemize}
    1002 \item
    1003 @align@: required alignment of the dynamic object.
    1004 \end{itemize}
    1005 It returns a dynamic object of the size of type @T@ that is aligned to the given parameter.
    1006 On failure, it returns a @NULL@ pointer.
    1007 
    1008 \subsection{\lstinline{int posix_memalign( T ** ptr, size_t align )}}
    1009 This @posix_memalign@ is a simplified polymorphic form of default @posix_memalign@ (FIX ME: cite @posix_memalign@).
    1010 It takes two parameters as compared to the default @posix_memalign@ that takes three parameters.
    1011 \paragraph{Usage}
    1012 This @posix_memalign@ takes two parameter.
    1013 
    1014 \begin{itemize}
    1015 \item
    1016 @ptr@: variable address to store the address of the allocated object.
    1017 \item
    1018 @align@: required alignment of the dynamic object.
    1019 \end{itemize}
    1020 
    1021 It stores address of the dynamic object of the size of type @T@ in given parameter ptr.
    1022 This object is aligned to the given parameter.
    1023 On failure, it returns a @NULL@ pointer.
    1024 
    1025 \subsection{\lstinline{T * valloc( void )}}
    1026 This @valloc@ is a simplified polymorphic form of default @valloc@ (FIX ME: cite @valloc@).
    1027 It takes no parameters as compared to the default @valloc@ that takes one parameter.
    1028 \paragraph{Usage}
    1029 @valloc@ takes no parameters.
    1030 It returns a dynamic object of the size of type @T@ that is aligned to the page size.
    1031 On failure, it returns a @NULL@ pointer.
    1032 
    1033 \subsection{\lstinline{T * pvalloc( void )}}
    1034 \paragraph{Usage}
    1035 @pvalloc@ takes no parameters.
    1036 It returns a dynamic object of the size that is calculated by rounding the size of type @T@.
    1037 The returned object is also aligned to the page size.
    1038 On failure, it returns a @NULL@ pointer.
    1039 
    1040 \subsection{Alloc Interface}
    1041 In addition to improve allocator interface both for \CFA and our stand-alone allocator llheap in C.
    1042 We also added a new alloc interface in \CFA that increases usability of dynamic memory allocation.
    1043 This interface helps programmers in three major ways.
    1044 
    1045 \begin{itemize}
    1046 \item
    1047 Routine Name: alloc interface frees programmers from remembering different routine names for different kind of dynamic allocations.
    1048 \item
    1049 Parameter Positions: alloc interface frees programmers from remembering parameter positions in call to routines.
    1050 \item
    1051 Object Size: alloc interface does not require programmer to mention the object size as \CFA allows allocator to determine the object size from returned type of alloc call.
    1052 \end{itemize}
    1053 
    1054 Alloc interface uses polymorphism, backtick routines (FIX ME: cite backtick) and ttype parameters of \CFA (FIX ME: cite ttype) to provide a very simple dynamic memory allocation interface to the programmers.
    1055 The new interface has just one routine name alloc that can be used to perform a wide range of dynamic allocations.
    1056 The parameters use backtick functions to provide a similar-to named parameters feature for our alloc interface so that programmers do not have to remember parameter positions in alloc call except the position of dimension (dim) parameter.
    1057 
    1058 \subsection{Routine: \lstinline{T * alloc( ...
    1059 )}}
    1060 A call to alloc without any parameters returns one dynamically allocated object of type @T@.
    1061 Only the dimension (dim) parameter for array allocation has a fixed position in the alloc routine.
    1062 To allocate an array of objects, the required number of members must be given as the first parameter to the alloc routine.
    1063 The alloc routine accepts six kinds of arguments.
    1064 Using different combinations of these parameters, different kinds of allocation can be performed.
    1065 Any combination of parameters can be used together except @`realloc@ and @`resize@, which should not be used simultaneously in one call to the routine, as that creates ambiguity about whether to reallocate or resize a currently allocated dynamic object.
    1066 If both @`resize@ and @`realloc@ are used in a call to alloc, the latter takes effect or unexpected results may be produced.
    1067 
    1068 \paragraph{Dim}
    1069 This is the only parameter in the alloc routine that has a fixed position, and it is also the only parameter that does not use a backtick function.
    1070 It must be passed in the first position of the alloc call in case of an array allocation of objects of type @T@.
    1071 It represents the required number of members in the array allocation, as in \CFA's @aalloc@ (FIX ME: cite aalloc).
    1072 This parameter must be of type @size_t@.
    1073 
    1074 Example: @int * a = alloc( 5 )@
    1075 This call returns a dynamic array of five integers.
    1076 
    1077 \paragraph{Align}
    1078 This parameter is position-free and uses a backtick routine align (@`align@).
    1079 The parameter passed with @`align@ should be of type @size_t@.
    1080 If the alignment parameter is not a power of two or is less than the default alignment of the allocator (obtained using the routine @libAlign@ in \CFA), the passed alignment parameter is rejected and the default alignment is used.
    1081 
    1082 Example: @int * b = alloc( 5 , 64`align )@
    1083 This call returns a dynamic array of five integers.
    1084 It aligns the allocated object on a 64-byte boundary.
    1085 
    1086 \paragraph{Fill}
    1087 This parameter is position-free and uses a backtick routine fill (@`fill@).
    1088 In the case of @`realloc@, only the extra space after copying the data from the old object is filled with the given parameter.
    1089 Three types of parameters can be passed using @`fill@.
    1090 
    1091 \begin{itemize}
    1092 \item
    1093 @char@: a char can be passed with @`fill@ to fill each byte of the dynamic allocation with the given char, repeated to the end of the required allocation.
    1094 \item
    1095 Object of returned type: an object of the returned type can be passed with @`fill@ to fill the dynamic allocation with copies of the given object, repeated to the end of the required allocation.
    1096 \item
    1097 Dynamic object of returned type: a dynamic object of the returned type can be passed with @`fill@ to fill the dynamic allocation with the given dynamic object.
    1098 In this case, the allocated memory is not filled repeatedly to the end of the allocation.
    1099 Filling stops at the end of the object passed to @`fill@ or the end of the requested allocation, whichever comes first.
    1100 \end{itemize}
    1101 
    1102 Example: @int * b = alloc( 5 , 'a'`fill )@
    1103 This call returns a dynamic array of five integers.
    1104 It fills the allocated object with the character 'a', repeated to the end of the requested allocation size.
    1105 
    1106 Example: @int * b = alloc( 5 , 4`fill )@
    1107 This call returns a dynamic array of five integers.
    1108 It fills the allocated object with the integer 4, repeated to the end of the requested allocation size.
    1109 
    1110 Example: @int * b = alloc( 5 , a`fill )@ where @a@ is an int pointer
    1111 This call returns a dynamic array of five integers.
    1112 It copies the data in @a@ to the returned object, non-repeating, until the end of @a@ or of the newly allocated object is reached.
    1113 
    1114 \paragraph{Resize}
    1115 This parameter is position-free and uses a backtick routine resize (@`resize@).
    1116 It represents the old dynamic object (oaddr) that the programmer wants to
    1117 \begin{itemize}
    1118 \item
    1119 resize to a new size.
    1120 \item
    1121 realign to a new alignment.
    1122 \item
    1123 fill with something.
    1124 \end{itemize}
    1125 The data in the old dynamic object is not preserved in the new object.
    1126 The type of the object passed to @`resize@ and the return type of the alloc call can be different.
    1127 
    1128 Example: @int * b = alloc( 5 , a`resize )@
    1129 This call resizes object @a@ to a dynamic array that can contain 5 integers.
    1130 
    1131 Example: @int * b = alloc( 5 , a`resize , 32`align )@
    1132 This call resizes object @a@ to a dynamic array that can contain 5 integers.
    1133 The returned object is also aligned on a 32-byte boundary.
    1134 
    1135 Example: @int * b = alloc( 5 , a`resize , 32`align , 2`fill )@
    1136 This call resizes object @a@ to a dynamic array that can contain 5 integers.
    1137 The returned object is also aligned on a 32-byte boundary and filled with 2.
    1138 
    1139 \paragraph{Realloc}
    1140 This parameter is position-free and uses a backtick routine @realloc@ (@`realloc@).
    1141 It represents the old dynamic object (oaddr) that the programmer wants to
    1142 \begin{itemize}
    1143 \item
    1144 realloc to a new size.
    1145 \item
    1146 realign to a new alignment.
    1147 \item
    1148 fill with something.
    1149 \end{itemize}
    1150 The data in the old dynamic object is preserved in the new object.
    1151 The type of the object passed to @`realloc@ and the return type of the alloc call cannot be different.
    1152 
    1153 Example: @int * b = alloc( 5 , a`realloc )@
    1154 This call reallocs object @a@ to a dynamic array that can contain 5 integers.
    1155 
    1156 Example: @int * b = alloc( 5 , a`realloc , 32`align )@
    1157 This call reallocs object @a@ to a dynamic array that can contain 5 integers.
    1158 The returned object is also aligned on a 32-byte boundary.
    1159 
    1160 Example: @int * b = alloc( 5 , a`realloc , 32`align , 2`fill )@
    1161 This call reallocs object @a@ to a dynamic array that can contain 5 integers.
    1162 The returned object is also aligned on a 32-byte boundary.
    1163 The extra space after copying the data of @a@ to the returned object is filled with 2.
     869
     870The following extensions take advantage of overload polymorphism in the \CFA type-system.
     871The key safety advantage of the \CFA type system is using the return type to select overloads;
     872hence, a polymorphic routine knows the returned type and its size.
     873This capability is used to remove the object size parameter and correctly cast the return storage to match the result type.
     874For example, the following is the \CFA wrapper for C @malloc@:
     875\begin{cfa}
     876forall( T & | sized(T) )
     877        T * malloc( void ) {
     878                if ( _Alignof(T) <= libAlign() ) return @(T *)@malloc( @sizeof(T)@ ); // C allocation
     879                else return @(T *)@memalign( @_Alignof(T)@, @sizeof(T)@ ); // C allocation
     880        } // malloc
     881\end{cfa}
     882and is used as follows:
     883\begin{lstlisting}
     884int * i = malloc();
     885double * d = malloc();
     886struct Spinlock { ... } __attribute__(( aligned(128) ));
     887Spinlock * sl = malloc();
     888\end{lstlisting}
     889where each @malloc@ call provides the return type as @T@, which is used with @sizeof@, @_Alignof@, and casting the storage to the correct type.
     890This interface removes many of the common allocation errors in C programs.
     891\VRef[Figure]{f:CFADynamicAllocationAPI} shows the \CFA wrappers for the equivalent C/\CC allocation routines with the same semantic behaviour.
     892
     893\begin{figure}
     894\begin{lstlisting}
     895T * malloc( void );
     896T * aalloc( size_t dim );
     897T * calloc( size_t dim );
     898T * resize( T * ptr, size_t size );
     899T * realloc( T * ptr, size_t size );
     900T * memalign( size_t align );
     901T * amemalign( size_t align, size_t dim );
     902T * cmemalign( size_t align, size_t dim  );
     903T * aligned_alloc( size_t align );
     904int posix_memalign( T ** ptr, size_t align );
     905T * valloc( void );
     906T * pvalloc( void );
     907\end{lstlisting}
     908\caption{\CFA C-Style Dynamic-Allocation API}
     909\label{f:CFADynamicAllocationAPI}
     910\end{figure}
     911
     912In addition to the \CFA C-style allocator interface, a new allocator interface is provided to further increase orthogonality and usability of dynamic-memory allocation.
     913This interface helps programmers in three ways.
     914\begin{itemize}
     915\item
     916naming: \CFA regular and @ttype@ polymorphism is used to encapsulate a wide range of allocation functionality into a single routine name, so programmers do not have to remember multiple routine names for different kinds of dynamic allocations.
     917\item
     918named arguments: individual allocation properties are specified using postfix function call, so programmers do not have to remember parameter positions in allocation calls.
     919\item
     920object size: like the \CFA C-style interface, programmers do not have to specify object size or cast allocation results.
     921\end{itemize}
     922Note, postfix function call is an alternative call syntax, using backtick @`@, where the argument appears before the function name, \eg
     923\begin{cfa}
     924duration ?@`@h( int h );                // ? denotes the position of the function operand
     925duration ?@`@m( int m );
     926duration ?@`@s( int s );
     927duration dur = 3@`@h + 42@`@m + 17@`@s;
     928\end{cfa}
     929@ttype@ polymorphism is similar to \CC variadic templates.
     930
     931\paragraph{\lstinline{T * alloc( ... )} or \lstinline{T * alloc( size_t dim, ... )}}
     932is overloaded with a variable number of specific allocation routines, or an integer dimension parameter followed by a variable number of specific allocation routines.
     933A call without parameters returns a dynamically allocated object of type @T@ (@malloc@).
     934A call with only the dimension (dim) parameter returns a dynamically allocated array of objects of type @T@ (@aalloc@).
     935The variable number of arguments consist of allocation properties, which can be combined to produce different kinds of allocations.
     936The only restriction is for properties @realloc@ and @resize@, which cannot be combined.
     937
     938The allocation property functions are:
     939\subparagraph{\lstinline{T_align ?`align( size_t alignment )}}
     940to align the allocation.
     941The alignment parameter must be $\ge$ the default alignment (@libAlign()@ in \CFA) and a power of two, \eg:
     942\begin{cfa}
     943int * i0 = alloc( @4096`align@ );  sout | i0 | nl;
     944int * i1 = alloc( 3, @4096`align@ );  sout | i1; for (i; 3 ) sout | &i1[i]; sout | nl;
     945
     9460x555555572000
     9470x555555574000 0x555555574000 0x555555574004 0x555555574008
     948\end{cfa}
     949returns a dynamic object and object array aligned on a 4096-byte boundary.
     950
     951\subparagraph{\lstinline{S_fill(T) ?`fill ( /* various types */ )}}
     952to initialize storage.
     953There are three ways to fill storage:
     954\begin{enumerate}
     955\item
     956A char fills each byte of each object.
     957\item
     958An object of the returned type fills each object.
     959\item
     960An object array pointer fills some or all of the corresponding object array.
     961\end{enumerate}
     962For example:
     963\begin{cfa}[numbers=left]
     964int * i0 = alloc( @0n`fill@ );  sout | *i0 | nl;  // disambiguate 0
     965int * i1 = alloc( @5`fill@ );  sout | *i1 | nl;
     966int * i2 = alloc( @'\xfe'`fill@ ); sout | hex( *i2 ) | nl;
     967int * i3 = alloc( 5, @5`fill@ );  for ( i; 5 ) sout | i3[i]; sout | nl;
     968int * i4 = alloc( 5, @0xdeadbeefN`fill@ );  for ( i; 5 ) sout | hex( i4[i] ); sout | nl;
     969int * i5 = alloc( 5, @i3`fill@ );  for ( i; 5 ) sout | i5[i]; sout | nl;
     970int * i6 = alloc( 5, @[i3, 3]`fill@ );  for ( i; 5 ) sout | i6[i]; sout | nl;
     971\end{cfa}
     972\begin{lstlisting}[numbers=left]
     9730
     9745
     9750xfefefefe
     9765 5 5 5 5
     9770xdeadbeef 0xdeadbeef 0xdeadbeef 0xdeadbeef 0xdeadbeef
     9785 5 5 5 5
     9795 5 5 -555819298 -555819298  // two undefined values
     980\end{lstlisting}
     981Examples 1 to 3 fill an object with a value or characters.
     982Examples 4 to 7 fill an array of objects with values, another array, or part of an array.
     983
     984\subparagraph{\lstinline{S_resize(T) ?`resize( void * oaddr )}}
     985used to resize, realign, and fill, where the old object data is not copied to the new object.
     986The old object type may be different from the new object type, since the values are not used.
     987For example:
     988\begin{cfa}[numbers=left]
     989int * i = alloc( @5`fill@ );  sout | i | *i;
     990i = alloc( @i`resize@, @256`align@, @7`fill@ );  sout | i | *i;
     991double * d = alloc( @i`resize@, @4096`align@, @13.5`fill@ );  sout | d | *d;
     992\end{cfa}
     993\begin{lstlisting}[numbers=left]
     9940x55555556d5c0 5
     9950x555555570000 7
     9960x555555571000 13.5
     997\end{lstlisting}
     999Examples 2 and 3 change the alignment, fill, and size for the initial storage of @i@.
     999
     1000\begin{cfa}[numbers=left]
     1001int * ia = alloc( 5, @5`fill@ );  for ( i; 5 ) sout | ia[i]; sout | nl;
     1002ia = alloc( 10, @ia`resize@, @7`fill@ ); for ( i; 10 ) sout | ia[i]; sout | nl;
      1003sout | ia; ia = alloc( 5, @ia`resize@, @512`align@, @13`fill@ ); sout | ia; for ( i; 5 ) sout | ia[i]; sout | nl;
     1004ia = alloc( 3, @ia`resize@, @4096`align@, @2`fill@ );  sout | ia; for ( i; 3 ) sout | &ia[i] | ia[i]; sout | nl;
     1005\end{cfa}
     1006\begin{lstlisting}[numbers=left]
     10075 5 5 5 5
     10087 7 7 7 7 7 7 7 7 7
     10090x55555556d560 0x555555571a00 13 13 13 13 13
     10100x555555572000 0x555555572000 2 0x555555572004 2 0x555555572008 2
     1011\end{lstlisting}
     1012Examples 2 to 4 change the array size, alignment and fill for the initial storage of @ia@.
     1013
      1014\subparagraph{\lstinline{S_realloc(T) ?`realloc( T * a )}}
     1015used to resize, realign, and fill, where the old object data is copied to the new object.
      1016The old object type must be the same as the new object type, since the values are copied.
     1017Note, for @fill@, only the extra space after copying the data from the old object is filled with the given parameter.
     1018For example:
     1019\begin{cfa}[numbers=left]
     1020int * i = alloc( @5`fill@ );  sout | i | *i;
     1021i = alloc( @i`realloc@, @256`align@ );  sout | i | *i;
     1022i = alloc( @i`realloc@, @4096`align@, @13`fill@ );  sout | i | *i;
     1023\end{cfa}
     1024\begin{lstlisting}[numbers=left]
     10250x55555556d5c0 5
     10260x555555570000 5
     10270x555555571000 5
     1028\end{lstlisting}
      1029Examples 2 and 3 change the alignment for the initial storage of @i@.
     1030The @13`fill@ for example 3 does nothing because no extra space is added.
     1031
     1032\begin{cfa}[numbers=left]
     1033int * ia = alloc( 5, @5`fill@ );  for ( i; 5 ) sout | ia[i]; sout | nl;
     1034ia = alloc( 10, @ia`realloc@, @7`fill@ ); for ( i; 10 ) sout | ia[i]; sout | nl;
      1035sout | ia; ia = alloc( 1, @ia`realloc@, @512`align@, @13`fill@ ); sout | ia; for ( i; 1 ) sout | ia[i]; sout | nl;
     1036ia = alloc( 3, @ia`realloc@, @4096`align@, @2`fill@ );  sout | ia; for ( i; 3 ) sout | &ia[i] | ia[i]; sout | nl;
     1037\end{cfa}
     1038\begin{lstlisting}[numbers=left]
     10395 5 5 5 5
     10405 5 5 5 5 7 7 7 7 7
     10410x55555556c560 0x555555570a00 5
     10420x555555571000 0x555555571000 5 0x555555571004 2 0x555555571008 2
     1043\end{lstlisting}
     1044Examples 2 to 4 change the array size, alignment and fill for the initial storage of @ia@.
     1045The @13`fill@ for example 3 does nothing because no extra space is added.
     1046
     1047These \CFA allocation features are used extensively in the development of the \CFA runtime.
  • doc/theses/mubeen_zulfiqar_MMath/background.tex

    r374cb117 r2686bc7  
    757757Implementing lock-free operations for more complex data-structures (queue~\cite{Valois94}/deque~\cite{Sundell08}) is correspondingly more complex.
    758758Michael~\cite{Michael04} and Gidenstam \etal \cite{Gidenstam05} have created lock-free variations of the Hoard allocator.
     759
     760
     761\subsubsection{Speed Workload}
     762The workload method uses the opposite approach. It calls the allocator's routines for a specific amount of time and measures how much work was done during that time. Then, similar to the time method, it divides the elapsed time by the work done during that time and calculates the average time taken by the allocator's routine.
     763*** FIX ME: Insert a figure of above benchmark with description
     764
     765\paragraph{Knobs}
     766*** FIX ME: Insert Knobs
  • doc/theses/mubeen_zulfiqar_MMath/benchmarks.tex

    r374cb117 r2686bc7  
    11\chapter{Benchmarks}
    22
    3 \noindent
    4 ====================
    5 
    6 Writing Points:
     3%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
     4%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
     5%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Micro Benchmark Suite
     6%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
     7%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
     8
      9The aim of the micro benchmark suite is to create a set of programs that can evaluate a memory allocator based on the
      10performance metrics described in (FIX ME: local cite). These programs can be taken as a standard to benchmark an
      11allocator's basic goals. These programs give details of an allocator's memory overhead and speed under a certain
      12allocation pattern. The speed of the allocator is benchmarked in different ways. Similarly, false sharing happening in
      13an allocator is also measured in multiple ways. These benchmarks evaluate the allocator under a certain allocation
      14pattern, which is configurable and can be changed using a few knobs to observe an allocator's performance
      15under a desired allocation pattern.
     16
      17The micro benchmark suite benchmarks an allocator's performance by allocating dynamic objects and then measuring specific
      18metrics. The benchmark suite evaluates an allocator with a certain allocation pattern. Benchmarks have different knobs
      19that can be used to change the allocation pattern and evaluate an allocator under desired conditions. These can be set by
      20giving command-line arguments to the benchmark on execution.
     21
      22\section{Current Benchmarks} There are multiple benchmarks that are built individually and evaluate different aspects of
      23 a memory allocator. However, there is no single set of benchmarks that can be used to evaluate multiple aspects of memory
      24 allocators.
     25
     26\subsection{threadtest}(FIX ME: cite benchmark and hoard) Each thread repeatedly allocates and then deallocates 100,000
      27 objects. The runtime of the benchmark evaluates its efficiency.
     28
     29\subsection{shbench}(FIX ME: cite benchmark and hoard) Each thread allocates and randomly frees a number of random-sized
      30 objects. It is a stress test that also uses runtime to determine the efficiency of the allocator.
     31
      32\subsection{larson}(FIX ME: cite benchmark and hoard) Larson simulates a server environment. Multiple threads are
      33 created where each thread allocates and frees a number of objects within a size range. Some objects are passed from
      34 parent threads to child threads to be freed. It calculates memory operations per second as an indicator of the memory
      35 allocator's performance.
     36
      37\section{Memory Benchmark} The memory benchmark measures the memory overhead of an allocator. It allocates a number of dynamic
      38 objects. Then, by reading /proc/self/maps, it gets the total memory that the allocator has requested from the OS. It
      39 calculates the memory overhead by taking the difference between the memory the allocator has requested from the OS and the
      40 memory that the program has allocated.
     41
     42\begin{figure}
     43\centering
     44\includegraphics[width=1\textwidth]{figures/bench-memory.eps}
     45\caption{Benchmark Memory Overhead}
     46\label{fig:benchMemoryFig}
     47\end{figure}
     48
      49Figure \ref{fig:benchMemoryFig} gives the flow of the memory benchmark. It creates a producer-consumer scenario with K producers,
      50 where each producer has M consumers. A producer has a separate buffer for each consumer. The producer allocates N objects of
      51 random sizes following the given distribution for each consumer. The consumer frees those objects. After every memory
      52 operation, program memory usage is recorded throughout the runtime. This data can then be used to visualize the memory
      53 usage and consumption of the program.
     54
      55Different knobs can be adjusted to set a certain thread model.\\
      56-threadA :  sets the number of allocation threads (producers) for the memory benchmark\\
      57-consumeS:  sets the production and consumption round size\\
      58-threadF :  sets the number of free threads (consumers) for the memory benchmark
     59
     60Object allocation size can be changed using the knobs:\\
     61-maxS    :  sets max object size\\
     62-minS    :  sets min object size\\
     63-stepS   :  sets object size increment\\
     64-distroS :  sets object size distribution\\
     65-objN    :  sets number of objects per thread\\
     66
      67\section{Speed Benchmark} The speed benchmark measures the runtime speed of an allocator (FIX ME: cite allocator routines).
      68 It measures the runtime speed of individual memory allocation routines. It also considers different
      69 allocation chains to measure the performance of the allocator by combining multiple allocation routines in a chain.
      70 It uses the following chains and measures the allocator's runtime speed against them:
    771\begin{itemize}
    8 \item
    9 Performance matrices of memory allocation.
    10 \item
    11 Aim of micro benchmark suite.
    12 
    13 ----- SHOULD WE GIVE IMPLEMENTATION DETAILS HERE? -----
    14 
    15 \PAB{For the benchmarks, yes.}
    16 \item
    17 A complete list of benchmarks in micro benchmark suite.
    18 \item
    19 One detailed section for each benchmark in micro benchmark suite including:
    20 
    21 \begin{itemize}
    22 \item
    23 The introduction of the benchmark.
    24 \item
    25 Figure.
    26 \item
    27 Results with popular memory allocators.
     72\item malloc 0
     73\item free NULL
     74\item malloc
     75\item realloc
     76\item free
     77\item calloc
     78\item malloc-free
     79\item realloc-free
     80\item calloc-free
     81\item malloc-realloc
     82\item calloc-realloc
     83\item malloc-realloc-free
     84\item calloc-realloc-free
     85\item malloc-realloc-free-calloc
    2886\end{itemize}
    2987
    30 \item
    31 Summarize performance of current memory allocators.
    32 \end{itemize}
    33 
    34 \noindent
    35 ====================
    36 
    37 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
    38 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
    39 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Performance Metrics
    40 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
    41 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
    42 
    43 
    44 \section{Benchmarks}
    45 There are multiple benchmarks that are built individually and evaluate different aspects of a memory allocator. However, there is no standard set of benchmarks that can be used to evaluate multiple aspects of memory allocators.
    46 
    47 \paragraph{threadtest}
    48 (FIX ME: cite benchmark and hoard) Each thread repeatedly allocates and then deallocates 100,000 objects. Runtime of the benchmark evaluates its efficiency.
    49 
    50 \paragraph{shbench}
    51 (FIX ME: cite benchmark and hoard) Each thread allocates and randomly frees a number of random-sized objects. It is a stress test that also uses runtime to determine efficiency of the allocator.
    52 
    53 \paragraph{larson}
    54 (FIX ME: cite benchmark and hoard) Larson simulates a server environment. Multiple threads are created where each thread allocates and frees a number of objects within a size range. Some objects are passed from parent threads to child threads to be freed. It calculates memory operations per second as an indicator of the memory allocator's performance.
    55 
    56 
    57 \section{Performance Metrics of Memory Allocators}
    58 
    59 When it comes to memory allocators, there are no set standards of performance. The performance of a memory allocator depends highly on the usage pattern of the application. A memory allocator that is the best performer for a certain application X might be the worst for some other application with a completely different memory usage pattern. It is extremely difficult to make one universally best memory allocator that outperforms every other memory allocator for every usage pattern. Hence, there is a lack of a standard set of benchmarks used to evaluate a memory allocator's performance.
    60 
    61 If we break down the goals of a memory allocator, there are two basic metrics on which a memory allocator's performance is evaluated.
    62 \begin{enumerate}
    63 \item
    64 Memory Overhead
    65 \item
    66 Speed
    67 \end{enumerate}
    68 
    69 \subsection{Memory Overhead}
    70 Memory overhead is the extra memory that a memory allocator takes from the OS that is not requested by the application. Ideally, an allocator should get just enough memory from the OS to fulfill the application's request and should return this memory to the OS as soon as the application frees it. But allocators retain more memory than the application has asked for, which causes memory overhead. Memory overhead can happen for various reasons.
    71 
    72 \subsubsection{Fragmentation}
    73 Fragmentation is one of the major reasons behind memory overhead. Fragmentation happens because of situations that are either necessary for the proper functioning of the allocator, such as internal memory management and book-keeping, or are out of the allocator's control, such as the application's usage pattern.
    74 
    75 \paragraph{Internal Fragmentation}
    76 For internal book-keeping, allocators divide the raw memory given by the OS into chunks, blocks, or lists that can fulfill the application's requested size. Allocators use memory given by the OS to create headers, footers, etc. to store information about these chunks, blocks, or lists. This increases memory usage in addition to the memory requested by the application, as the allocators need to store their book-keeping information. This extra usage of memory for the allocator's own book-keeping is called internal fragmentation. Although it causes memory overhead, this overhead is necessary for an allocator's proper functioning.
    77 
    78 *** FIX ME: Insert a figure of internal fragmentation with explanation
    79 
    80 \paragraph{External Fragmentation}
    81 External fragmentation is the free bits of memory between or around chunks of memory that are currently in use by the application. Segmentation in memory due to the application's usage pattern causes external fragmentation. The memory which is part of external fragmentation is completely free, as it is neither used by the allocator's internal book-keeping nor by the application. Ideally, an allocator should return a segment of memory back to the OS as soon as the application frees it. But this is not always the case. Allocators get memory from the OS in one of two ways.
    82 
    83 \begin{itemize}
    84 \item
    85 MMap: an allocator can ask the OS for whole pages in the mmap area. Then, the allocator segments the page internally and fulfills the application's request.
    86 \item
    87 Heap: an allocator can ask the OS for memory in the heap area using system calls such as sbrk. The heap area grows and shrinks only at its end (the program break).
    88 \begin{itemize}
    89 \item
    90 If an allocator uses the mmap area, it can only return extra memory back to the OS if the whole page is free, i.e., no chunk on the page is in use by the application. Even if one chunk on the whole page is currently in use by the application, the allocator has to retain the whole page.
    91 \item
    92 If an allocator uses the heap area, it can only return the contiguous free memory at the end of the heap area that is currently in the allocator's possession, as the heap shrinks only at its end. If there are free bits of memory in between chunks of memory that are currently in use by the application, the allocator cannot return these free bits.
    93 
    94 *** FIX ME: Insert a figure of above scenario with explanation
    95 \item
    96 Even if the entire heap area is free except one small chunk at the end of the heap area that is being used by the application, the allocator cannot return the free heap area back to the OS, as it is not a contiguous region at the end of the heap area.
    97 
    98 *** FIX ME: Insert a figure of above scenario with explanation
    99 
    100 \item
    101 Such scenarios cause external fragmentation, but it is out of the allocator's control and depends on the application's usage pattern.
    102 \end{itemize}
    103 \end{itemize}
    104 
    105 \subsubsection{Internal Memory Management}
    106 Allocators such as je-malloc (FIX ME: insert reference) proactively get some memory from the OS and divide it into chunks of certain sizes that can be used in future to fulfill the application's requests. This causes memory overhead, as these chunks are made before the application's request. There is also the possibility that the application may never request memory of these sizes during its whole lifetime.
    107 
    108 *** FIX ME: Insert a figure of above scenario with explanation
    109 
    110 Allocators such as rp-malloc (FIX ME: insert reference) maintain lists or blocks of sized memory segments that are freed by the application for future use. These lists are maintained without any guarantee that the application will ever request these sizes again.
    111 
    112 Such tactics are usually used to gain speed, as the allocator does not have to get raw memory from the OS and manage it at the time of the application's request, but they do cause memory overhead.
    113 
    114 Fragmentation and managed sized chunks of free memory can lead to heap blowup, as the allocator may not be able to use the fragments or sized free chunks of memory to fulfill the application's requests of other sizes.
    115 
    116 \subsection{Speed}
    117 When it comes to performance evaluation of any piece of software, its runtime is usually the first thing that is evaluated. The same is true for memory allocators, but in the case of memory allocators, speed does not only mean the runtime of the allocator's routines; there are other factors too.
    118 
    119 \subsubsection{Runtime Speed}
    120 A low runtime is the main goal of a memory allocator when it comes to proving its speed. Runtime is the time it takes for a routine of the memory allocator to complete its execution. As mentioned in (FIX ME: reference to routines' list), there are four basic routines used in memory allocation. Ideally, each routine of a memory allocator should be fast. Some memory allocator designs use proactive measures (FIX ME: local reference) to gain speed when allocating memory to the application. Some memory allocators allocate memory faster than they free it (FIX ME: graph reference), while others show similar speed whether memory is allocated or freed.
    121 
    122 \subsubsection{Memory Access Speed}
    123 Runtime speed is not the only speed metric for memory allocators. The memory that an allocator has allocated to the application also needs to be accessible as quickly as possible. The application should be able to read/write allocated memory quickly. The allocation method of a memory allocator may introduce delays in memory access speed, which is especially important in concurrent applications. Ideally, a memory allocator should allocate all memory on a cache line to only one thread, and no cache line should be shared among multiple threads. If a memory allocator allocates memory to multiple threads on the same cache line, then the cache line may get invalidated more frequently when two different threads running on two different processors try to read/write the same memory region. On the other hand, if one cache line is used by only one thread, the cache line is invalidated less frequently. This sharing of one cache line among multiple threads is called false sharing (FIX ME: cite wasik).
    124 
    125 \paragraph{Active False Sharing}
    126 Active false sharing is the sharing of one cache line among multiple threads that is caused by the memory allocator. It happens when two threads request memory from the allocator and the allocator places both allocations on the same cache line. If the threads are then running on different processors, each with its own cache, and both start reading/writing the allocated memory simultaneously, their caches are invalidated every time the other thread writes to the memory. This slows down the application, as each processor has to reload the cache line much more frequently.
    127 
    128 *** FIX ME: Insert a figure of above scenario with explanation
    129 
    130 \paragraph{Passive False Sharing}
    131 Passive false sharing is false sharing caused by the application and not the memory allocator. The memory allocator may preserve passive false sharing instead of eradicating it, but passive false sharing is initiated by the application.
    132 
    133 \subparagraph{Program Induced Passive False Sharing}
    134 Program-induced false sharing is completely out of the memory allocator's control and is purely caused by the application. A thread in the application creates multiple objects in the dynamic area, and the allocator allocates memory for these objects on the same cache line because the objects are created by the same thread. Passive false sharing occurs if this thread passes one of these objects to another thread while retaining the rest, or passes some/all of the remaining objects to some third thread(s). Now one cache line is shared among multiple threads, but this is caused by the application and not the allocator. It is out of the allocator's control and has a similar performance impact to active false sharing (FIX ME: cite local) if the threads sharing the cache line start reading/writing the given objects simultaneously.
    135 
    136 *** FIX ME: Insert a figure of above scenario 1 with explanation
    137 
    138 *** FIX ME: Insert a figure of above scenario 2 with explanation
    139 
    140 \subparagraph{Program Induced Allocator Preserved Passive False Sharing}
    141 Program-induced allocator-preserved passive false sharing is another interesting case of passive false sharing, for which both the application and the allocator are partially responsible. It starts the same as program-induced passive false sharing (FIX ME: cite local): an application thread creates multiple dynamic objects on the same cache line and distributes these objects among multiple threads, causing one cache line to be shared among multiple threads. This kind of false sharing occurs when one of these threads, which received an object on the shared cache line, frees the passed object and then allocates another object, but the allocator returns the same storage (on the shared cache line) that this thread just freed. Although the application caused the false sharing in the first place, to prevent further false sharing the allocator should have returned the new object on some other cache line shared only by the allocating thread. As for the performance impact, this passive false sharing slows down the application just like any other kind of false sharing if the threads sharing the cache line start reading/writing the objects simultaneously.
    142 
    143 
    144 *** FIX ME: Insert a figure of above scenario with explanation
    145 
    146 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
    147 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
    148 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Micro Benchmark Suite
    149 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
    150 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
    151 
    152 \section{Micro Benchmark Suite}
    153 The aim of the micro-benchmark suite is to create a set of programs that can evaluate a memory allocator based on the performance metrics described in (FIX ME: local cite). These programs can be taken as a standard to benchmark an allocator's basic goals. These programs give details of an allocator's memory overhead and speed under a certain allocation pattern. The speed of the allocator is benchmarked in different ways; similarly, false sharing in an allocator is measured in multiple ways. These benchmarks evaluate the allocator under an allocation pattern that is configurable through a few knobs, so an allocator's performance can be observed under a desired allocation pattern.
    154 
    155 The micro-benchmark suite benchmarks an allocator's performance by allocating dynamic objects and then measuring specific metrics. The benchmark suite evaluates an allocator with a certain allocation pattern. Benchmarks have different knobs that can be used to change the allocation pattern and evaluate an allocator under desired conditions. These can be set by giving command-line arguments to the benchmark on execution.
    156 
    157 Following is the list of available knobs.
    158 
    159 *** FIX ME: Add knobs items after finalize
    160 
    161 \subsection{Memory Benchmark}
    162 The memory benchmark measures the memory overhead of an allocator. It allocates a number of dynamic objects and then, by reading /proc/self/maps, obtains the total memory that the allocator has requested from the OS. Finally, it calculates the memory overhead by taking the difference between the memory the allocator has requested from the OS and the memory that the program has allocated.
    163 *** FIX ME: Insert a figure of above benchmark with description
    164 
    165 \paragraph{Relevant Knobs}
    166 *** FIX ME: Insert Relevant Knobs
    167 
    168 \subsection{Speed Benchmark}
    169 The speed benchmark measures the runtime speed of an allocator's functions (FIX ME: cite allocator routines). It does so by measuring the runtime of allocator routines in two different ways.
    170 
    171 \subsubsection{Speed Time}
    172 The time method does a certain amount of work by calling each routine of the allocator (FIX ME: cite allocator routines) a specific number of times. It measures the total time it took to perform this workload and then divides that time by the workload to calculate the average time taken by the allocator's routine.
    173 *** FIX ME: Insert a figure of above benchmark with description
    174 
    175 \paragraph{Relevant Knobs}
    176 *** FIX ME: Insert Relevant Knobs
    177 
    178 \subsubsection{Speed Workload}
    179 The workload method uses the opposite approach. It calls the allocator's routines for a specific amount of time and measures how much work was done during that time. Then, similar to the time method, it divides the elapsed time by the workload done during that time to calculate the average time taken by the allocator's routine.
    180 *** FIX ME: Insert a figure of above benchmark with description
    181 
    182 \paragraph{Relevant Knobs}
    183 *** FIX ME: Insert Relevant Knobs
    184 
    185 \subsection{Cache Scratch}
    186 The cache-scratch benchmark measures program-induced allocator-preserved passive false sharing (FIX ME CITE) in an allocator. It does so in two ways.
    187 
    188 \subsubsection{Cache Scratch Time}
    189 Cache-scratch time allocates dynamic objects and then benchmarks program-induced allocator-preserved passive false sharing (FIX ME CITE) in an allocator by measuring the time it takes to read/write these objects.
    190 *** FIX ME: Insert a figure of above benchmark with description
    191 
    192 \paragraph{Relevant Knobs}
    193 *** FIX ME: Insert Relevant Knobs
    194 
    195 \subsubsection{Cache Scratch Layout}
    196 Cache-scratch layout also allocates dynamic objects, then benchmarks program-induced allocator-preserved passive false sharing (FIX ME CITE) by using the heap addresses returned by the allocator. It calculates how many objects allocated to different threads lie on the same cache line.
    197 *** FIX ME: Insert a figure of above benchmark with description
    198 
    199 \paragraph{Relevant Knobs}
    200 *** FIX ME: Insert Relevant Knobs
    201 
    202 \subsection{Cache Thrash}
    203 The cache-thrash benchmark measures allocator-induced active false sharing (FIX ME CITE) in an allocator. It also does so in two ways.
    204 
    205 \subsubsection{Cache Thrash Time}
    206 Cache-thrash time allocates dynamic objects and then benchmarks allocator-induced active false sharing (FIX ME CITE) in an allocator by measuring the time it takes to read/write these objects.
    207 *** FIX ME: Insert a figure of above benchmark with description
    208 
    209 \paragraph{Relevant Knobs}
    210 *** FIX ME: Insert Relevant Knobs
    211 
    212 \subsubsection{Cache Thrash Layout}
    213 Cache-thrash layout also allocates dynamic objects, then benchmarks allocator-induced active false sharing (FIX ME CITE) by using the heap addresses returned by the allocator. It calculates how many objects allocated to different threads lie on the same cache line.
    214 *** FIX ME: Insert a figure of above benchmark with description
    215 
    216 \paragraph{Relevant Knobs}
    217 *** FIX ME: Insert Relevant Knobs
     88\begin{figure}
     89\centering
     90\includegraphics[width=1\textwidth]{figures/bench-speed.eps}
     91\caption{Benchmark Speed}
     92\label{fig:benchSpeedFig}
     93\end{figure}
     94
      95As laid out in figure \ref{fig:benchSpeedFig}, each chain is measured separately. Each routine in the chain is called for N objects and then
      96 those allocated objects are used when calling the next routine in the allocation chain. This way we can measure the
      97 complete latency of the memory allocator when multiple routines are chained together, e.g., malloc-realloc-free-calloc gives
      98 the whole picture of the major allocation routines combined in a chain.
     99
      100For each chain, the time taken is recorded, which can then be used to visualize the performance of a memory allocator against
      101each chain.
     102
      103The number of worker threads can be adjusted using the command-line argument -threadN.
     104
      105\section{Churn Benchmark} The churn benchmark measures the overall runtime speed of an allocator in a multi-threaded
      106 scenario where each thread extensively allocates and frees dynamic memory.
     107
     108\begin{figure}
     109\centering
     110\includegraphics[width=1\textwidth]{figures/bench-churn.eps}
     111\caption{Benchmark Churn}
     112\label{fig:benchChurnFig}
     113\end{figure}
     114
      115Figure \ref{fig:benchChurnFig} illustrates the churn benchmark.
      116 This benchmark creates a buffer with M spots and starts K threads. Each thread randomly picks a
      117 spot out of the M spots, frees the object currently at that spot, and allocates a new object for that spot. Each thread
      118 repeats this cycle N times. The main thread measures the total time taken for the whole benchmark, and that time is
      119 used to evaluate the memory allocator's performance.
     120
      121Only malloc and free are used to allocate and free an object, to eliminate any extra cost such as the memcpy in realloc.
      122Malloc/free allows us to measure the latency of memory allocation only, without paying any extra cost. Churn simulates a
      123memory-intensive program that can be tuned to create different scenarios.
     124
     125Following commandline arguments can be used to tune the benchmark.\\
     126-threadN :  sets number of threads, K\\
     127-cSpots  :  sets number of spots for churn, M\\
     128-objN    :  sets number of objects per thread, N\\
     129-maxS    :  sets max object size\\
     130-minS    :  sets min object size\\
     131-stepS   :  sets object size increment\\
     132-distroS :  sets object size distribution
     133
      134\section{Cache Thrash}\label{sec:benchThrashSec} The cache-thrash benchmark measures allocator-induced active false sharing
      135 in an allocator, as illustrated in figure \ref{f:AllocatorInducedActiveFalseSharing}.
      136 If the memory allocator allocates memory for multiple threads on the
      137 same cache line, this can slow down program performance. If both threads sharing one cache line frequently
      138 read/write their object on the cache line concurrently, this causes a cache miss every time a thread accesses
      139 the object, as the other thread might have written to its memory location on the same cache line.
     140
     141\begin{figure}
     142\centering
     143\includegraphics[width=1\textwidth]{figures/bench-cache-thrash.eps}
     144\caption{Benchmark Allocator Induced Active False Sharing}
     145\label{fig:benchThrashFig}
     146\end{figure}
     147
      148Cache thrash tries to create a scenario that should lead to allocator-induced false sharing if the underlying memory
      149allocator is allocating dynamic memory to multiple threads on the same cache lines. Ideally, a memory allocator should
      150distance the dynamic memory region of one thread from those of other threads. Having multiple threads allocate small objects
      151simultaneously should cause the memory allocator to allocate objects for multiple threads on the same cache line if it is
      152not distancing the memory among different threads.
     153
      154Figure \ref{fig:benchThrashFig} lays out the flow of the cache-thrash benchmark.
      155 It creates K worker threads. Each worker thread allocates an object and intensively reads/writes
      156 it M times to invalidate cache lines frequently and slow down other threads that might be sharing the cache line
      157 with it. Each thread repeats this N times. The main thread measures the total time taken for all worker threads to
      158 complete. Worker threads sharing cache lines with each other will take longer.
     159
      160Different cache-access scenarios can be created using the following command-line arguments.\\
     161-threadN :  sets number of threads, K\\
     162-cacheIt :  iterations for cache benchmark, N\\
      163-cacheRep:  repetitions for cache benchmark, M\\
     164-cacheObj:  object size for cache benchmark
     165
      166\section{Cache Scratch} The cache-scratch benchmark measures allocator-induced passive false sharing in an allocator. An
      167 allocator can unintentionally induce false sharing depending upon its management of freed objects, as described in
      168 figure \ref{f:AllocatorInducedPassiveFalseSharing}. If a thread A allocates multiple objects together, they will
      169  possibly be allocated on the same cache line by the memory allocator. If thread A then passes one of these objects to another
      170  thread B, the two of them will share the same cache line, but this scenario is not induced by the allocator;
      171  instead, the program induced this situation. Now, if thread B frees this object and then
      172  allocates an object of the same size, the allocator may return the same object, which is on a cache line shared
      173  with thread A. Now this false sharing is caused by the memory allocator, although it was started by the
      174  program.
     175
     176\begin{figure}
     177\centering
     178\includegraphics[width=1\textwidth]{figures/bench-cache-scratch.eps}
     179\caption{Benchmark Program Induced Passive False Sharing}
     180\label{fig:benchScratchFig}
     181\end{figure}
     182
      183The cache-scratch main thread induces false sharing and creates a scenario that should make the memory allocator preserve the
      184 program-induced false sharing if it does not return a freed object to its owner thread and, instead, re-uses it
      185 instantly. An allocator using object ownership, as described in section \ref{s:Ownership}, is less susceptible to allocator-induced passive
      186 false sharing: if the object is returned to the thread that owns it (originally allocated it), then thread B will
      187 get a new object that is less likely to be on the same cache line as thread A's.
     188
      189As in figure \ref{fig:benchScratchFig}, cache scratch allocates K dynamic objects together, one for each of the K worker threads,
      190 possibly causing the memory allocator to allocate these objects on the same cache line. It then creates K worker threads and passes
      191 one of the K allocated objects to each of the K threads. Each worker thread frees the object passed by the main thread.
      192 Then it allocates an object and reads/writes it repeatedly M times, causing frequent cache invalidations. Each worker
      193 repeats this N times.
     194
      195Each thread allocating an object after freeing the original object passed by the main thread should cause the memory
      196allocator to return the same object that was initially allocated by the main thread, if the allocator did not return the
      197initial object back to its owner (the main thread). Then, intensive read/write on the shared cache line by multiple threads
      198should slow down the worker threads due to high cache invalidations and misses. The main thread measures the total time
      199taken for all the workers to complete.
     200
      201Similar to the cache-thrash benchmark in section \ref{sec:benchThrashSec}, different cache-access scenarios can be created using the following command-line arguments.\\
     202-threadN :  sets number of threads, K\\
     203-cacheIt :  iterations for cache benchmark, N\\
      204-cacheRep:  repetitions for cache benchmark, M\\
     205-cacheObj:  object size for cache benchmark
  • doc/theses/mubeen_zulfiqar_MMath/performance.tex

    r374cb117 r2686bc7  
    11\chapter{Performance}
    22\label{c:Performance}
    3 
    4 \noindent
    5 ====================
    6 
    7 Writing Points:
    8 \begin{itemize}
    9 \item
    10 Machine Specification
    11 \item
    12 Allocators and their details
    13 \item
    14 Benchmarks and their details
    15 \item
    16 Results
    17 \end{itemize}
    18 
    19 \noindent
    20 ====================
    213
    224\section{Machine Specification}
     
    257\begin{itemize}
    268\item
    27 AMD EPYC 7662, 64-core socket $\times$ 2, 2.0 GHz
     9{\bf Nasus} AMD EPYC 7662, 64-core socket $\times$ 2, 2.0 GHz, GCC version 9.3.0
    2810\item
    29 Huawei ARM TaiShan 2280 V2 Kunpeng 920, 24-core socket $\times$ 4, 2.6 GHz
    30 \item
    31 Intel Xeon Gold 5220R, 48-core socket $\times$ 2, 2.20GHz
     11{\bf Algol} Huawei ARM TaiShan 2280 V2 Kunpeng 920, 24-core socket $\times$ 4, 2.6 GHz, GCC version 9.4.0
    3212\end{itemize}
    3313
    3414
    35 \section{Existing Memory Allocators}
     15\section{Existing Memory Allocators}\label{sec:curAllocatorSec}
    3616With dynamic allocation being an important feature of C, there are many stand-alone memory allocators that have been designed for different purposes. For this thesis, we chose 7 of the most popular and widely used memory allocators.
    3717
    38 \paragraph{dlmalloc}
    39 dlmalloc (FIX ME: cite allocator) is a thread-safe allocator that is single threaded and single heap. dlmalloc maintains free-lists of different sizes to store freed dynamic memory. (FIX ME: cite wasik)
    40 
    41 \paragraph{hoard}
     18\subsection{dlmalloc}
      19dlmalloc (FIX ME: cite allocator with download link) is a thread-safe allocator that serializes access to a single heap. dlmalloc maintains free-lists of different sizes to store freed dynamic memory. (FIX ME: cite wasik)
     20\\
     21\\
     22{\bf Version:} 2.8.6\\
     23{\bf Configuration:} Compiled with pre-processor USE\_LOCKS.\\
     24{\bf Compilation command:}\\
     25cc -g3 -O3 -Wall -Wextra -fno-builtin-malloc -fno-builtin-calloc -fno-builtin-realloc -fno-builtin-free -fPIC -shared -DUSE\_LOCKS -o libdlmalloc.so malloc-2.8.6.c
     26
     27\subsection{hoard}
    4228Hoard (FIX ME: cite allocator) is a thread-safe allocator that is multi-threaded and using a heap layer framework. It has per-thread heaps that have thread-local free-lists, and a global shared heap. (FIX ME: cite wasik)
    43 
    44 \paragraph{jemalloc}
     29\\
     30\\
     31{\bf Version:} 3.13\\
     32{\bf Configuration:} Compiled with hoard's default configurations and Makefile.\\
     33{\bf Compilation command:}\\
     34make all
     35
     36\subsection{jemalloc}
    4537jemalloc (FIX ME: cite allocator) is a thread-safe allocator that uses multiple arenas. Each thread is assigned an arena. Each arena has chunks that contain contagious memory regions of same size. An arena has multiple chunks that contain regions of multiple sizes.
    46 
    47 \paragraph{ptmalloc}
    48 ptmalloc (FIX ME: cite allocator) is a modification of dlmalloc. It is a thread-safe multi-threaded memory allocator that uses multiple heaps. ptmalloc heap has similar design to dlmalloc's heap.
    49 
    50 \paragraph{rpmalloc}
     38\\
     39\\
     40{\bf Version:} 5.2.1\\
     41{\bf Configuration:} Compiled with jemalloc's default configurations and Makefile.\\
     42{\bf Compilation command:}\\
     43./autogen.sh\\
     44./configure\\
     45make\\
     46make install
     47
     48\subsection{pt3malloc}
      49pt3malloc (FIX ME: cite allocator) is a modification of dlmalloc. It is a thread-safe multi-threaded memory allocator that uses multiple heaps. pt3malloc's heap has a similar design to dlmalloc's heap.
     50\\
     51\\
     52{\bf Version:} 1.8\\
     53{\bf Configuration:} Compiled with pt3malloc's Makefile using option "linux-shared".\\
     54{\bf Compilation command:}\\
     55make linux-shared
     56
     57\subsection{rpmalloc}
    5158rpmalloc (FIX ME: cite allocator) is a thread-safe allocator that is multi-threaded and uses per-thread heaps. Each heap has multiple size-classes and each size-class contains memory regions of the relevant size.
    52 
    53 \paragraph{tbb malloc}
     59\\
     60\\
     61{\bf Version:} 1.4.1\\
     62{\bf Configuration:} Compiled with rpmalloc's default configurations and ninja build system.\\
     63{\bf Compilation command:}\\
     64python3 configure.py\\
     65ninja
     66
     67\subsection{tbb malloc}
    5468tbb malloc (FIX ME: cite allocator) is a thread-safe allocator that is multi-threaded and uses private heap for each thread. Each private-heap has multiple bins of different sizes. Each bin contains free regions of the same size.
    55 
    56 \paragraph{tc malloc}
    57 tcmalloc (FIX ME: cite allocator) is a thread-safe allocator. It uses per-thread cache to store free objects that prevents contention on shared resources in multi-threaded application. A central free-list is used to refill per-thread cache when it gets empty.
    58 
    59 
    60 \section{Memory Allocators}
    61 For these experiments, we used 7 memory allocators excluding our standalone memory allocator uHeapLmmm.
    62 
    63 \begin{tabularx}{0.8\textwidth} {
    64         | >{\raggedright\arraybackslash}X
    65         | >{\centering\arraybackslash}X
    66         | >{\raggedleft\arraybackslash}X |
    67 }
    68 \hline
    69 Memory Allocator & Version     & Configurations \\
    70 \hline
    71 dl               &             &  \\
    72 \hline
    73 hoard            &             &  \\
    74 \hline
    75 je               &             &  \\
    76 \hline
    77 pt3              &             &  \\
    78 \hline
    79 rp               &             &  \\
    80 \hline
    81 tbb              &             &  \\
    82 \hline
    83 tc               &             &  \\
    84 \end{tabularx}
    85 
    86 %(FIX ME: complete table)
     69\\
     70\\
     71{\bf Version:} intel tbb 2020 update 2, tbb\_interface\_version == 11102\\
     72{\bf Configuration:} Compiled with tbbmalloc's default configurations and Makefile.\\
     73{\bf Compilation command:}\\
     74make
    8775
    8876\section{Experiment Environment}
    89 We conducted these experiments ... (FIX ME: what machine and which specifications to add).
    90 
    91 We used our micro becnhmark suite (FIX ME: cite mbench) to evaluate other memory allocators (FIX ME: cite above memory allocators) and our uHeapLmmm.
      77We used our micro-benchmark suite (FIX ME: cite mbench) to evaluate the memory allocators of section \ref{sec:curAllocatorSec} and our own memory allocator uHeap (section \ref{sec:allocatorSec}).
    9278
    9379\section{Results}
     80FIX ME: add experiment, knobs, graphs, description+analysis
     81
     82%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
     83%% CHURN
     84%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
     85
     86\subsection{Churn Benchmark}
     87
      88The churn benchmark tested memory allocators for speed under intensive dynamic-memory usage.
     89
      92This experiment was run with the following configurations:
     91
     92-maxS            : 500
     93
     94-minS            : 50
     95
     96-stepS           : 50
     97
     98-distroS         : fisher
     99
     100-objN            : 100000
     101
     102-cSpots          : 16
     103
     104-threadN         : \{ 1, 2, 4, 8, 16 \} *
     105
      106* Each allocator was tested for its performance across different numbers of threads. The experiment was repeated for each allocator with 1, 2, 4, 8, and 16 threads by setting the configuration -threadN.
     107
      108Results are shown in figure \ref{fig:churn} for both algol and nasus.
      109The X-axis shows the number of threads; each allocator's performance for each thread count is shown in a different color.
      110The Y-axis shows the total time the experiment took to finish.
     111
     112\begin{figure}
     113\centering
     114    \subfigure[Algol]{ \includegraphics[width=0.9\textwidth]{evaluations/algol-perf-eps/churn} }
     115    \subfigure[Nasus]{ \includegraphics[width=0.9\textwidth]{evaluations/nasus-perf-eps/churn} }
     116\caption{Churn}
     117\label{fig:churn}
     118\end{figure}
     119
     120%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
     121%% THRASH
     122%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
     123
     124\subsection{Cache Thrash}
     125
      126The thrash benchmark tested memory allocators for active false sharing.
     127
      128This experiment was run with the following configurations:
     129
     130-cacheIt        : 1000
     131
     132-cacheRep       : 1000000
     133
     134-cacheObj       : 1
     135
     136-threadN        : \{ 1, 2, 4, 8, 16 \} *
     137
      138* Each allocator was tested for its performance across different numbers of threads. The experiment was repeated for each allocator with 1, 2, 4, 8, and 16 threads by setting the configuration -threadN.
     139
      140Results are shown in figure \ref{fig:cacheThrash} for both algol and nasus.
      141The X-axis shows the number of threads; each allocator's performance for each thread count is shown in a different color.
      142The Y-axis shows the total time the experiment took to finish.
     143
     144\begin{figure}
     145\centering
     146    \subfigure[Algol]{ \includegraphics[width=0.9\textwidth]{evaluations/algol-perf-eps/cache-time-0-thrash} }
     147    \subfigure[Nasus]{ \includegraphics[width=0.9\textwidth]{evaluations/nasus-perf-eps/cache-time-0-thrash} }
     148\caption{Cache Thrash}
     149\label{fig:cacheThrash}
     150\end{figure}
     151
     152%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
     153%% SCRATCH
     154%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
     155
     156\subsection{Cache Scratch}
     157
      158The scratch benchmark tested memory allocators for program-induced allocator-preserved passive false sharing.
     159
      160This experiment was run with the following configurations:
     161
     162-cacheIt        : 1000
     163
     164-cacheRep       : 1000000
     165
     166-cacheObj       : 1
     167
     168-threadN        : \{ 1, 2, 4, 8, 16 \} *
     169
      170* Each allocator was tested for its performance across different numbers of threads. The experiment was repeated for each allocator with 1, 2, 4, 8, and 16 threads by setting the configuration -threadN.
     171
      172Results are shown in figure \ref{fig:cacheScratch} for both algol and nasus.
      173The X-axis shows the number of threads; each allocator's performance for each thread count is shown in a different color.
      174The Y-axis shows the total time the experiment took to finish.
     175
     176\begin{figure}
     177\centering
     178    \subfigure[Algol]{ \includegraphics[width=0.9\textwidth]{evaluations/algol-perf-eps/cache-time-0-scratch} }
     179    \subfigure[Nasus]{ \includegraphics[width=0.9\textwidth]{evaluations/nasus-perf-eps/cache-time-0-scratch} }
     180\caption{Cache Scratch}
     181\label{fig:cacheScratch}
     182\end{figure}
     183
     184%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
     185%% SPEED
     186%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
     187
     188\subsection{Speed Benchmark}
     189
      190The speed benchmark tested memory allocators for the runtime speed of their memory-allocation routines.
     191
      192This experiment uses the following configuration knobs:
     193
     194-threadN :  sets number of threads, K\\
     195-cSpots  :  sets number of spots for churn, M\\
     196-objN    :  sets number of objects per thread, N\\
     197-maxS    :  sets max object size\\
     198-minS    :  sets min object size\\
     199-stepS   :  sets object size increment\\
     200-distroS :  sets object size distribution
     201
     202%speed-1-malloc-null.eps
     203\begin{figure}
     204\centering
     205\includegraphics[width=1\textwidth]{evaluations/nasus-perf-eps/speed-1-malloc-null}
     206\caption{speed-1-malloc-null}
     207\label{fig:speed-1-malloc-null}
     208\end{figure}
     209
     210%speed-2-free-null.eps
     211\begin{figure}
     212\centering
     213\includegraphics[width=1\textwidth]{evaluations/nasus-perf-eps/speed-2-free-null}
     214\caption{speed-2-free-null}
     215\label{fig:speed-2-free-null}
     216\end{figure}
     217
     218%speed-3-malloc.eps
     219\begin{figure}
     220\centering
     221\includegraphics[width=1\textwidth]{evaluations/nasus-perf-eps/speed-3-malloc}
     222\caption{speed-3-malloc}
     223\label{fig:speed-3-malloc}
     224\end{figure}
     225
     226%speed-4-realloc.eps
     227\begin{figure}
     228\centering
     229\includegraphics[width=1\textwidth]{evaluations/nasus-perf-eps/speed-4-realloc}
     230\caption{speed-4-realloc}
     231\label{fig:speed-4-realloc}
     232\end{figure}
     233
     234%speed-5-free.eps
     235\begin{figure}
     236\centering
     237\includegraphics[width=1\textwidth]{evaluations/nasus-perf-eps/speed-5-free}
     238\caption{speed-5-free}
     239\label{fig:speed-5-free}
     240\end{figure}
     241
     242%speed-6-calloc.eps
     243\begin{figure}
     244\centering
     245\includegraphics[width=1\textwidth]{evaluations/nasus-perf-eps/speed-6-calloc}
     246\caption{speed-6-calloc}
     247\label{fig:speed-6-calloc}
     248\end{figure}
     249
     250%speed-7-malloc-free.eps
     251\begin{figure}
     252\centering
     253\includegraphics[width=1\textwidth]{evaluations/nasus-perf-eps/speed-7-malloc-free}
     254\caption{speed-7-malloc-free}
     255\label{fig:speed-7-malloc-free}
     256\end{figure}
     257
     258%speed-8-realloc-free.eps
     259\begin{figure}
     260\centering
     261\includegraphics[width=1\textwidth]{evaluations/nasus-perf-eps/speed-8-realloc-free}
     262\caption{speed-8-realloc-free}
     263\label{fig:speed-8-realloc-free}
     264\end{figure}
     265
     266%speed-9-calloc-free.eps
     267\begin{figure}
     268\centering
     269\includegraphics[width=1\textwidth]{evaluations/nasus-perf-eps/speed-9-calloc-free}
     270\caption{speed-9-calloc-free}
     271\label{fig:speed-9-calloc-free}
     272\end{figure}
     273
     274%speed-10-malloc-realloc.eps
     275\begin{figure}
     276\centering
     277\includegraphics[width=1\textwidth]{evaluations/nasus-perf-eps/speed-10-malloc-realloc}
     278\caption{speed-10-malloc-realloc}
     279\label{fig:speed-10-malloc-realloc}
     280\end{figure}
     281
     282%speed-11-calloc-realloc.eps
     283\begin{figure}
     284\centering
     285\includegraphics[width=1\textwidth]{evaluations/nasus-perf-eps/speed-11-calloc-realloc}
     286\caption{speed-11-calloc-realloc}
     287\label{fig:speed-11-calloc-realloc}
     288\end{figure}
     289
     290%speed-12-malloc-realloc-free.eps
     291\begin{figure}
     292\centering
     293\includegraphics[width=1\textwidth]{evaluations/nasus-perf-eps/speed-12-malloc-realloc-free}
     294\caption{speed-12-malloc-realloc-free}
     295\label{fig:speed-12-malloc-realloc-free}
     296\end{figure}
     297
     298%speed-13-calloc-realloc-free.eps
     299\begin{figure}
     300\centering
     301\includegraphics[width=1\textwidth]{evaluations/nasus-perf-eps/speed-13-calloc-realloc-free}
     302\caption{speed-13-calloc-realloc-free}
     303\label{fig:speed-13-calloc-realloc-free}
     304\end{figure}
     305
     306%speed-14-{m,c,re}alloc-free.eps
     307\begin{figure}
     308\centering
\includegraphics[width=1\textwidth]{evaluations/nasus-perf-eps/speed-14-{m,c,re}alloc-free}
\caption{speed-14-\{m,c,re\}alloc-free}
\label{fig:speed-14-mcre-alloc-free}
     312\end{figure}
     313
     314%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
     315%% MEMORY
     316%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
    94317
    95318\subsection{Memory Benchmark}
     319
     320%mem-1-prod-1-cons-100-cfa.eps
     321\begin{figure}
     322\centering
     323\includegraphics[width=1\textwidth]{evaluations/nasus-perf-eps/mem-1-prod-1-cons-100-cfa}
     324\caption{mem-1-prod-1-cons-100-cfa}
     325\label{fig:mem-1-prod-1-cons-100-cfa}
     326\end{figure}
     327
     328%mem-1-prod-1-cons-100-dl.eps
     329\begin{figure}
     330\centering
     331\includegraphics[width=1\textwidth]{evaluations/nasus-perf-eps/mem-1-prod-1-cons-100-dl}
     332\caption{mem-1-prod-1-cons-100-dl}
     333\label{fig:mem-1-prod-1-cons-100-dl}
     334\end{figure}
     335
     336%mem-1-prod-1-cons-100-glc.eps
     337\begin{figure}
     338\centering
     339\includegraphics[width=1\textwidth]{evaluations/nasus-perf-eps/mem-1-prod-1-cons-100-glc}
     340\caption{mem-1-prod-1-cons-100-glc}
     341\label{fig:mem-1-prod-1-cons-100-glc}
     342\end{figure}
     343
     344%mem-1-prod-1-cons-100-hrd.eps
     345\begin{figure}
     346\centering
     347\includegraphics[width=1\textwidth]{evaluations/nasus-perf-eps/mem-1-prod-1-cons-100-hrd}
     348\caption{mem-1-prod-1-cons-100-hrd}
     349\label{fig:mem-1-prod-1-cons-100-hrd}
     350\end{figure}
     351
     352%mem-1-prod-1-cons-100-je.eps
     353\begin{figure}
     354\centering
     355\includegraphics[width=1\textwidth]{evaluations/nasus-perf-eps/mem-1-prod-1-cons-100-je}
     356\caption{mem-1-prod-1-cons-100-je}
     357\label{fig:mem-1-prod-1-cons-100-je}
     358\end{figure}
     359
     360%mem-1-prod-1-cons-100-pt3.eps
     361\begin{figure}
     362\centering
     363\includegraphics[width=1\textwidth]{evaluations/nasus-perf-eps/mem-1-prod-1-cons-100-pt3}
     364\caption{mem-1-prod-1-cons-100-pt3}
     365\label{fig:mem-1-prod-1-cons-100-pt3}
     366\end{figure}
     367
     368%mem-1-prod-1-cons-100-rp.eps
     369\begin{figure}
     370\centering
     371\includegraphics[width=1\textwidth]{evaluations/nasus-perf-eps/mem-1-prod-1-cons-100-rp}
     372\caption{mem-1-prod-1-cons-100-rp}
     373\label{fig:mem-1-prod-1-cons-100-rp}
     374\end{figure}
     375
     376%mem-1-prod-1-cons-100-tbb.eps
     377\begin{figure}
     378\centering
     379\includegraphics[width=1\textwidth]{evaluations/nasus-perf-eps/mem-1-prod-1-cons-100-tbb}
     380\caption{mem-1-prod-1-cons-100-tbb}
     381\label{fig:mem-1-prod-1-cons-100-tbb}
     382\end{figure}
     383
     384%mem-4-prod-4-cons-100-cfa.eps
     385\begin{figure}
     386\centering
     387\includegraphics[width=1\textwidth]{evaluations/nasus-perf-eps/mem-4-prod-4-cons-100-cfa}
     388\caption{mem-4-prod-4-cons-100-cfa}
     389\label{fig:mem-4-prod-4-cons-100-cfa}
     390\end{figure}
     391
     392%mem-4-prod-4-cons-100-dl.eps
     393\begin{figure}
     394\centering
     395\includegraphics[width=1\textwidth]{evaluations/nasus-perf-eps/mem-4-prod-4-cons-100-dl}
     396\caption{mem-4-prod-4-cons-100-dl}
     397\label{fig:mem-4-prod-4-cons-100-dl}
     398\end{figure}
     399
     400%mem-4-prod-4-cons-100-glc.eps
     401\begin{figure}
     402\centering
     403\includegraphics[width=1\textwidth]{evaluations/nasus-perf-eps/mem-4-prod-4-cons-100-glc}
     404\caption{mem-4-prod-4-cons-100-glc}
     405\label{fig:mem-4-prod-4-cons-100-glc}
     406\end{figure}
     407
     408%mem-4-prod-4-cons-100-hrd.eps
     409\begin{figure}
     410\centering
     411\includegraphics[width=1\textwidth]{evaluations/nasus-perf-eps/mem-4-prod-4-cons-100-hrd}
     412\caption{mem-4-prod-4-cons-100-hrd}
     413\label{fig:mem-4-prod-4-cons-100-hrd}
     414\end{figure}
     415
     416%mem-4-prod-4-cons-100-je.eps
     417\begin{figure}
     418\centering
     419\includegraphics[width=1\textwidth]{evaluations/nasus-perf-eps/mem-4-prod-4-cons-100-je}
     420\caption{mem-4-prod-4-cons-100-je}
     421\label{fig:mem-4-prod-4-cons-100-je}
     422\end{figure}
     423
     424%mem-4-prod-4-cons-100-pt3.eps
     425\begin{figure}
     426\centering
     427\includegraphics[width=1\textwidth]{evaluations/nasus-perf-eps/mem-4-prod-4-cons-100-pt3}
     428\caption{mem-4-prod-4-cons-100-pt3}
     429\label{fig:mem-4-prod-4-cons-100-pt3}
     430\end{figure}
     431
     432%mem-4-prod-4-cons-100-rp.eps
     433\begin{figure}
     434\centering
     435\includegraphics[width=1\textwidth]{evaluations/nasus-perf-eps/mem-4-prod-4-cons-100-rp}
     436\caption{mem-4-prod-4-cons-100-rp}
     437\label{fig:mem-4-prod-4-cons-100-rp}
     438\end{figure}
     439
     440%mem-4-prod-4-cons-100-tbb.eps
     441\begin{figure}
     442\centering
     443\includegraphics[width=1\textwidth]{evaluations/nasus-perf-eps/mem-4-prod-4-cons-100-tbb}
     444\caption{mem-4-prod-4-cons-100-tbb}
     445\label{fig:mem-4-prod-4-cons-100-tbb}
     446\end{figure}