Changeset 16d397a
- Timestamp: Apr 24, 2022, 10:53:10 PM (2 years ago)
- Branches: ADT, ast-experimental, master, pthread-emulation, qualifiedEnum
- Children: a6e8f64
- Parents: 4f7ad4b
- git-author: Peter A. Buhr <pabuhr@…> (04/24/22 22:50:11)
- git-committer: Peter A. Buhr <pabuhr@…> (04/24/22 22:53:10)
- Location: doc/theses/mubeen_zulfiqar_MMath
- Files: 7 deleted, 6 edited
Legend:
- Unmodified: shown indented
- Removed: prefixed with -
- Added: prefixed with +
doc/theses/mubeen_zulfiqar_MMath/allocator.tex
r4f7ad4b → r16d397a

Lines 651-656:
  The following API was created to provide interaction between the language runtime and the allocator.
  \begin{lstlisting}
- void startTask();   $\C{// KT starts}$
- void finishTask();  $\C{// KT ends}$
+ void startThread();   $\C{// KT starts}$
+ void finishThread();  $\C{// KT ends}$
  void startup();   $\C{// when application code starts}$
  void shutdown();  $\C{// when application code ends}$
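For readers of the changeset, the following is a minimal C sketch of where a threading runtime might call these hooks; only the four hook names come from the commit, while worker_main and the surrounding wiring are illustrative assumptions.

// Hypothetical wrapper showing where the allocator hooks could be invoked
// by a threading runtime; only the four declarations come from the changeset.
extern void startThread();    // KT starts
extern void finishThread();   // KT ends
extern void startup();        // when application code starts
extern void shutdown();       // when application code ends

static void * worker_main( void * arg ) {   // hypothetical kernel-thread body
	startThread();                           // tell the allocator a kernel thread exists
	// ... run user work, allocating and freeing as needed ...
	finishThread();                          // tell the allocator the kernel thread is ending
	return arg;
}

int main() {
	startup();                               // application (and allocator) initialization
	// ... create kernel threads that run worker_main ...
	shutdown();                              // application termination
}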
doc/theses/mubeen_zulfiqar_MMath/background.tex
r4f7ad4b → r16d397a

Lines 18-22:
  \end{comment}
- \chapter[Background]{Background\footnote{Part of this chapter draws from similar background work in~\cite{wasik.thesis} with many updates.}}
+ \chapter[Background]{Background\footnote{Part of this chapter draws from similar background work in~\cite{Wasik08} with many updates.}}

Lines 213-217:
  \paragraph{\newterm{Program-induced false-sharing}} occurs when one thread passes an object sharing a cache line to another thread, and both threads modify the respective objects.
- \VRef[Figure]{f:ProgramInducedFalseSharing} shows when Task$_1$ passes Object$_2$ to Task$_2$, a false-sharing situation forms when Task$_1$ modifies Object$_1$ and Task$_2$ modifies Object$_2$.
+ \VRef[Figure]{f:ProgramInducedFalseSharing} shows when Thread$_1$ passes Object$_2$ to Thread$_2$, a false-sharing situation forms when Thread$_1$ modifies Object$_1$ and Thread$_2$ modifies Object$_2$.
  Changes to Object$_1$ invalidate CPU$_2$'s cache line, and changes to Object$_2$ invalidate CPU$_1$'s cache line.

Lines 237-247:
  \paragraph{\newterm{Allocator-induced active false-sharing}} occurs when objects are allocated within the same cache line but to different threads.
- For example, in \VRef[Figure]{f:AllocatorInducedActiveFalseSharing}, each task allocates an object and loads a cache-line of memory into its associated cache.
+ For example, in \VRef[Figure]{f:AllocatorInducedActiveFalseSharing}, each thread allocates an object and loads a cache-line of memory into its associated cache.
  Again, changes to Object$_1$ invalidate CPU$_2$'s cache line, and changes to Object$_2$ invalidate CPU$_1$'s cache line.

  \paragraph{\newterm{Allocator-induced passive false-sharing}} is another form of allocator-induced false-sharing caused by program-induced false-sharing.
  When an object in a program-induced false-sharing situation is deallocated, a future allocation of that object may cause passive false-sharing.
- For example, in \VRef[Figure]{f:AllocatorInducedPassiveFalseSharing}, Task$_1$ passes Object$_2$ to Task$_2$, and Task$_2$ subsequently deallocates Object$_2$.
- Allocator-induced passive false-sharing occurs when Object$_2$ is reallocated to Task$_2$ while Task$_1$ is still using Object$_1$.
+ For example, in \VRef[Figure]{f:AllocatorInducedPassiveFalseSharing}, Thread$_1$ passes Object$_2$ to Thread$_2$, and Thread$_2$ subsequently deallocates Object$_2$.
+ Allocator-induced passive false-sharing occurs when Object$_2$ is reallocated to Thread$_2$ while Thread$_1$ is still using Object$_1$.
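These hunks only rename Task to Thread, so the underlying scenario may be easier to see in code. The following hedged C sketch (not from the thesis; shared, worker, and the sizes are illustrative) shows the program-induced case: one thread allocates two adjacent objects that fit in a single cache line, passes one to a second thread, and each thread's writes invalidate the other CPU's copy of the line.

#include <pthread.h>
#include <stdlib.h>

// Illustrative only: two 8-byte objects carved from one allocation almost
// certainly share a 64-byte cache line, so writes by different threads to
// different objects still ping-pong the line between CPUs (false sharing).
typedef struct { long counter; } Object;

static Object * shared;                       // shared[0] and shared[1] share a cache line

static void * worker( void * arg ) {
	Object * obj = arg;
	for ( int i = 0; i < 1000000; i += 1 ) {
		obj->counter += 1;                    // each write invalidates the other CPU's copy
	}
	return NULL;
}

int main() {
	shared = malloc( 2 * sizeof( Object ) );  // adjacent objects, same cache line
	pthread_t t1, t2;
	pthread_create( &t1, NULL, worker, &shared[0] );   // Thread_1 keeps Object_1
	pthread_create( &t2, NULL, worker, &shared[1] );   // Object_2 is passed to Thread_2
	pthread_join( t1, NULL );
	pthread_join( t2, NULL );
	free( shared );
}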
Lines 461-465:
  Ownership prevents the classical problem where one thread performs allocations from one heap, passes the object to another thread, and the receiving thread deallocates the object to another heap, hence draining the initial heap of storage.
  As well, allocator-induced passive false-sharing is eliminated because returning an object to its owner heap means it can never be allocated to another thread.
- For example, in \VRef[Figure]{f:AllocatorInducedPassiveFalseSharing}, the deallocation by Task$_2$ returns Object$_2$ back to Task$_1$'s heap;
- hence a subsequent allocation by Task$_2$ cannot return this storage.
- The disadvantage of ownership is deallocating to another task's heap so heaps are no longer private and require locks to provide safe concurrent access.
+ For example, in \VRef[Figure]{f:AllocatorInducedPassiveFalseSharing}, the deallocation by Thread$_2$ returns Object$_2$ back to Thread$_1$'s heap;
+ hence a subsequent allocation by Thread$_2$ cannot return this storage.
+ The disadvantage of ownership is deallocating to another thread's heap so heaps are no longer private and require locks to provide safe concurrent access.

Lines 467-475:
  Object ownership can be immediate or delayed, meaning free objects may be batched on a separate free list either by the returning or receiving thread.
  It is possible for heaps to steal objects rather than return them and reallocating these objects when storage runs out on a heap.
  However, stealing can result in passive false-sharing.
- For example, in \VRef[Figure]{f:AllocatorInducedPassiveFalseSharing}, Object$_2$ may be deallocated to Task$_2$'s heap initially.
- If Task$_2$ reallocates Object$_2$ before it is returned to its owner heap, then passive false-sharing may occur.
+ For example, in \VRef[Figure]{f:AllocatorInducedPassiveFalseSharing}, Object$_2$ may be deallocated to Thread$_2$'s heap initially.
+ If Thread$_2$ reallocates Object$_2$ before it is returned to its owner heap, then passive false-sharing may occur.

Lines 565-571:
  Additional restrictions may be applied to the movement of containers to prevent active false-sharing.
- For example, in \VRef[Figure]{f:ContainerFalseSharing1}, a container being used by Task$_1$ changes ownership, through the global heap.
- In \VRef[Figure]{f:ContainerFalseSharing2}, when Task$_2$ allocates an object from the newly acquired container it is actively false-sharing even though no objects are passed among threads.
- Note, once the object is freed by Task$_1$, no more false sharing can occur until the container changes ownership again.
+ For example, in \VRef[Figure]{f:ContainerFalseSharing1}, a container being used by Thread$_1$ changes ownership, through the global heap.
+ In \VRef[Figure]{f:ContainerFalseSharing2}, when Thread$_2$ allocates an object from the newly acquired container it is actively false-sharing even though no objects are passed among threads.
+ Note, once the object is freed by Thread$_1$, no more false sharing can occur until the container changes ownership again.
  To prevent this form of false sharing, container movement may be restricted to when all objects in the container are free.
  One implementation approach that increases the freedom to return a free container to the operating system involves allocating containers using a call like @mmap@, which allows memory at an arbitrary address to be returned versus only storage at the end of the contiguous @sbrk@ area, again pushing storage management complexity back to the operating system.
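The ownership rule described above (every object goes back to the heap that allocated it, at the cost of a lock on remote frees) can be sketched directly. The C fragment below is an assumption-laden illustration, not the thesis allocator: the Header layout, the my_heap thread-local pointer, and ownership_free are hypothetical names.

#include <pthread.h>
#include <stddef.h>

// Hypothetical per-object header and heap layout, only for illustrating
// ownership: an object is always returned to the heap that allocated it.
typedef struct Heap Heap;
typedef struct Header { Heap * owner; struct Header * next; } Header;

struct Heap {
	Header * private_free;            // touched only by the owning thread, no lock
	Header * public_free;             // other threads may push remote frees here
	pthread_mutex_t public_lock;      // cost of ownership: remote frees need a lock
};

static __thread Heap * my_heap;       // assumed thread-local heap pointer (GCC/Clang extension)

void ownership_free( void * addr ) {
	Header * h = (Header *)addr - 1;             // header assumed to precede the object
	if ( h->owner == my_heap ) {                 // local free: lock-free fast path
		h->next = my_heap->private_free;
		my_heap->private_free = h;
	} else {                                     // remote free: return to the owner's public heap
		pthread_mutex_lock( &h->owner->public_lock );
		h->next = h->owner->public_free;
		h->owner->public_free = h;
		pthread_mutex_unlock( &h->owner->public_lock );
	}
}

Because the remote branch always returns storage to its owner, a later allocation by the freeing thread can never hand it back, which is how ownership eliminates allocator-induced passive false-sharing.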
Lines 683-690:
  For thread heaps with ownership, it is possible to combine these approaches into a hybrid approach with both private and public heaps (see~\VRef[Figure]{f:HybridPrivatePublicHeap}).
  The main goal of the hybrid approach is to eliminate locking on thread-local allocation/deallocation, while providing ownership to prevent heap blowup.
- In the hybrid approach, a task first allocates from its private heap and second from its public heap if no free memory exists in the private heap.
- Similarly, a task first deallocates an object its private heap, and second to the public heap.
+ In the hybrid approach, a thread first allocates from its private heap and second from its public heap if no free memory exists in the private heap.
+ Similarly, a thread first deallocates an object its private heap, and second to the public heap.
  Both private and public heaps can allocate/deallocate to/from the global heap if there is no free memory or excess free memory, although an implementation may choose to funnel all interaction with the global heap through one of the heaps.
  Note, deallocation from the private to the public (dashed line) is unlikely because there is no obvious advantages unless the public heap provides the only interface to the global heap.
- Finally, when a task frees an object it does not own, the object is either freed immediately to its owner's public heap or put in the freeing task's private heap for delayed ownership, which allows the freeing task to temporarily reuse an object before returning it to its owner or batch objects for an owner heap into a single return.
+ Finally, when a thread frees an object it does not own, the object is either freed immediately to its owner's public heap or put in the freeing thread's private heap for delayed ownership, which allows the freeing thread to temporarily reuse an object before returning it to its owner or batch objects for an owner heap into a single return.

Lines 746-750:
  \label{s:LockFreeOperations}
- A \newterm{lock-free algorithm} guarantees safe concurrent-access to a data structure, so that at least one thread makes progress, but an individual task has no execution bound and may starve~\cite[pp.~745--746]{Herlihy93}.
+ A \newterm{lock-free algorithm} guarantees safe concurrent-access to a data structure, so that at least one thread makes progress, but an individual thread has no execution bound and may starve~\cite[pp.~745--746]{Herlihy93}.
  (A \newterm{wait-free algorithm} puts a bound on the number of steps any thread takes to complete an operation to prevent starvation.)
  Lock-free operations can be used in an allocator to reduce or eliminate the use of locks.
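The lock-free progress guarantee described in the hunk above is easiest to see on a free-list push. The following C11 sketch is illustrative only (FreeList, freelist_push, and the node layout are assumed names, not the thesis allocator); it shows why at least one thread always makes progress while any individual thread may retry indefinitely.

#include <stdatomic.h>

// Minimal sketch of a lock-free free-list push over a singly linked list.
typedef struct FreeNode { struct FreeNode * next; } FreeNode;
typedef struct { _Atomic(FreeNode *) top; } FreeList;

// If the compare-and-swap fails, some other thread's push succeeded, so the
// system as a whole progressed; this thread simply retries with the new top.
void freelist_push( FreeList * list, FreeNode * node ) {
	FreeNode * old = atomic_load_explicit( &list->top, memory_order_relaxed );
	do {
		node->next = old;                     // link node in front of the observed top
	} while ( ! atomic_compare_exchange_weak_explicit(
				&list->top, &old, node,
				memory_order_release, memory_order_relaxed ) );
}

// Note: a matching lock-free pop needs extra care (the ABA problem), e.g. a
// version counter or hazard pointers, and is deliberately omitted here.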
Lines 761-766 (section commented out):
- \subsubsection{Speed Workload}
- The worload method uses the opposite approach. It calls the allocator's routines for a specific amount of time and measures how much work was done during that time. Then, similar to the time method, it divides the time by the workload done during that time and calculates the average time taken by the allocator's routine.
- *** FIX ME: Insert a figure of above benchmark with description
-
- \paragraph{Knobs}
- *** FIX ME: Insert Knobs
+ % \subsubsection{Speed Workload}
+ % The worload method uses the opposite approach. It calls the allocator's routines for a specific amount of time and measures how much work was done during that time. Then, similar to the time method, it divides the time by the workload done during that time and calculates the average time taken by the allocator's routine.
+ % *** FIX ME: Insert a figure of above benchmark with description
+ %
+ % \paragraph{Knobs}
+ % *** FIX ME: Insert Knobs
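Although the commit comments this section out, the "speed workload" idea it describes (run for a fixed time, count completed operations, derive time per operation) is simple to picture. The C sketch below is a generic illustration under assumed parameters (a 1-second duration and 64-byte requests); it is not the thesis benchmark.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

// Fixed-duration throughput benchmark: count malloc/free pairs completed in
// a wall-clock interval, then report the average time per pair.
int main() {
	const double duration = 1.0;                 // seconds to run (assumed knob)
	size_t ops = 0;
	struct timespec start, now;
	clock_gettime( CLOCK_MONOTONIC, &start );
	do {
		void * p = malloc( 64 );                 // representative request size (assumption)
		free( p );
		ops += 1;
		clock_gettime( CLOCK_MONOTONIC, &now );
	} while ( (now.tv_sec - start.tv_sec) + (now.tv_nsec - start.tv_nsec) * 1e-9 < duration );
	printf( "%zu malloc/free pairs in %.1f s, %.1f ns per pair\n",
	        ops, duration, duration * 1e9 / ops );
}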
doc/theses/mubeen_zulfiqar_MMath/figures/AllocInducedActiveFalseSharing.fig
r4f7ad4b → r16d397a: figure source regenerated with xfig 3.2.7b. All elements are shifted to new coordinates and text extents are recomputed; the labels Task$_1$ and Task$_2$ are renamed to Thread$_1$ and Thread$_2$, while the remaining labels (CPU$_1$/CPU$_2$, Cache, Object$_1$/Object$_2$, Memory, and the numbered alloc/modify steps) are unchanged apart from repositioning.
doc/theses/mubeen_zulfiqar_MMath/figures/AllocInducedPassiveFalseSharing.fig
r4f7ad4b → r16d397a: figure source regenerated with xfig 3.2.7b; coordinates are shifted and Task$_1$/Task$_2$ are renamed to Thread$_1$/Thread$_2$, with the other labels (CPU, Cache, Object, Memory, the numbered alloc/modify/dealloc steps, and the "2. pass Object$_2$ reference" annotation) kept at updated positions.
doc/theses/mubeen_zulfiqar_MMath/figures/ProgramFalseSharing.fig
r4f7ad4b → r16d397a: figure source regenerated with xfig 3.2.7b; two annotation boxes are repositioned, the labels Task$_1$/Task$_2$ are renamed to Thread$_1$/Thread$_2$, and the text extents of the remaining labels are recomputed.
doc/theses/mubeen_zulfiqar_MMath/pictures/PrivatePublicHeaps.fig
r4f7ad4b → r16d397a: figure source regenerated with xfig 3.2.7b and relaid out. The Task$_1$/Task$_2$ labels (previously composed with separate superscript digits) are replaced by Thread$_1$ and Thread$_2$, the heap boxes are redrawn and labeled Public Heap$_1$, Private Heap$_1$, Public Heap$_2$, and Private Heap$_2$, and the Global Heap box plus the alloc/dealloc/ownership/locking arrows and labels are repositioned accordingly.