Changeset 8a930c03 for doc


Timestamp:
Jun 12, 2023, 12:05:58 PM (3 years ago)
Author:
JiadaL <j82liang@…>
Branches:
master, stuck-waitfor-destruct
Children:
fec8bd1
Parents:
2b78949 (diff), 38e266ca (diff)
Note: this is a merge changeset, the changes displayed below correspond to the merge itself.
Use the (diff) links above to see all the changes relative to each parent.
Message:

Merge branch 'master' of plg.uwaterloo.ca:software/cfa/cfa-cc

Location:
doc
Files:
24 added
1 deleted
22 edited

  • doc/bibliography/pl.bib

    r2b78949 r8a930c03  
    12091209    year        = 2018,
    12101210    pages       = {2111-2146},
    1211     note        = {\href{http://dx.doi.org/10.1002/spe.2624}{http://\-dx.doi.org/\-10.1002/\-spe.2624}},
     1211    optnote     = {\href{http://dx.doi.org/10.1002/spe.2624}{http://\-dx.doi.org/\-10.1002/\-spe.2624}},
    12121212}
    12131213
     
    18701870    month       = sep,
    18711871    year        = 2020,
    1872     note        = {\href{https://plg.uwaterloo.ca/~usystem/pub/uSystem/uC++.pdf}{https://\-plg.uwaterloo.ca/\-$\sim$usystem/\-pub/\-uSystem/uC++.pdf}},
     1872    note        = {\url{https://plg.uwaterloo.ca/~usystem/pub/uSystem/uC++.pdf}},
    18731873}
    18741874
     
    20042004    number      = 5,
    20052005    pages       = {1005-1042},
    2006     note        = {\href{https://onlinelibrary.wiley.com/doi/10.1002/spe.2925}{https://\-onlinelibrary.wiley.com/\-doi/\-10.1002/\-spe.2925}},
     2006    optnote     = {\href{https://onlinelibrary.wiley.com/doi/10.1002/spe.2925}{https://\-onlinelibrary.wiley.com/\-doi/\-10.1002/\-spe.2925}},
    20072007}
    20082008
     
    42234223    title       = {Implementing Lock-Free Queues},
    42244224    booktitle   = {Seventh International Conference on Parallel and Distributed Computing Systems},
     4225    organization= {International Society for Computers and Their Applications},
    42254226    address     = {Las Vegas, Nevada, U.S.A.},
    42264227    year        = {1994},
     
    50865087}
    50875088
    5088 @manual{MMTk,
     5089@misc{MMTk,
    50895090    keywords    = {Java memory management},
    50905091    contributer = {pabuhr@plg},
     
    50935094    month       = sep,
    50945095    year        = 2006,
    5095     note        = {\href{http://cs.anu.edu.au/~Robin.Garner/mmtk-guide.pdf}
    5096                   {http://cs.anu.edu.au/\-$\sim$Robin.Garner/\-mmtk-guide.pdf}},
     5096    howpublished= {\url{http://cs.anu.edu.au/~Robin.Garner/mmtk-guide.pdf}},
    50975097}
    50985098
     
    74027402}
    74037403
     7404@misc{rpmalloc,
     7405    author      = {Mattias Jansson},
     7406    title       = {rpmalloc version 1.4.1},
     7407    month       = apr,
     7408    year        = 2022,
     7409    howpublished= {\href{https://github.com/mjansson/rpmalloc}{https://\-github.com/\-mjansson/\-rpmalloc}},
     7410}
     7411
    74047412@manual{Rust,
    74057413    keywords    = {Rust programming language},
     
    74567464    booktitle   = {PLDI '04: Proceedings of the ACM SIGPLAN 2004 Conference on Programming Language Design and Implementation},
    74577465    location    = {Washington DC, USA},
    7458     publisher   = {ACM},
     7466    organization= {ACM},
    74597467    address     = {New York, NY, USA},
    74607468    volume      = 39,
  • doc/papers/llheap/Paper.tex

    r2b78949 r8a930c03  
    252252Dynamic code/data memory is managed by the dynamic loader for libraries loaded at runtime, which is complex especially in a multi-threaded program~\cite{Huang06}.
    253253However, changes to the dynamic code/data space are typically infrequent, many occurring at program startup, and are largely outside of a program's control.
    254 Stack memory is managed by the program call/return-mechanism using a simple LIFO technique, which works well for sequential programs.
    255 For stackful coroutines and user threads, a new stack is commonly created in dynamic-allocation memory.
     254Stack memory is managed by the program call/return-mechanism using a LIFO technique, which works well for sequential programs.
     255For stackful coroutines and user threads, a new stack is commonly created in the dynamic-allocation memory.
    256256This work focuses solely on management of the dynamic-allocation memory.
    257257
     
    293293\begin{enumerate}[leftmargin=*,itemsep=0pt]
    294294\item
    295 Implementation of a new stand-alone concurrent low-latency memory-allocator ($\approx$1,200 lines of code) for C/\CC programs using kernel threads (1:1 threading), and specialized versions of the allocator for the programming languages \uC and \CFA using user-level threads running on multiple kernel threads (M:N threading).
    296 
    297 \item
    298 Extend the standard C heap functionality by preserving with each allocation: its request size plus the amount allocated, whether an allocation is zero fill, and allocation alignment.
     295Implementation of a new stand-alone concurrent low-latency memory-allocator ($\approx$1,200 lines of code) for C/\CC programs using kernel threads (1:1 threading), and specialized versions of the allocator for the programming languages \uC~\cite{uC++} and \CFA~\cite{Moss18,Delisle21} using user-level threads running on multiple kernel threads (M:N threading).
     296
     297\item
     298Extend the standard C heap functionality by preserving with each allocation: its request size plus the amount allocated, whether an allocation is zero fill, and/or its allocation alignment.
    299299
    300300\item
     
    365365
    366366The following discussion is a quick overview of the moving-pieces that affect the design of a memory allocator and its performance.
    367 It is assumed that dynamic allocates and deallocates acquire storage for a program variable, referred to as an \newterm{object}, through calls such as @malloc@ and @free@ in C, and @new@ and @delete@ in \CC.
     367Dynamic acquires and releases of storage for a program variable, called an \newterm{object}, occur through calls such as @malloc@ and @free@ in C, and @new@ and @delete@ in \CC.
    368368Space for each allocated object comes from the dynamic-allocation zone.
    369369
     
    378378
    379379Figure~\ref{f:AllocatorComponents} shows the two important data components for a memory allocator, management and storage, collectively called the \newterm{heap}.
    380 The \newterm{management data} is a data structure located at a known memory address and contains all information necessary to manage the storage data.
    381 The management data starts with fixed-sized information in the static-data memory that references components in the dynamic-allocation memory.
     380The \newterm{management data} is a data structure located at a known memory address and contains fixed-sized information in the static-data memory that references components in the dynamic-allocation memory.
    382381For multi-threaded programs, additional management data may exist in \newterm{thread-local storage} (TLS) for each kernel thread executing the program.
    383382The \newterm{storage data} is composed of allocated and freed objects, and \newterm{reserved memory}.
     
    385384\ie only the program knows the location of allocated storage not the memory allocator.
    386385Freed objects (white) represent memory deallocated by the program, which are linked into one or more lists facilitating easy location of new allocations.
    387 Reserved memory (dark grey) is one or more blocks of memory obtained from the operating system but not yet allocated to the program;
    388 if there are multiple reserved blocks, they are also chained together, usually internally.
     386Reserved memory (dark grey) is one or more blocks of memory obtained from the \newterm{operating system} (OS) but not yet allocated to the program;
     387if there are multiple reserved blocks, they are also chained together.
    389388
    390389\begin{figure}
     
    395394\end{figure}
    396395
    397 In most allocator designs, allocated objects have management data embedded within them.
     396In many allocator designs, allocated objects and reserved blocks have management data embedded within them (see also Section~\ref{s:ObjectContainers}).
    398397Figure~\ref{f:AllocatedObject} shows an allocated object with a header, trailer, and optional spacing around the object.
    399398The header contains information about the object, \eg size, type, etc.
     
    404403When padding and spacing are necessary, neither can be used to satisfy a future allocation request while the current allocation exists.
    405404
    406 A free object also contains management data, \eg size, pointers, etc.
     405A free object often contains management data, \eg size, pointers, etc.
    407406Often the free list is chained internally so it does not consume additional storage, \ie the link fields are placed at known locations in the unused memory blocks.
    408407For internal chaining, the amount of management data for a free node defines the minimum allocation size, \eg if 16 bytes are needed for a free-list node, allocation requests less than 16 bytes are rounded up.
    409 The information in an allocated or freed object is overwritten when it transitions from allocated to freed and vice-versa by new management information and/or program data.
     408The information in an allocated or freed object is overwritten when it transitions from allocated to freed and vice-versa by new program data and/or management information.
    410409
    411410\begin{figure}
     
    428427\label{s:Fragmentation}
    429428
    430 Fragmentation is memory requested from the operating system but not used by the program;
     429Fragmentation is memory requested from the OS but not used by the program;
    431430hence, allocated objects are not fragmentation.
    432431Figure~\ref{f:InternalExternalFragmentation} shows fragmentation is divided into two forms: internal or external.
     
    443442An allocator should strive to keep internal management information to a minimum.
    444443
    445 \newterm{External fragmentation} is all memory space reserved from the operating system but not allocated to the program~\cite{Wilson95,Lim98,Siebert00}, which includes all external management data, freed objects, and reserved memory.
     444\newterm{External fragmentation} is all memory space reserved from the OS but not allocated to the program~\cite{Wilson95,Lim98,Siebert00}, which includes all external management data, freed objects, and reserved memory.
    446445This memory is problematic in two ways: heap blowup and highly fragmented memory.
    447446\newterm{Heap blowup} occurs when freed memory cannot be reused for future allocations leading to potentially unbounded external fragmentation growth~\cite{Berger00}.
    448 Memory can become \newterm{highly fragmented} after multiple allocations and deallocations of objects, resulting in a checkerboard of adjacent allocated and free areas, where the free blocks have become very small.
     447Memory can become \newterm{highly fragmented} after multiple allocations and deallocations of objects, resulting in a checkerboard of adjacent allocated and free areas, where the free blocks have become too small to service requests.
    449448% Figure~\ref{f:MemoryFragmentation} shows an example of how a small block of memory fragments as objects are allocated and deallocated over time.
    450449Heap blowup can occur due to allocator policies that are too restrictive in reusing freed memory (the allocated size cannot use a larger free block) and/or no coalescing of free storage.
     
    452451% Memory is highly fragmented when most free blocks are unusable because of their sizes.
    453452% For example, Figure~\ref{f:Contiguous} and Figure~\ref{f:HighlyFragmented} have the same quantity of external fragmentation, but Figure~\ref{f:HighlyFragmented} is highly fragmented.
    454 % If there is a request to allocate a large object, Figure~\ref{f:Contiguous} is more likely to be able to satisfy it with existing free memory, while Figure~\ref{f:HighlyFragmented} likely has to request more memory from the operating system.
     453% If there is a request to allocate a large object, Figure~\ref{f:Contiguous} is more likely to be able to satisfy it with existing free memory, while Figure~\ref{f:HighlyFragmented} likely has to request more memory from the OS.
    455454
    456455% \begin{figure}
     
    475474The first approach is a \newterm{sequential-fit algorithm} with one list of free objects that is searched for a block large enough to fit a requested object size.
    476475Different search policies determine the free object selected, \eg the first free object large enough or closest to the requested size.
    477 Any storage larger than the request can become spacing after the object or be split into a smaller free object.
     476Any storage larger than the request can become spacing after the object or split into a smaller free object.
    478477% The cost of the search depends on the shape and quality of the free list, \eg a linear versus a binary-tree free-list, a sorted versus unsorted free-list.
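
As a concrete illustration of the sequential-fit approach above, the following is a minimal first-fit sketch over a singly linked free list that splits any worthwhile excess into a smaller free object; the node layout and names (free_node, first_fit) are illustrative assumptions, not taken from any particular allocator, and alignment handling is omitted.

    #include <stddef.h>

    typedef struct free_node {
        size_t size;                        // usable bytes following this header
        struct free_node * next;            // next block on the free list
    } free_node;

    static free_node * free_list;           // head of the singly linked free list

    // First fit: return the first free block large enough for `request`,
    // splitting any worthwhile excess into a new, smaller free block.
    void * first_fit( size_t request ) {
        for ( free_node ** prev = &free_list; *prev != NULL; prev = &(*prev)->next ) {
            free_node * blk = *prev;
            if ( blk->size < request ) continue;                    // too small, keep searching
            if ( blk->size - request >= sizeof(free_node) + 16 ) {  // enough left to split
                free_node * rem = (free_node *)((char *)(blk + 1) + request);
                rem->size = blk->size - request - sizeof(free_node);
                rem->next = blk->next;
                *prev = rem;                                        // remainder replaces blk on the list
                blk->size = request;
            } else {
                *prev = blk->next;                                  // take the whole block; excess is spacing
            }
            return blk + 1;                                         // storage begins after the header
        }
        return NULL;                                                // no fit: heap must be extended
    }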
    479478
     
    489488
    490489The third approach is \newterm{splitting} and \newterm{coalescing algorithms}.
    491 When an object is allocated, if there are no free objects of the requested size, a larger free object may be split into two smaller objects to satisfy the allocation request without obtaining more memory from the operating system.
    492 For example, in the \newterm{buddy system}, a block of free memory is split into two equal chunks, one of those chunks is again split into two equal chunks, and so on until a block just large enough to fit the requested object is created.
    493 When an object is deallocated it is coalesced with the objects immediately before and after it in memory, if they are free, turning them into one larger object.
     490When an object is allocated, if there are no free objects of the requested size, a larger free object is split into two smaller objects to satisfy the allocation request rather than obtaining more memory from the OS.
     491For example, in the \newterm{buddy system}, a block of free memory is split into two equal chunks, one of those chunks is again split, and so on until a minimal block is created that fits the requested object.
     492When an object is deallocated, it is coalesced with the objects immediately before and after it in memory, if they are free, turning them into one larger block.
    494493Coalescing can be done eagerly at each deallocation or lazily when an allocation cannot be fulfilled.
    495 In all cases, coalescing increases allocation latency, hence some allocations can cause unbounded delays during coalescing.
     494In all cases, coalescing increases allocation latency, hence some allocations can cause unbounded delays.
    496495While coalescing does not reduce external fragmentation, the coalesced blocks improve fragmentation quality so future allocations are less likely to cause heap blowup.
    497496% Splitting and coalescing can be used with other algorithms to avoid highly fragmented memory.
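
For the buddy system just described, the two key mechanisms are rounding a request up to a power-of-two block size and locating a block's buddy by flipping a single offset bit; a minimal sketch, assuming the heap base is aligned to the largest block size and ignoring free-list bookkeeping:

    #include <stddef.h>
    #include <stdint.h>

    #define MIN_BLOCK 16                              // smallest buddy block size (bytes)

    // Round a request up to the next power-of-two block size, at least MIN_BLOCK.
    static size_t buddy_round( size_t request ) {
        size_t size = MIN_BLOCK;
        while ( size < request ) size <<= 1;
        return size;
    }

    // The buddy of a block is the adjacent block of the same size that together
    // with it forms the parent block; it is found by flipping one offset bit.
    static void * buddy_of( void * heap_base, void * block, size_t size ) {
        uintptr_t off = (uintptr_t)block - (uintptr_t)heap_base;
        return (char *)heap_base + (off ^ size);
    }

Coalescing then checks whether buddy_of returns a free block of the same size; if so, the two halves are merged and the check repeats at the next larger size.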
     
    501500\label{s:Locality}
    502501
    503 The principle of locality recognizes that programs tend to reference a small set of data, called a working set, for a certain period of time, where a working set is composed of temporal and spatial accesses~\cite{Denning05}.
     502The principle of locality recognizes that programs tend to reference a small set of data, called a \newterm{working set}, for a certain period of time, composed of temporal and spatial accesses~\cite{Denning05}.
    504503% Temporal clustering implies a group of objects are accessed repeatedly within a short time period, while spatial clustering implies a group of objects physically close together (nearby addresses) are accessed repeatedly within a short time period.
    505504% Temporal locality commonly occurs during an iterative computation with a fixed set of disjoint variables, while spatial locality commonly occurs when traversing an array.
    506 Hardware takes advantage of temporal and spatial locality through multiple levels of caching, \ie memory hierarchy.
     505Hardware takes advantage of the working set through multiple levels of caching, \ie memory hierarchy.
    507506% When an object is accessed, the memory physically located around the object is also cached with the expectation that the current and nearby objects will be referenced within a short period of time.
    508 For example, entire cache lines are transferred between memory and cache and entire virtual-memory pages are transferred between disk and memory.
     507For example, entire cache lines are transferred between cache and memory, and entire virtual-memory pages are transferred between memory and disk.
    509508% A program exhibiting good locality has better performance due to fewer cache misses and page faults\footnote{With the advent of large RAM memory, paging is becoming less of an issue in modern programming.}.
    510509
     
    532531\label{s:MutualExclusion}
    533532
    534 \newterm{Mutual exclusion} provides sequential access to the shared management data of the heap.
     533\newterm{Mutual exclusion} provides sequential access to the shared-management data of the heap.
    535534There are two performance issues for mutual exclusion.
    536535First is the overhead necessary to perform (at least) a hardware atomic operation every time a shared resource is accessed.
    537536Second is when multiple threads contend for a shared resource simultaneously, and hence, some threads must wait until the resource is released.
    538537Contention can be reduced in a number of ways:
    539 1) Using multiple fine-grained locks versus a single lock, spreading the contention across a number of locks.
     5381) Using multiple fine-grained locks versus a single lock to spread the contention across a number of locks.
    5405392) Using trylock and generating new storage if the lock is busy, yielding a classic space versus time tradeoff.
    5415403) Using one of the many lock-free approaches for reducing contention on basic data-structure operations~\cite{Oyama99}.
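
The trylock technique (point 2 above) can be sketched as follows; shared_heap_alloc and local_buffer_alloc are hypothetical placeholders for the locked shared-heap path and the uncontended fallback that generates new storage:

    #include <pthread.h>
    #include <stddef.h>

    extern void * shared_heap_alloc( size_t size );   // hypothetical: allocate from the shared heap
    extern void * local_buffer_alloc( size_t size );  // hypothetical: carve new thread-local storage

    static pthread_mutex_t heap_lock = PTHREAD_MUTEX_INITIALIZER;

    // Trylock approach: if the shared heap is busy, generate new storage locally
    // instead of waiting, trading extra space for reduced lock contention.
    void * alloc_trylock( size_t size ) {
        if ( pthread_mutex_trylock( &heap_lock ) == 0 ) {
            void * p = shared_heap_alloc( size );
            pthread_mutex_unlock( &heap_lock );
            return p;
        }
        return local_buffer_alloc( size );
    }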
     
    551550a memory allocator can only affect the latter two.
    552551
    553 Assume two objects, object$_1$ and object$_2$, share a cache line.
    554 \newterm{Program-induced false-sharing} occurs when thread$_1$ passes a reference to object$_2$ to thread$_2$, and then threads$_1$ modifies object$_1$ while thread$_2$ modifies object$_2$.
     552Specifically, assume two objects, O$_1$ and O$_2$, share a cache line, and two threads, T$_1$ and T$_2$.
     553\newterm{Program-induced false-sharing} occurs when T$_1$ passes a reference to O$_2$ to T$_2$, and then T$_1$ modifies O$_1$ while T$_2$ modifies O$_2$.
    555554% Figure~\ref{f:ProgramInducedFalseSharing} shows when Thread$_1$ passes Object$_2$ to Thread$_2$, a false-sharing situation forms when Thread$_1$ modifies Object$_1$ and Thread$_2$ modifies Object$_2$.
    556555% Changes to Object$_1$ invalidate CPU$_2$'s cache line, and changes to Object$_2$ invalidate CPU$_1$'s cache line.
     
    574573% \label{f:FalseSharing}
    575574% \end{figure}
    576 \newterm{Allocator-induced active false-sharing}\label{s:AllocatorInducedActiveFalseSharing} occurs when object$_1$ and object$_2$ are heap allocated and their references are passed to thread$_1$ and thread$_2$, which modify the objects.
     575\newterm{Allocator-induced active false-sharing}\label{s:AllocatorInducedActiveFalseSharing} occurs when O$_1$ and O$_2$ are heap allocated and their references are passed to T$_1$ and T$_2$, which modify the objects.
    577576% For example, in Figure~\ref{f:AllocatorInducedActiveFalseSharing}, each thread allocates an object and loads a cache-line of memory into its associated cache.
    578577% Again, changes to Object$_1$ invalidate CPU$_2$'s cache line, and changes to Object$_2$ invalidate CPU$_1$'s cache line.
     
    580579% is another form of allocator-induced false-sharing caused by program-induced false-sharing.
    581580% When an object in a program-induced false-sharing situation is deallocated, a future allocation of that object may cause passive false-sharing.
    582 when thread$_1$ passes object$_2$ to thread$_2$, and thread$_2$ subsequently deallocates object$_2$, and then object$_2$ is reallocated to thread$_2$ while thread$_1$ is still using object$_1$.
     581when T$_1$ passes O$_2$ to T$_2$, and T$_2$ subsequently deallocates O$_2$, and then O$_2$ is reallocated to T$_2$ while T$_1$ is still using O$_1$.
    583582
    584583
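
A minimal C illustration of the program-induced case: O1 and O2 are adjacent fields that normally share a cache line, T1 updates O1 while T2 updates O2, and the line ping-pongs between the two CPUs' caches; the structure and iteration counts are illustrative only.

    #include <pthread.h>
    #include <stddef.h>

    struct { long O1; long O2; } S;                   // adjacent fields: likely one cache line

    static void * t1( void * arg ) {                  // T1 repeatedly modifies O1 ...
        for ( long i = 0; i < 100000000; i += 1 ) S.O1 += 1;
        return NULL;
    }
    static void * t2( void * arg ) {                  // ... while T2 modifies O2: false sharing
        for ( long i = 0; i < 100000000; i += 1 ) S.O2 += 1;
        return NULL;
    }

    int main( void ) {
        pthread_t h1, h2;
        pthread_create( &h1, NULL, t1, NULL );
        pthread_create( &h2, NULL, t2, NULL );
        pthread_join( h1, NULL );
        pthread_join( h2, NULL );
    }

Padding or aligning each field to a cache-line boundary (commonly 64 bytes) removes the effect.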
     
    593592\label{s:MultiThreadedMemoryAllocatorFeatures}
    594593
    595 The following features are used in the construction of multi-threaded memory-allocators:
    596 \begin{enumerate}[itemsep=0pt]
    597 \item multiple heaps: with or without a global heap, or with or without heap ownership.
    598 \item object containers: with or without ownership, fixed or variable sized, global or local free-lists.
    599 \item hybrid private/public heap
    600 \item allocation buffer
    601 \item lock-free operations
    602 \end{enumerate}
     594The following features are used in the construction of multi-threaded memory-allocators: multiple heaps, user-level threading, ownership, object containers, allocation buffers, and lock-free operations.
    603595The first feature, multiple heaps, pertains to different kinds of heaps.
    604596The second feature, object containers, pertains to the organization of objects within the storage area.
     
    606598
    607599
    608 \subsection{Multiple Heaps}
     600\subsubsection{Multiple Heaps}
    609601\label{s:MultipleHeaps}
    610602
    611603A multi-threaded allocator has potentially multiple threads and heaps.
    612604The multiple threads cause complexity, and multiple heaps are a mechanism for dealing with the complexity.
    613 The spectrum ranges from multiple threads using a single heap, denoted as T:1 (see Figure~\ref{f:SingleHeap}), to multiple threads sharing multiple heaps, denoted as T:H (see Figure~\ref{f:SharedHeaps}), to one thread per heap, denoted as 1:1 (see Figure~\ref{f:PerThreadHeap}), which is almost back to a single-threaded allocator.
     605The spectrum ranges from multiple threads using a single heap, denoted as T:1, to multiple threads sharing multiple heaps, denoted as T:H, to one thread per heap, denoted as 1:1, which is almost back to a single-threaded allocator.
    614606
    615607\begin{figure}
     
    635627\end{figure}
    636628
    637 \paragraph{T:1 model} where all threads allocate and deallocate objects from one heap.
    638 Memory is obtained from the freed objects, or reserved memory in the heap, or from the operating system (OS);
    639 the heap may also return freed memory to the operating system.
     629\paragraph{T:1 model (see Figure~\ref{f:SingleHeap})} where all threads allocate and deallocate objects from one heap.
     630Memory is obtained from the freed objects, or reserved memory in the heap, or from the OS;
     631the heap may also return freed memory to the OS.
    640632The arrows indicate the direction memory conceptually moves for each kind of operation: allocation moves memory along the path from the heap/operating-system to the user application, while deallocation moves memory along the path from the application back to the heap/operating-system.
    641633To safely handle concurrency, a single lock may be used for all heap operations or fine-grained locking for different operations.
    642634Regardless, a single heap may be a significant source of contention for programs with a large amount of memory allocation.
    643635
    644 \paragraph{T:H model} where each thread allocates storage from several heaps depending on certain criteria, with the goal of reducing contention by spreading allocations/deallocations across the heaps.
     636\paragraph{T:H model (see Figure~\ref{f:SharedHeaps})} where each thread allocates storage from several heaps depending on certain criteria, with the goal of reducing contention by spreading allocations/deallocations across the heaps.
    645637The decision on when to create a new heap and which heap a thread allocates from depends on the allocator design.
    646638To determine which heap to access, each thread must point to its associated heap in some way.
     
    673665An alternative implementation is for all heaps to share one reserved memory, which requires a separate lock for the reserved storage to ensure mutual exclusion when acquiring new memory.
    674666Because multiple threads can allocate/free/reallocate adjacent storage, all forms of false sharing may occur.
    675 Other storage-management options are to use @mmap@ to set aside (large) areas of virtual memory for each heap and suballocate each heap's storage within that area, pushing part of the storage management complexity back to the operating system.
     667Other storage-management options are to use @mmap@ to set aside (large) areas of virtual memory for each heap and suballocate each heap's storage within that area, pushing part of the storage management complexity back to the OS.
    676668
    677669% \begin{figure}
     
    684676Multiple heaps increase external fragmentation as the ratio of heaps to threads increases, which can lead to heap blowup.
    685677The external fragmentation experienced by a program with a single heap is now multiplied by the number of heaps, since each heap manages its own free storage and allocates its own reserved memory.
    686 Additionally, objects freed by one heap cannot be reused by other threads without increasing the cost of the memory operations, except indirectly by returning free memory to the operating system, which can be expensive.
    687 Depending on how the operating system provides dynamic storage to an application, returning storage may be difficult or impossible, \eg the contiguous @sbrk@ area in Unix.
    688 In the worst case, a program in which objects are allocated from one heap but deallocated to another heap means these freed objects are never reused.
     678Additionally, objects freed by one heap cannot be reused by other threads without increasing the cost of the memory operations, except indirectly by returning free memory to the OS (see Section~\ref{s:Ownership}).
     679Returning storage to the OS may be difficult or impossible, \eg the contiguous @sbrk@ area in Unix.
     680% In the worst case, a program in which objects are allocated from one heap but deallocated to another heap means these freed objects are never reused.
    689681
    690682Adding a \newterm{global heap} (G) attempts to reduce the cost of obtaining/returning memory among heaps (sharing) by buffering storage within the application address-space.
    691 Now, each heap obtains and returns storage to/from the global heap rather than the operating system.
     683Now, each heap obtains and returns storage to/from the global heap rather than the OS.
    692684Storage is obtained from the global heap only when a heap allocation cannot be fulfilled, and returned to the global heap when a heap's free memory exceeds some threshold.
    693 Similarly, the global heap buffers this memory, obtaining and returning storage to/from the operating system as necessary.
     685Similarly, the global heap buffers this memory, obtaining and returning storage to/from the OS as necessary.
    694686The global heap does not have its own thread and makes no internal allocation requests;
    695687instead, it uses the application thread, which called one of the multiple heaps and then the global heap, to perform operations.
    696688Hence, the worst-case cost of a memory operation includes all these steps.
    697 With respect to heap blowup, the global heap provides an indirect mechanism to move free memory among heaps, which usually has a much lower cost than interacting with the operating system to achieve the same goal and is independent of the mechanism used by the operating system to present dynamic memory to an address space.
    698 
     689With respect to heap blowup, the global heap provides an indirect mechanism to move free memory among heaps, which usually has a much lower cost than interacting with the OS to achieve the same goal and is independent of the mechanism used by the OS to present dynamic memory to an address space.
    699690However, since any thread may indirectly perform a memory operation on the global heap, it is a shared resource that requires locking.
    700691A single lock can be used to protect the global heap or fine-grained locking can be used to reduce contention.
    701692In general, the cost is minimal since the majority of memory operations are completed without the use of the global heap.
    702693
    703 
    704 \paragraph{1:1 model (thread heaps)} where each thread has its own heap eliminating most contention and locking because threads seldom access another thread's heap (see ownership in Section~\ref{s:Ownership}).
     694\paragraph{1:1 model (see Figure~\ref{f:PerThreadHeap})} where each thread has its own heap eliminating most contention and locking because threads seldom access another thread's heap (see Section~\ref{s:Ownership}).
    705695An additional benefit of thread heaps is improved locality due to better memory layout.
    706696As each thread only allocates from its heap, all objects are consolidated in the storage area for that heap, better utilizing each CPU's cache and accessing fewer pages.
     
    708698Thread heaps can also eliminate allocator-induced active false-sharing, if memory is acquired so it does not overlap at crucial boundaries with memory for another thread's heap.
    709699For example, assume page boundaries coincide with cache line boundaries, if a thread heap always acquires pages of memory then no two threads share a page or cache line unless pointers are passed among them.
    710 Hence, allocator-induced active false-sharing cannot occur because the memory for thread heaps never overlaps.
     700% Hence, allocator-induced active false-sharing cannot occur because the memory for thread heaps never overlaps.
    711701
    712702When a thread terminates, there are two options for handling its thread heap.
     
    720710
    721711It is possible to use any of the heap models with user-level (M:N) threading.
    722 However, an important goal of user-level threading is for fast operations (creation/termination/context-switching) by not interacting with the operating system, which allows the ability to create large numbers of high-performance interacting threads ($>$ 10,000).
     712However, an important goal of user-level threading is for fast operations (creation/termination/context-switching) by not interacting with the OS, which allows the ability to create large numbers of high-performance interacting threads ($>$ 10,000).
    723713It is difficult to retain this goal, if the user-threading model is directly involved with the heap model.
    724714Figure~\ref{f:UserLevelKernelHeaps} shows that virtually all user-level threading systems use whatever kernel-level heap-model is provided by the language runtime.
     
    732722\end{figure}
    733723
    734 Adopting this model results in a subtle problem with shared heaps.
    735 With kernel threading, an operation that is started by a kernel thread is always completed by that thread.
    736 For example, if a kernel thread starts an allocation/deallocation on a shared heap, it always completes that operation with that heap even if preempted, \ie any locking correctness associated with the shared heap is preserved across preemption.
     724Adopting user threading results in a subtle problem with shared heaps.
     725With kernel threading, an operation started by a kernel thread is always completed by that thread.
     726For example, if a kernel thread starts an allocation/deallocation on a shared heap, it always completes that operation with that heap, even if preempted, \ie any locking correctness associated with the shared heap is preserved across preemption.
    737727However, this correctness property is not preserved for user-level threading.
    738728A user thread can start an allocation/deallocation on one kernel thread, be preempted (time slice), and continue running on a different kernel thread to complete the operation~\cite{Dice02}.
    739729When the user thread continues on the new kernel thread, it may have pointers into the previous kernel-thread's heap and hold locks associated with it.
    740730To get the same kernel-thread safety, time slicing must be disabled/\-enabled around these operations, so the user thread cannot jump to another kernel thread.
    741 However, eagerly disabling/enabling time-slicing on the allocation/deallocation fast path is expensive, because preemption does not happen that frequently.
     731However, eagerly disabling/enabling time-slicing on the allocation/deallocation fast path is expensive, because preemption is infrequent (milliseconds).
    742732Instead, techniques exist to lazily detect this case in the interrupt handler, abort the preemption, and return to the operation so it can complete atomically.
    743 Occasionally ignoring a preemption should be benign, but a persistent lack of preemption can result in both short and long term starvation;
    744 techniques like rollforward can be used to force an eventual preemption.
     733Occasional ignoring of a preemption should be benign, but a persistent lack of preemption can result in starvation;
     734techniques like rolling forward the preemption to the next context switch can be used.
    745735
    746736
     
    800790% For example, in Figure~\ref{f:AllocatorInducedPassiveFalseSharing}, Object$_2$ may be deallocated to Thread$_2$'s heap initially.
    801791% If Thread$_2$ reallocates Object$_2$ before it is returned to its owner heap, then passive false-sharing may occur.
     792
     793For thread heaps with ownership, it is possible to combine these approaches into a hybrid approach with both private and public heaps.% (see~Figure~\ref{f:HybridPrivatePublicHeap}).
     794The main goal of the hybrid approach is to eliminate locking on thread-local allocation/deallocation, while providing ownership to prevent heap blowup.
     795In the hybrid approach, a thread first allocates from its private heap and second from its public heap if no free memory exists in the private heap.
     796Similarly, a thread first deallocates an object to its private heap, and second to the public heap.
     797Both private and public heaps can allocate/deallocate to/from the global heap if there is no free memory or excess free memory, although an implementation may choose to funnel all interaction with the global heap through one of the heaps.
     798% Note, deallocation from the private to the public (dashed line) is unlikely because there is no obvious advantages unless the public heap provides the only interface to the global heap.
     799Finally, when a thread frees an object it does not own, the object is either freed immediately to its owner's public heap or put in the freeing thread's private heap for delayed ownership, which allows the freeing thread to temporarily reuse an object before returning it to its owner or batch objects for an owner heap into a single return.
     800
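
The allocation and deallocation ordering of this hybrid approach can be summarized in a short control-flow sketch; all types and helper routines here are hypothetical placeholders rather than any allocator's actual interface:

    #include <stddef.h>

    typedef struct heap heap;                             // opaque heap type
    typedef struct thread_ctx {                           // hypothetical per-thread context
        heap * private_heap;                              // unlocked, owner-only
        heap * public_heap;                               // lock protected, reachable by other threads
    } thread_ctx;

    extern void * private_alloc( heap * h, size_t size ); // hypothetical helpers
    extern void * public_alloc( heap * h, size_t size );
    extern void * global_alloc( size_t size );            // global heap, then OS
    extern void private_free( heap * h, void * p );
    extern void public_free( heap * h, void * p );
    extern thread_ctx * owner_of( void * p );             // owning thread of an allocation

    void * hybrid_alloc( thread_ctx * t, size_t size ) {
        void * p = private_alloc( t->private_heap, size );          // 1) private heap: no locking
        if ( p == NULL ) p = public_alloc( t->public_heap, size );  // 2) public heap: locked
        if ( p == NULL ) p = global_alloc( size );                  // 3) global heap / OS
        return p;
    }

    void hybrid_free( thread_ctx * t, void * p ) {
        thread_ctx * owner = owner_of( p );
        if ( owner == t ) private_free( t->private_heap, p );       // fast, unlocked path
        else public_free( owner->public_heap, p );                  // ownership: return to owner
    }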
     801% \begin{figure}
     802% \centering
     803% \input{PrivatePublicHeaps.pstex_t}
     804% \caption{Hybrid Private/Public Heap for Per-thread Heaps}
     805% \label{f:HybridPrivatePublicHeap}
     806% \vspace{10pt}
     807% \input{RemoteFreeList.pstex_t}
     808% \caption{Remote Free-List}
     809% \label{f:RemoteFreeList}
     810% \end{figure}
     811
     812% As mentioned, an implementation may have only one heap interact with the global heap, so the other heap can be simplified.
     813% For example, if only the private heap interacts with the global heap, the public heap can be reduced to a lock-protected free-list of objects deallocated by other threads due to ownership, called a \newterm{remote free-list}.
     814% To avoid heap blowup, the private heap allocates from the remote free-list when it reaches some threshold or it has no free storage.
     815% Since the remote free-list is occasionally cleared during an allocation, this adds to that cost.
     816% Clearing the remote free-list is $O(1)$ if the list can simply be added to the end of the private-heap's free-list, or $O(N)$ if some action must be performed for each freed object.
     817 
     818% If only the public heap interacts with other threads and the global heap, the private heap can handle thread-local allocations and deallocations without locking.
     819% In this scenario, the private heap must deallocate storage after reaching a certain threshold to the public heap (and then eventually to the global heap from the public heap) or heap blowup can occur.
     820% If the public heap does the major management, the private heap can be simplified to provide high-performance thread-local allocations and deallocations.
     821 
     822% The main disadvantage of each thread having both a private and public heap is the complexity of managing two heaps and their interactions in an allocator.
     823% Interestingly, heap implementations often focus on either a private or public heap, giving the impression a single versus a hybrid approach is being used.
     824% In many case, the hybrid approach is actually being used, but the simpler heap is just folded into the complex heap, even though the operations logically belong in separate heaps.
     825% For example, a remote free-list is actually a simple public-heap, but may be implemented as an integral component of the complex private-heap in an allocator, masking the presence of a hybrid approach.
    802826
    803827
     
    817841
    818842
    819 \subsection{Object Containers}
     843\subsubsection{Object Containers}
    820844\label{s:ObjectContainers}
    821845
     
    827851\eg an object is accessed by the program after it is allocated, while the header is accessed by the allocator after it is free.
    828852
    829 The alternative factors common header data to a separate location in memory and organizes associated free storage into blocks called \newterm{object containers} (\newterm{superblocks} in~\cite{Berger00}), as in Figure~\ref{f:ObjectContainer}.
     853An alternative approach factors common header data to a separate location in memory and organizes associated free storage into blocks called \newterm{object containers} (\newterm{superblocks}~\cite{Berger00}), as in Figure~\ref{f:ObjectContainer}.
    830854The header for the container holds information necessary for all objects in the container;
    831855a trailer may also be used at the end of the container.
     
    862886
    863887
    864 \subsubsection{Container Ownership}
     888\paragraph{Container Ownership}
    865889\label{s:ContainerOwnership}
    866890
     
    894918
    895919Additional restrictions may be applied to the movement of containers to prevent active false-sharing.
    896 For example, if a container changes ownership through the global heap, then when a thread allocates an object from the newly acquired container it is actively false-sharing even though no objects are passed among threads.
     920For example, if a container changes ownership through the global heap, then a thread allocating from the newly acquired container is actively false-sharing even though no objects are passed among threads.
    897921Note, once the thread frees the object, no more false sharing can occur until the container changes ownership again.
    898922To prevent this form of false sharing, container movement may be restricted to when all objects in the container are free.
    899 One implementation approach that increases the freedom to return a free container to the operating system involves allocating containers using a call like @mmap@, which allows memory at an arbitrary address to be returned versus only storage at the end of the contiguous @sbrk@ area, again pushing storage management complexity back to the operating system.
     923One implementation approach that increases the freedom to return a free container to the OS involves allocating containers using a call like @mmap@, which allows memory at an arbitrary address to be returned versus only storage at the end of the contiguous @sbrk@ area, again pushing storage management complexity back to the OS.
    900924
    901925% \begin{figure}
     
    930954
    931955
    932 \subsubsection{Container Size}
     956\paragraph{Container Size}
    933957\label{s:ContainerSize}
    934958
     
    941965However, with more objects in a container, there may be more objects that are unallocated, increasing external fragmentation.
    942966With smaller containers, not only are there more containers, but a second new problem arises where objects are larger than the container.
    943 In general, large objects, \eg greater than 64\,KB, are allocated directly from the operating system and are returned immediately to the operating system to reduce long-term external fragmentation.
     967In general, large objects, \eg greater than 64\,KB, are allocated directly from the OS and are returned immediately to the OS to reduce long-term external fragmentation.
    944968If the container size is small, \eg 1\,KB, then a 1.5\,KB object is treated as a large object, which is likely to be inappropriate.
    945969Ideally, it is best to use smaller containers for smaller objects, and larger containers for medium objects, which leads to the issue of locating the container header.
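
One common answer to the header-location issue raised above is to align containers to their (power-of-two) size, so the header of any interior object is found by masking the object address; a sketch assuming 64 KB containers and a hypothetical header layout:

    #include <stdint.h>

    #define CONTAINER_SIZE 0x10000u                   // 64 KB containers, power of two, aligned

    typedef struct container_hdr {
        unsigned object_size;                         // size of every object in this container
        unsigned free_count;                          // plus owner heap, links, etc.
    } container_hdr;

    // Any object address inside a container maps to the container header by
    // clearing the low-order bits, because containers are aligned to their size.
    static container_hdr * header_of( void * obj ) {
        return (container_hdr *)((uintptr_t)obj & ~(uintptr_t)(CONTAINER_SIZE - 1));
    }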
     
    970994
    971995
    972 \subsubsection{Container Free-Lists}
     996\paragraph{Container Free-Lists}
    973997\label{s:containersfreelists}
    974998
     
    10051029
    10061030
    1007 \subsubsection{Hybrid Private/Public Heap}
    1008 \label{s:HybridPrivatePublicHeap}
    1009 
    1010 Section~\ref{s:Ownership} discusses advantages and disadvantages of public heaps (T:H model and with ownership) and private heaps (thread heaps with ownership).
    1011 For thread heaps with ownership, it is possible to combine these approaches into a hybrid approach with both private and public heaps (see~Figure~\ref{f:HybridPrivatePublicHeap}).
    1012 The main goal of the hybrid approach is to eliminate locking on thread-local allocation/deallocation, while providing ownership to prevent heap blowup.
    1013 In the hybrid approach, a thread first allocates from its private heap and second from its public heap if no free memory exists in the private heap.
    1014 Similarly, a thread first deallocates an object to its private heap, and second to the public heap.
    1015 Both private and public heaps can allocate/deallocate to/from the global heap if there is no free memory or excess free memory, although an implementation may choose to funnel all interaction with the global heap through one of the heaps.
    1016 Note, deallocation from the private to the public (dashed line) is unlikely because there is no obvious advantages unless the public heap provides the only interface to the global heap.
    1017 Finally, when a thread frees an object it does not own, the object is either freed immediately to its owner's public heap or put in the freeing thread's private heap for delayed ownership, which allows the freeing thread to temporarily reuse an object before returning it to its owner or batch objects for an owner heap into a single return.
    1018 
    1019 \begin{figure}
    1020 \centering
    1021 \input{PrivatePublicHeaps.pstex_t}
    1022 \caption{Hybrid Private/Public Heap for Per-thread Heaps}
    1023 \label{f:HybridPrivatePublicHeap}
    1024 % \vspace{10pt}
    1025 % \input{RemoteFreeList.pstex_t}
    1026 % \caption{Remote Free-List}
    1027 % \label{f:RemoteFreeList}
    1028 \end{figure}
    1029 
    1030 As mentioned, an implementation may have only one heap interact with the global heap, so the other heap can be simplified.
    1031 For example, if only the private heap interacts with the global heap, the public heap can be reduced to a lock-protected free-list of objects deallocated by other threads due to ownership, called a \newterm{remote free-list}.
    1032 To avoid heap blowup, the private heap allocates from the remote free-list when it reaches some threshold or it has no free storage.
    1033 Since the remote free-list is occasionally cleared during an allocation, this adds to that cost.
    1034 Clearing the remote free-list is $O(1)$ if the list can simply be added to the end of the private-heap's free-list, or $O(N)$ if some action must be performed for each freed object.
    1035 
    1036 If only the public heap interacts with other threads and the global heap, the private heap can handle thread-local allocations and deallocations without locking.
    1037 In this scenario, the private heap must deallocate storage after reaching a certain threshold to the public heap (and then eventually to the global heap from the public heap) or heap blowup can occur.
    1038 If the public heap does the major management, the private heap can be simplified to provide high-performance thread-local allocations and deallocations.
    1039 
    1040 The main disadvantage of each thread having both a private and public heap is the complexity of managing two heaps and their interactions in an allocator.
    1041 Interestingly, heap implementations often focus on either a private or public heap, giving the impression a single versus a hybrid approach is being used.
    1042 In many case, the hybrid approach is actually being used, but the simpler heap is just folded into the complex heap, even though the operations logically belong in separate heaps.
    1043 For example, a remote free-list is actually a simple public-heap, but may be implemented as an integral component of the complex private-heap in an allocator, masking the presence of a hybrid approach.
    1044 
    1045 
    1046 \subsection{Allocation Buffer}
     1031\subsubsection{Allocation Buffer}
    10471032\label{s:AllocationBuffer}
    10481033
    10491034An allocation buffer is reserved memory (see Section~\ref{s:AllocatorComponents}) not yet allocated to the program, and is used for allocating objects when the free list is empty.
    10501035That is, rather than requesting new storage for a single object, an entire buffer is requested from which multiple objects are allocated later.
    1051 Any heap may use an allocation buffer, resulting in allocation from the buffer before requesting objects (containers) from the global heap or operating system, respectively.
     1036Any heap may use an allocation buffer, resulting in allocation from the buffer before requesting objects (containers) from the global heap or OS, respectively.
    10521037The allocation buffer reduces contention and the number of global/operating-system calls.
    10531038For coalescing, a buffer is split into smaller objects by allocations, and recomposed into larger buffer areas during deallocations.
     
    10621047
    10631048Allocation buffers may increase external fragmentation, since some memory in the allocation buffer may never be allocated.
    1064 A smaller allocation buffer reduces the amount of external fragmentation, but increases the number of calls to the global heap or operating system.
     1049A smaller allocation buffer reduces the amount of external fragmentation, but increases the number of calls to the global heap or OS.
    10651050The allocation buffer also slightly increases internal fragmentation, since a pointer is necessary to locate the next free object in the buffer.
    10661051
     
    10681053For example, when a container is created, rather than placing all objects within the container on the free list, the objects form an allocation buffer and are allocated from the buffer as allocation requests are made.
    10691054This lazy method of constructing objects is beneficial in terms of paging and caching.
    1070 For example, although an entire container, possibly spanning several pages, is allocated from the operating system, only a small part of the container is used in the working set of the allocator, reducing the number of pages and cache lines that are brought into higher levels of cache.
    1071 
    1072 
    1073 \subsection{Lock-Free Operations}
     1055For example, although an entire container, possibly spanning several pages, is allocated from the OS, only a small part of the container is used in the working set of the allocator, reducing the number of pages and cache lines that are brought into higher levels of cache.
     1056
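
A bump-pointer sketch of allocating from such a buffer, using a single pointer to locate the next free object as described above; the buffer type and the refill routine are illustrative assumptions:

    #include <stddef.h>

    typedef struct alloc_buffer {
        char * next;                                  // next free byte in the buffer
        char * end;                                   // one past the last usable byte
    } alloc_buffer;

    extern alloc_buffer * refill_buffer( alloc_buffer * buf, size_t size );  // hypothetical refill

    // Bump-pointer allocation from a reserved buffer: objects are carved off the
    // front of the buffer and are not individually returned to it.
    void * buffer_alloc( alloc_buffer * buf, size_t size ) {
        size = ( size + 15 ) & ~(size_t)15;           // maintain 16-byte alignment
        if ( buf->next + size > buf->end ) {
            buf = refill_buffer( buf, size );         // exhausted: obtain more reserved memory
        }
        void * p = buf->next;
        buf->next += size;
        return p;
    }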
     1057
     1058\subsubsection{Lock-Free Operations}
    10741059\label{s:LockFreeOperations}
    10751060
     
    11941179% A sequence of code that is guaranteed to run to completion before being invoked to accept another input is called serially-reusable code.~\cite{SeriallyReusable}\label{p:SeriallyReusable}
    11951180% \end{quote}
    1196 % If a KT is preempted during an allocation operation, the operating system can schedule another KT on the same CPU, which can begin an allocation operation before the previous operation associated with this CPU has completed, invalidating heap correctness.
     1181% If a KT is preempted during an allocation operation, the OS can schedule another KT on the same CPU, which can begin an allocation operation before the previous operation associated with this CPU has completed, invalidating heap correctness.
    11971182% Note, the serially-reusable problem can occur in sequential programs with preemption, if the signal handler calls the preempted function, unless the function is serially reusable.
    1198 % Essentially, the serially-reusable problem is a race condition on an unprotected critical subsection, where the operating system is providing the second thread via the signal handler.
     1183% Essentially, the serially-reusable problem is a race condition on an unprotected critical subsection, where the OS is providing the second thread via the signal handler.
    11991184%
    12001185% Library @librseq@~\cite{librseq} was used to perform a fast determination of the CPU and to ensure all memory operations complete on one CPU using @librseq@'s restartable sequences, which restart the critical subsection after undoing its writes, if the critical subsection is preempted.
     
    12561241A sequence of code that is guaranteed to run to completion before being invoked to accept another input is called serially-reusable code.~\cite{SeriallyReusable}\label{p:SeriallyReusable}
    12571242\end{quote}
    1258 If a KT is preempted during an allocation operation, the operating system can schedule another KT on the same CPU, which can begin an allocation operation before the previous operation associated with this CPU has completed, invalidating heap correctness.
     1243If a KT is preempted during an allocation operation, the OS can schedule another KT on the same CPU, which can begin an allocation operation before the previous operation associated with this CPU has completed, invalidating heap correctness.
    12591244Note, the serially-reusable problem can occur in sequential programs with preemption, if the signal handler calls the preempted function, unless the function is serially reusable.
    1260 Essentially, the serially-reusable problem is a race condition on an unprotected critical subsection, where the operating system is providing the second thread via the signal handler.
     1245Essentially, the serially-reusable problem is a race condition on an unprotected critical subsection, where the OS is providing the second thread via the signal handler.
    12611246
    12621247Library @librseq@~\cite{librseq} was used to perform a fast determination of the CPU and to ensure all memory operations complete on one CPU using @librseq@'s restartable sequences, which restart the critical subsection after undoing its writes, if the critical subsection is preempted.
     
    12731258For the T:H=CPU and 1:1 models, locking is eliminated along the allocation fastpath.
    12741259However, T:H=CPU has poor operating-system support to determine the CPU id (heap id) and prevent the serially-reusable problem for KTs.
    1275 More operating system support is required to make this model viable, but there is still the serially-reusable problem with user-level threading.
     1260More OS support is required to make this model viable, but there is still the serially-reusable problem with user-level threading.
    12761261So the 1:1 model had no atomic actions along the fastpath and no special operating-system support requirements.
    12771262The 1:1 model still has the serially-reusable problem with user-level threading, which is addressed in Section~\ref{s:UserlevelThreadingSupport}, and the greatest potential for heap blowup for certain allocation patterns.
     
    13081293A primary goal of llheap is low latency, hence the name low-latency heap (llheap).
    13091294Two forms of latency are internal and external.
    1310 Internal latency is the time to perform an allocation, while external latency is time to obtain/return storage from/to the operating system.
     1295Internal latency is the time to perform an allocation, while external latency is time to obtain/return storage from/to the OS.
    13111296Ideally latency is $O(1)$ with a small constant.
    13121297
     
    13141299The mitigating factor is that most programs have well behaved allocation patterns, where the majority of allocation operations can be $O(1)$, and heap blowup does not occur without coalescing (although the allocation footprint may be slightly larger).
    13151300
    1316 To obtain $O(1)$ external latency means obtaining one large storage area from the operating system and subdividing it across all program allocations, which requires a good guess at the program storage high-watermark and potential large external fragmentation.
     1301To obtain $O(1)$ external latency means obtaining one large storage area from the OS and subdividing it across all program allocations, which requires a good guess at the program storage high-watermark and potential large external fragmentation.
    13171302Excluding real-time operating-systems, operating-system operations are unbounded, and hence some external latency is unavoidable.
    13181303The mitigating factor is that operating-system calls can often be reduced if a programmer has a sense of the storage high-watermark and the allocator is capable of using this information (see @malloc_expansion@ \pageref{p:malloc_expansion}).
     
    13291314headers per allocation versus containers,
    13301315no coalescing to minimize latency,
    1331 global heap memory (pool) obtained from the operating system using @mmap@ to create and reuse heaps needed by threads,
     1316global heap memory (pool) obtained from the OS using @mmap@ to create and reuse heaps needed by threads,
    13321317local reserved memory (pool) per heap obtained from global pool,
    1333 global reserved memory (pool) obtained from the operating system using @sbrk@ call,
     1318global reserved memory (pool) obtained from the OS using @sbrk@ call,
    13341319optional fast-lookup table for converting allocation requests into bucket sizes,
    13351320optional statistic-counters table for accumulating counts of allocation operations.
     
    13581343Each heap uses segregated free-buckets that have free objects distributed across 91 different sizes from 16 to 4M.
    13591344All objects in a bucket are of the same size.
    1360 The number of buckets used is determined dynamically depending on the crossover point from @sbrk@ to @mmap@ allocation using @mallopt( M_MMAP_THRESHOLD )@, \ie small objects managed by the program and large objects managed by the operating system.
     1345The number of buckets used is determined dynamically depending on the crossover point from @sbrk@ to @mmap@ allocation using @mallopt( M_MMAP_THRESHOLD )@, \ie small objects managed by the program and large objects managed by the OS.
    13611346Each free bucket of a specific size has two lists.
    136213471) A free stack used solely by the KT heap-owner, so push/pop operations do not require locking.
     
    13671352Algorithm~\ref{alg:heapObjectAlloc} shows the allocation outline for an object of size $S$.
    13681353First, the allocation is divided into small (@sbrk@) or large (@mmap@).
    1369 For large allocations, the storage is mapped directly from the operating system.
     1354For large allocations, the storage is mapped directly from the OS.
    13701355For small allocations, $S$ is quantized into a bucket size.
    13711356Quantizing is performed using a binary search over the ordered bucket array.
     
    13781363heap's local pool,
    13791364global pool,
    1380 operating system (@sbrk@).
     1365OS (@sbrk@).
    13811366
    13821367\begin{algorithm}
     
    14431428Algorithm~\ref{alg:heapObjectFreeOwn} shows the de-allocation (free) outline for an object at address $A$ with ownership.
    14441429First, the address is divided into small (@sbrk@) or large (@mmap@).
    1445 For large allocations, the storage is unmapped back to the operating system.
     1430For large allocations, the storage is unmapped back to the OS.
    14461431For small allocations, the bucket associated with the request size is retrieved.
    14471432If the bucket is local to the thread, the allocation is pushed onto the thread's associated bucket.
     
    30443029
    30453030\textsf{pt3} is the only memory allocator where the total dynamic memory goes down in the second half of the program lifetime when the memory is freed by the benchmark program.
    3046 It makes pt3 the only memory allocator that gives memory back to the operating system as it is freed by the program.
     3031It makes pt3 the only memory allocator that gives memory back to the OS as it is freed by the program.
    30473032
    30483033% FOR 1 THREAD
  • doc/papers/llheap/figures/AllocatorComponents.fig

    r2b78949 r8a930c03  
    88-2
    991200 2
    10 6 1275 2025 2700 2625
    11106 2400 2025 2700 2625
    12112 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
     
    14132 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
    1514         2700 2025 2700 2325 2400 2325 2400 2025 2700 2025
    16 -6
    17 4 2 0 50 -1 2 11 0.0000 2 165 1005 2325 2400 Management\001
    1815-6
    19162 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
     
    61582 2 0 1 0 7 60 -1 13 0.000 0 0 -1 0 0 5
    6259         3300 2700 6300 2700 6300 3000 3300 3000 3300 2700
    63 4 0 0 50 -1 2 11 0.0000 2 165 585 3300 1725 Storage\001
     604 0 0 50 -1 2 11 0.0000 2 165 1005 3300 1725 Storage Data\001
    64614 2 0 50 -1 0 11 0.0000 2 165 810 3000 1875 free objects\001
    65624 2 0 50 -1 0 11 0.0000 2 135 1140 3000 2850 reserve memory\001
    66634 1 0 50 -1 0 11 0.0000 2 120 795 2325 1500 Static Zone\001
    67644 1 0 50 -1 0 11 0.0000 2 165 1845 4800 1500 Dynamic-Allocation Zone\001
     654 2 0 50 -1 2 11 0.0000 2 165 1005 2325 2325 Management\001
     664 2 0 50 -1 2 11 0.0000 2 135 375 2325 2525 Data\001
  • doc/theses/colby_parsons_MMAth/Makefile

    r2b78949 r8a930c03  
    9898
    9999${BASE}.dvi : Makefile ${GRAPHS} ${PROGRAMS} ${PICTURES} ${FIGURES} ${SOURCES} ${DATA} \
    100                 style/style.tex ${Macros}/common.tex ${Macros}/indexstyle local.bib ../../bibliography/pl.bib | ${Build}
     100                glossary.tex style/style.tex ${Macros}/common.tex ${Macros}/indexstyle local.bib ../../bibliography/pl.bib | ${Build}
    101101        # Must have *.aux file containing citations for bibtex
    102102        if [ ! -r ${basename $@}.aux ] ; then ${LaTeX} ${basename $@}.tex ; fi
  • doc/theses/colby_parsons_MMAth/benchmarks/actors/cfa/balance.cfa

    r2b78949 r8a930c03  
    3131
    3232d_actor ** actor_arr;
    33 Allocation receive( d_actor & this, start_msg & msg ) with( this ) {
     33allocation receive( d_actor & this, start_msg & msg ) with( this ) {
    3434    for ( i; Set ) {
    3535        *actor_arr[i + gstart] << shared_msg;
     
    3838}
    3939
    40 Allocation receive( d_actor & this, d_msg & msg ) with( this ) {
     40allocation receive( d_actor & this, d_msg & msg ) with( this ) {
    4141    if ( recs == rounds ) return Delete;
    4242    if ( recs % Batch == 0 ) {
     
    5050}
    5151
    52 Allocation receive( filler & this, d_msg & msg ) { return Delete; }
     52allocation receive( filler & this, d_msg & msg ) { return Delete; }
    5353
    5454int main( int argc, char * argv[] ) {
  • doc/theses/colby_parsons_MMAth/benchmarks/actors/cfa/dynamic.cfa

    r2b78949 r8a930c03  
    2424
    2525uint64_t start_time;
    26 Allocation receive( derived_actor & receiver, derived_msg & msg ) {
     26allocation receive( derived_actor & receiver, derived_msg & msg ) {
    2727    if ( msg.cnt >= Times ) {
    2828        printf("%.2f\n", ((double)(bench_time() - start_time)) / ((double)Times) ); // ns
  • doc/theses/colby_parsons_MMAth/benchmarks/actors/cfa/executor.cfa

    r2b78949 r8a930c03  
    2525struct d_msg { inline message; } shared_msg;
    2626
    27 Allocation receive( d_actor & this, d_msg & msg ) with( this ) {
     27allocation receive( d_actor & this, d_msg & msg ) with( this ) {
    2828    if ( recs == rounds ) return Finished;
    2929    if ( recs % Batch == 0 ) {
  • doc/theses/colby_parsons_MMAth/benchmarks/actors/cfa/matrix.cfa

    r2b78949 r8a930c03  
    2424}
    2525
    26 Allocation receive( derived_actor & receiver, derived_msg & msg ) {
     26allocation receive( derived_actor & receiver, derived_msg & msg ) {
    2727    for ( unsigned int i = 0; i < yc; i += 1 ) { // multiply X_row by Y_col and sum products
    2828        msg.Z[i] = 0;
  • doc/theses/colby_parsons_MMAth/benchmarks/actors/cfa/repeat.cfa

    r2b78949 r8a930c03  
    4646
    4747Client * cl;
    48 Allocation receive( Server & this, IntMsg & msg ) { msg.val = 7; *cl << msg; return Nodelete; }
    49 Allocation receive( Server & this, CharMsg & msg ) { msg.val = 'x'; *cl << msg; return Nodelete; }
    50 Allocation receive( Server & this, StateMsg & msg ) { return Finished; }
     48allocation receive( Server & this, IntMsg & msg ) { msg.val = 7; *cl << msg; return Nodelete; }
     49allocation receive( Server & this, CharMsg & msg ) { msg.val = 'x'; *cl << msg; return Nodelete; }
     50allocation receive( Server & this, StateMsg & msg ) { return Finished; }
    5151
    5252void terminateServers( Client & this ) with(this) {
     
    5656}
    5757
    58 Allocation reset( Client & this ) with(this) {
     58allocation reset( Client & this ) with(this) {
    5959    times += 1;
    6060    if ( times == Times ) { terminateServers( this ); return Finished; }
     
    6464}
    6565
    66 Allocation process( Client & this ) with(this) {
     66allocation process( Client & this ) with(this) {
    6767    this.results++;
    6868    if ( results == 2 * Messages ) { return reset( this ); }
     
    7070}
    7171
    72 Allocation receive( Client & this, IntMsg & msg ) { return process( this ); }
    73 Allocation receive( Client & this, CharMsg & msg ) { return process( this ); }
    74 Allocation receive( Client & this, StateMsg & msg ) with(this) {
     72allocation receive( Client & this, IntMsg & msg ) { return process( this ); }
     73allocation receive( Client & this, CharMsg & msg ) { return process( this ); }
     74allocation receive( Client & this, StateMsg & msg ) with(this) {
    7575    for ( i; Messages ) {
    7676        servers[i] << intmsg[i];
  • doc/theses/colby_parsons_MMAth/benchmarks/actors/cfa/static.cfa

    r2b78949 r8a930c03  
    2323
    2424uint64_t start_time;
    25 Allocation receive( derived_actor & receiver, derived_msg & msg ) {
     25allocation receive( derived_actor & receiver, derived_msg & msg ) {
    2626    if ( msg.cnt >= Times ) {
    2727        printf("%.2f\n", ((double)(bench_time() - start_time)) / ((double)Times) ); // ns
  • doc/theses/colby_parsons_MMAth/benchmarks/actors/plotData.py

    r2b78949 r8a930c03  
    160160
    161161                if currVariant == numVariants:
    162                     fig, ax = plt.subplots()
     162                    fig, ax = plt.subplots(layout='constrained')
    163163                    plt.title(name + " Benchmark")
    164164                    plt.ylabel("Runtime (seconds)")
  • doc/theses/colby_parsons_MMAth/benchmarks/channels/plotData.py

    r2b78949 r8a930c03  
    124124
    125125            if currVariant == numVariants:
    126                 fig, ax = plt.subplots()
     126                fig, ax = plt.subplots(layout='constrained')
    127127                plt.title(name + " Benchmark")
    128128                plt.ylabel("Throughput (channel operations)")
  • doc/theses/colby_parsons_MMAth/benchmarks/mutex_stmt/plotData.py

    r2b78949 r8a930c03  
    9797
    9898            if currVariant == numVariants:
    99                 fig, ax = plt.subplots()
     99                fig, ax = plt.subplots(layout='constrained')
    100100                plt.title(name + " Benchmark: " + str(currLocks) + " Locks")
    101101                plt.ylabel("Throughput (entries)")
  • doc/theses/colby_parsons_MMAth/code/basic_actor_example.cfa

    r2b78949 r8a930c03  
    1919}
    2020
    21 Allocation receive( derived_actor & receiver, derived_msg & msg ) {
     21allocation receive( derived_actor & receiver, derived_msg & msg ) {
    2222    printf("The message contained the string: %s\n", msg.word);
    2323    return Finished; // Return allocation status of Finished now that the actor is done work
  • doc/theses/colby_parsons_MMAth/glossary.tex

    r2b78949 r8a930c03  
    3232% Examples from template above
    3333
    34 \newabbreviation{raii}{RAII}{Resource Acquisition Is Initialization}
    35 \newabbreviation{rtti}{RTTI}{Run-Time Type Information}
    36 \newabbreviation{fcfs}{FCFS}{First Come First Served}
    37 \newabbreviation{toctou}{TOCTOU}{time-of-check to time-of-use}
     34\newabbreviation{raii}{RAII}{\Newterm{resource acquisition is initialization}}
     35\newabbreviation{rtti}{RTTI}{\Newterm{run-time type information}}
     36\newabbreviation{fcfs}{FCFS}{\Newterm{first-come first-served}}
     37\newabbreviation{toctou}{TOCTOU}{\Newterm{time-of-check to time-of-use}}
    3838
    3939\newglossaryentry{actor}
  • doc/theses/colby_parsons_MMAth/local.bib

    r2b78949 r8a930c03  
    9595@misc{go:select,
    9696  author = "The Go Programming Language",
    97   title = "src/runtime/chan.go",
     97  title = "src/runtime/select.go",
    9898  howpublished = {\href{https://go.dev/src/runtime/select.go}},
    9999  note = "[Online; accessed 23-May-2023]"
    100100}
    101101
     102@misc{go:selectref,
     103  author = "The Go Programming Language Specification",
     104  title = "Select statements",
     105  howpublished = {\href{https://go.dev/ref/spec#Select\_statements}},
     106  note = "[Online; accessed 23-May-2023]"
     107}
     108
     109@misc{boost:channel,
     110  author = "Boost C++ Libraries",
     111  title = "experimental::basic\_concurrent\_channel",
     112  howpublished = {\href{https://www.boost.org/doc/libs/master/doc/html/boost\_asio/reference/experimental\__basic\_concurrent\_channel.html}},
     113  note = "[Online; accessed 23-May-2023]"
     114}
     115
     116@misc{rust:channel,
     117  author = "The Rust Standard Library",
     118  title = "std::sync::mpsc::sync\_channel",
     119  howpublished = {\href{https://doc.rust-lang.org/std/sync/mpsc/fn.sync\_channel.html}},
     120  note = "[Online; accessed 23-May-2023]"
     121}
     122
     123@misc{rust:select,
     124  author = "The Rust Standard Library",
     125  title = "Macro futures::select",
     126  howpublished = {\href{https://docs.rs/futures/latest/futures/macro.select.html}},
     127  note = "[Online; accessed 23-May-2023]"
     128}
     129
     130@misc{ocaml:channel,
     131  author = "The OCaml Manual",
     132  title = "OCaml library : Event",
     133  howpublished = {\href{https://v2.ocaml.org/api/Event.html}},
     134  note = "[Online; accessed 23-May-2023]"
     135}
     136
     137@misc{haskell:channel,
     138  author = "The Haskell Package Repository",
     139  title = "Control.Concurrent.Chan",
     140  howpublished = {\href{https://hackage.haskell.org/package/base-4.18.0.0/docs/Control-Concurrent-Chan.html}},
     141  note = "[Online; accessed 23-May-2023]"
     142}
     143
     144@misc{linux:select,
     145  author = "Linux man pages",
     146  title = "select(2) — Linux manual page",
     147  howpublished = {\href{https://man7.org/linux/man-pages/man2/select.2.html}},
     148  note = "[Online; accessed 23-May-2023]"
     149}
     150
     151@misc{linux:poll,
     152  author = "Linux man pages",
     153  title = "poll(2) — Linux manual page",
     154  howpublished = {\href{https://man7.org/linux/man-pages/man2/poll.2.html}},
     155  note = "[Online; accessed 23-May-2023]"
     156}
     157
     158@misc{linux:epoll,
     159  author = "Linux man pages",
     160  title = "epoll(7) — Linux manual page",
     161  howpublished = {\href{https://man7.org/linux/man-pages/man7/epoll.7.html}},
     162  note = "[Online; accessed 23-May-2023]"
     163}
     164
     165@article{Ichbiah79,
     166  title={Preliminary Ada reference manual},
     167  author={Ichbiah, Jean D},
     168  journal={ACM Sigplan Notices},
     169  volume={14},
     170  number={6a},
     171  pages={1--145},
     172  year={1979},
     173  publisher={ACM New York, NY, USA}
     174}
     175
     176@misc{cpp:whenany,
     177  author = "C++ reference",
     178  title = "std::experimental::when\_any",
     179  howpublished = {\href{https://en.cppreference.com/w/cpp/experimental/when\_any}},
     180  note = "[Online; accessed 23-May-2023]"
     181}
     182
     183
     184
  • doc/theses/colby_parsons_MMAth/style/style.tex

    r2b78949 r8a930c03  
    1515\newsavebox{\myboxB}
    1616
     17\lstnewenvironment{Golang}[1][]
     18{\lstset{language=Go,literate={<-}{\makebox[2ex][c]{\textless\raisebox{0.4ex}{\rule{0.8ex}{0.075ex}}}}2,
     19        moredelim=**[is][\protect\color{red}]{@}{@}}\lstset{#1}}
     20{}
     21
    1722\lstnewenvironment{java}[1][]
    1823{\lstset{language=java,moredelim=**[is][\protect\color{red}]{@}{@}}\lstset{#1}}
  • doc/theses/colby_parsons_MMAth/text/channels.tex

    r2b78949 r8a930c03  
    1717Additionally all channel operations in CSP are synchronous (no buffering).
     1818Advanced channels as a programming-language feature have been popularized in recent years by the language Go~\cite{Go}, which encourages the use of channels as its fundamental concurrent feature.
    19 It was the popularity of Go channels that lead to their implemention in \CFA.
      19It was the popularity of Go channels that led to their implementation in \CFA.
    2020Neither Go nor \CFA channels have the restrictions of the early channel-based concurrent systems.
     21
     22Other popular languages and libraries that provide channels include C++ Boost~\cite{boost:channel}, Rust~\cite{rust:channel}, Haskell~\cite{haskell:channel}, and OCaml~\cite{ocaml:channel}.
     23Boost channels only support asynchronous (non-blocking) operations, and Rust channels are limited to only having one consumer per channel.
     24Haskell channels are unbounded in size, and OCaml channels are zero-size.
     25These restrictions in Haskell and OCaml are likely due to their functional approach, which results in them both using a list as the underlying data structure for their channel.
     26These languages and libraries are not discussed further, as their channel implementation is not comparable to the bounded-buffer style channels present in Go and \CFA.
    2127
    2228\section{Producer-Consumer Problem}
     
    6167\section{Channel Implementation}
    6268Currently, only the Go programming language provides user-level threading where the primary communication mechanism is channels.
    63 Experiments were conducted that varied the producer-consumer problem algorithm and lock type used inside the channel.
     69Experiments were conducted that varied the producer-consumer algorithm and lock type used inside the channel.
     6470With the exception of non-\gls{fcfs} or non-FIFO algorithms, no algorithm or lock usage in the channel implementation was found to be consistently more performant than Go's choice of algorithm and lock implementation.
    6571Performance of channels can be improved by sharding the underlying buffer \cite{Dice11}.
    66 In doing so the FIFO property is lost, which is undesireable for user-facing channels.
     72However, the FIFO property is lost, which is undesirable for user-facing channels.
    6773Therefore, the low-level channel implementation in \CFA is largely copied from the Go implementation, but adapted to the \CFA type and runtime systems.
    6874As such the research contributions added by \CFA's channel implementation lie in the realm of safety and productivity features.
    6975
    70 The Go channel implementation utilitizes cooperation between threads to achieve good performance~\cite{go:chan}.
    71 The cooperation between threads only occurs when producers or consumers need to block due to the buffer being full or empty.
    72 In these cases the blocking thread stores their relevant data in a shared location and the signalling thread will complete their operation before waking them.
    73 This helps improve performance in a few ways.
    74 First, each thread interacting with the channel with only acquire and release the internal channel lock exactly once.
    75 This decreases contention on the internal lock, as only entering threads will compete for the lock since signalled threads never reacquire the lock.
    76 The other advantage of the cooperation approach is that it eliminates the potential bottleneck of waiting for signalled threads.
    77 The property of acquiring/releasing the lock only once can be achieved without cooperation by \Newterm{baton passing} the lock.
    78 Baton passing is when one thread acquires a lock but does not release it, and instead signals a thread inside the critical section conceptually "passing" the mutual exclusion to the signalled thread.
    79 While baton passing is useful in some algorithms, it results in worse performance than the cooperation approach in channel implementations since all entering threads then need to wait for the blocked thread to reach the front of the ready queue and run before other operations on the channel can proceed.
     76The Go channel implementation utilizes cooperation among threads to achieve good performance~\cite{go:chan}.
     77This cooperation only occurs when producers or consumers need to block due to the buffer being full or empty.
     78In these cases, a blocking thread stores their relevant data in a shared location and the signalling thread completes the blocking thread's operation before waking them;
     79\ie the blocking thread has no work to perform after it unblocks because the signalling threads has done this work.
     80This approach is similar to wait morphing for locks~\cite[p.~82]{Butenhof97} and improves performance in a few ways.
     81First, each thread interacting with the channel only acquires and releases the internal channel lock once.
     82As a result, contention on the internal lock is decreased, as only entering threads compete for the lock as unblocking threads do not reacquire the lock.
     83The other advantage of Go's wait-morphing approach is that it eliminates the bottleneck of waiting for signalled threads to run.
     84Note, the property of acquiring/releasing the lock only once can also be achieved with a different form of cooperation, called \Newterm{baton passing}.
     85Baton passing occurs when one thread acquires a lock but does not release it, and instead signals a thread inside the critical section, conceptually ``passing'' the mutual exclusion from the signalling thread to the signalled thread.
     86The baton-passing approach has threads cooperate to pass mutual exclusion without additional lock acquires or releases;
     87the wait-morphing approach has threads cooperate by completing the signalled thread's operation, thus removing a signalled thread's need for mutual exclusion after unblocking.
     88While baton passing is useful in some algorithms, it results in worse channel performance than the Go approach.
     89In the baton-passing approach, all threads need to wait for the signalled thread to reach the front of the ready queue, context switch, and run before other operations on the channel can proceed, since the signalled thread holds mutual exclusion;
     90in the wait-morphing approach, since the operation is completed before the signal, other threads can continue to operate on the channel without waiting for the signalled thread to run.
    8091
    8192In this work, all channel sizes \see{Sections~\ref{s:ChannelSize}} are implemented with bounded buffers.
     
    100111\subsection{Toggle-able Statistics}
    101112As discussed, a channel is a concurrent layer over a bounded buffer.
    102 To achieve efficient buffering users should aim for as few blocking operations on a channel as possible.
    103 Often to achieve this users may change the buffer size, shard a channel into multiple channels, or tweak the number of producer and consumer threads.
    104 Fo users to be able to make informed decisions when tuning channel usage, toggle-able channel statistics are provided.
    105 The statistics are toggled at compile time via the @CHAN_STATS@ macro to ensure that they are entirely elided when not used.
    106 When statistics are turned on, four counters are maintained per channel, two for producers and two for consumers.
     113To achieve efficient buffering, users should aim for as few blocking operations on a channel as possible.
     114Mechanisms to reduce blocking are: change the buffer size, shard a channel into multiple channels, or tweak the number of producer and consumer threads.
     115For users to be able to make informed decisions when tuning channel usage, toggle-able channel statistics are provided.
     116The statistics are toggled on during the \CFA build by defining the @CHAN_STATS@ macro, which guarantees zero cost when not using this feature.
     117When statistics are turned on, four counters are maintained per channel, two for inserting (producers) and two for removing (consumers).
    107118The two counters per type of operation track the number of blocking operations and total operations.
    108 In the channel destructor the counters are printed out aggregated and also per type of operation.
    109 An example use case of the counters follows.
    110 A user is buffering information between producer and consumer threads and wants to analyze channel performance.
    111 Via the statistics they see that producers block for a large percentage of their operations while consumers do not block often.
    112 They then can use this information to adjust their number of producers/consumers or channel size to achieve a larger percentage of non-blocking producer operations, thus increasing their channel throughput.
     119In the channel destructor, the counters are printed out aggregated and also per type of operation.
     120An example use case is noting that producer inserts are blocking often while consumer removes do not block often.
     121This information can be used to increase the number of consumers to decrease the blocking producer operations, thus increasing the channel throughput.
     122Whereas, increasing the channel size in this scenario is unlikely to produce a benefit because the consumers can never keep up with the producers.
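To make the counter behaviour concrete, the following Go analogue (an illustration with invented names, not the \CFA implementation) wraps a channel and distinguishes operations that complete immediately from those that must block:
\begin{Golang}
package channel // Go analogue of toggle-able per-channel counters; not the CFA implementation

import "sync/atomic"

type countedChan struct {
	ch                   chan int
	sends, blockingSends uint64 // total and blocking insert (producer) operations
	recvs, blockingRecvs uint64 // total and blocking remove (consumer) operations
}

func (c *countedChan) send(v int) {
	atomic.AddUint64(&c.sends, 1)
	select {
	case c.ch <- v: // buffer space available: completes without blocking
	default:
		atomic.AddUint64(&c.blockingSends, 1)
		c.ch <- v // buffer full: this operation blocks
	}
}

func (c *countedChan) recv() int {
	atomic.AddUint64(&c.recvs, 1)
	select {
	case v := <-c.ch: // value available: completes without blocking
		return v
	default:
		atomic.AddUint64(&c.blockingRecvs, 1)
		return <-c.ch // buffer empty: this operation blocks
	}
}
\end{Golang}
A high ratio of blocking to total operations on one side of such counters is the signal, discussed above, for adjusting the buffer size or the number of producers/consumers.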
    113123
    114124\subsection{Deadlock Detection}
    115 The deadlock detection in the \CFA channels is fairly basic.
    116 It only detects the case where threads are blocked on the channel during deallocation.
    117 This case is guaranteed to deadlock since the list holding the blocked thread is internal to the channel and will be deallocated.
    118 If a user maintained a separate reference to a thread and unparked it outside the channel they could avoid the deadlock, but would run into other runtime errors since the thread would access channel data after waking that is now deallocated.
    119 More robust deadlock detection surrounding channel usage would have to be implemented separate from the channel implementation since it would require knowledge about the threading system and other channel/thread state.
     125The deadlock detection in the \CFA channels is fairly basic but detects a very common channel mistake during termination.
     126That is, it detects the case where threads are blocked on the channel during channel deallocation.
     127This case is guaranteed to deadlock since there are no other threads to supply or consume values needed by the waiting threads.
      128Only if a user maintains a separate reference to the blocked threads and manually unblocks them outside the channel can the deadlock be avoided.
     129However, without special semantics, this unblocking would generate other runtime errors where the unblocked thread attempts to access non-existing channel data or even a deallocated channel.
     130More robust deadlock detection needs to be implemented separate from channels since it requires knowledge about the threading system and other channel/thread state.
    120131
    121132\subsection{Program Shutdown}
    122133Terminating concurrent programs is often one of the most difficult parts of writing concurrent code, particularly if graceful termination is needed.
    123 The difficulty of graceful termination often arises from the usage of synchronization primitives that need to be handled carefully during shutdown.
     134Graceful termination can be difficult to achieve with synchronization primitives that need to be handled carefully during shutdown.
    124135It is easy to deadlock during termination if threads are left behind on synchronization primitives.
     126137Additionally, most synchronization primitives are prone to \gls{toctou} issues where there is a race between one thread checking the state of a concurrent object and another thread changing the state.
    126137\gls{toctou} issues with synchronization primitives often involve a race between one thread checking the primitive for blocked threads and another thread blocking on it.
    127138Channels are a particularly hard synchronization primitive to terminate since both sending and receiving to/from a channel can block.
    128 Thus, improperly handled \gls{toctou} issues with channels often result in deadlocks as threads trying to perform the termination may end up unexpectedly blocking in their attempt to help other threads exit the system.
    129 
    130 \paragraph{Go channels} provide a set of tools to help with concurrent shutdown~\cite{go:chan}.
    131 Channels in Go have a @close@ operation and a \Go{select} statement that both can be used to help threads terminate.
     139Thus, improperly handled \gls{toctou} issues with channels often result in deadlocks as threads performing the termination may end up unexpectedly blocking in their attempt to help other threads exit the system.
     140
     141\paragraph{Go channels} provide a set of tools to help with concurrent shutdown~\cite{go:chan} using a @close@ operation in conjunction with the \Go{select} statement.
    132142The \Go{select} statement is discussed in \ref{s:waituntil}, where \CFA's @waituntil@ statement is compared with the Go \Go{select} statement.
    133143
     
    143153Note, panics in Go can be caught, but it is not the idiomatic way to write Go programs.
    144154
    145 While Go's channel closing semantics are powerful enough to perform any concurrent termination needed by a program, their lack of ease of use leaves much to be desired.
     155While Go's channel-closing semantics are powerful enough to perform any concurrent termination needed by a program, their lack of ease of use leaves much to be desired.
    146156Since both closing and sending panic once a channel is closed, a user often has to synchronize the senders (producers) before the channel can be closed to avoid panics.
    147157However, in doing so it renders the @close@ operation nearly useless, as the only utilities it provides are the ability to ensure receivers no longer block on the channel and receive zero-valued elements.
    148158This functionality is only useful if the zero-typed element is recognized as a sentinel value, but if another sentinel value is necessary, then @close@ only provides the non-blocking feature.
    149159To avoid \gls{toctou} issues during shutdown, a busy wait with a \Go{select} statement is often used to add or remove elements from a channel.
    150 Due to Go's asymmetric approach to channel shutdown, separate synchronization between producers and consumers of a channel has to occur during shutdown.
     160Hence, due to Go's asymmetric approach to channel shutdown, separate synchronization between producers and consumers of a channel has to occur during shutdown.
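For reference, the following small Go program (an illustrative snippet, not taken from the benchmark code) demonstrates the close semantics described above:
\begin{Golang}
package main

func main() {
	ch := make(chan int, 8)
	ch <- 42
	close(ch)
	v, ok := <-ch // v == 42, ok == true: buffered values are still delivered after close
	v, ok = <-ch  // v == 0, ok == false: channel drained and closed => zero value
	_, _ = v, ok
	// ch <- 1    // sending on a closed channel panics
	// close(ch)  // closing an already-closed channel panics
}
\end{Golang}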
    151161
    152162\paragraph{\CFA channels} have access to an extensive exception handling mechanism~\cite{Beach21}.
     
    161171When a channel in \CFA is closed, all subsequent calls to the channel raise a resumption exception at the caller.
    162172If the resumption is handled, the caller attempts to complete the channel operation.
    163 However, if channel operation would block, a termination exception is thrown.
     173However, if the channel operation would block, a termination exception is thrown.
    164174If the resumption is not handled, the exception is rethrown as a termination.
    165175These termination exceptions allow for non-local transfer that is used to great effect to eagerly and gracefully shut down a thread.
    166176When a channel is closed, if there are any blocked producers or consumers inside the channel, they are woken up and also have a resumption thrown at them.
    167 The resumption exception, @channel_closed@, has a couple fields to aid in handling the exception.
    168 The exception contains a pointer to the channel it was thrown from, and a pointer to an element.
    169 In exceptions thrown from remove the element pointer will be null.
    170 In the case of insert the element pointer points to the element that the thread attempted to insert.
     177The resumption exception, @channel_closed@, has internal fields to aid in handling the exception.
     178The exception contains a pointer to the channel it is thrown from and a pointer to a buffer element.
     179For exceptions thrown from @remove@, the buffer element pointer is null.
     180For exceptions thrown from @insert@, the element pointer points to the buffer element that the thread attempted to insert.
     181Utility routines @bool is_insert( channel_closed & e );@ and @bool is_remove( channel_closed & e );@ are provided for convenient checking of the element pointer.
    171182This element pointer allows the handler to know which operation failed and also allows the element to not be lost on a failed insert since it can be moved elsewhere in the handler.
    172 Furthermore, due to \CFA's powerful exception system, this data can be used to choose handlers based which channel and operation failed.
    173 Exception handlers in \CFA have an optional predicate after the exception type which can be used to optionally trigger or skip handlers based on the content of an exception.
    174 It is worth mentioning that the approach of exceptions for termination may incur a larger performance cost during termination that the approach used in Go.
    175 This should not be an issue, since termination is rarely an fast-path of an application and ensuring that termination can be implemented correctly with ease is the aim of the exception approach.
     183Furthermore, due to \CFA's powerful exception system, this data can be used to choose handlers based on which channel and operation failed.
     184For example, exception handlers in \CFA have an optional predicate which can be used to trigger or skip handlers based on the content of the matching exception.
     185It is worth mentioning that using exceptions for termination may incur a larger performance cost than the Go approach.
     186However, this should not be an issue, since termination is rarely on the fast-path of an application.
     187In contrast, ensuring termination can be easily implemented correctly is the aim of the exception approach.
    176188
    177189\section{\CFA / Go channel Examples}
    178 To highlight the differences between \CFA's and Go's close semantics, three examples will be presented.
     190To highlight the differences between \CFA's and Go's close semantics, three examples are presented.
    179191The first example is a simple shutdown case, where there are producer threads and consumer threads operating on a channel for a fixed duration.
    180 Once the duration ends, producers and consumers terminate without worrying about any leftover values in the channel.
    181 The second example extends the first example by requiring the channel to be empty upon shutdown.
      192Once the duration ends, producers and consumers terminate immediately, leaving unprocessed elements in the channel.
     193The second example extends the first by requiring the channel to be empty after shutdown.
    182194Both the first and second example are shown in Figure~\ref{f:ChannelTermination}.
    183 
    184 
    185 First the Go solutions to these examples shown in Figure~\ref{l:go_chan_term} are discussed.
    186 Since some of the elements being passed through the channel are zero-valued, closing the channel in Go does not aid in communicating shutdown.
    187 Instead, a different mechanism to communicate with the consumers and producers needs to be used.
    188 This use of an additional flag or communication method is common in Go channel shutdown code, since to avoid panics on a channel, the shutdown of a channel often has to be communicated with threads before it occurs.
    189 In this example, a flag is used to communicate with producers and another flag is used for consumers.
    190 Producers and consumers need separate avenues of communication both so that producers terminate before the channel is closed to avoid panicking, and to avoid the case where all the consumers terminate first, which can result in a deadlock for producers if the channel is full.
    191 The producer flag is set first, then after producers terminate the consumer flag is set and the channel is closed.
    192 In the second example where all values need to be consumed, the main thread iterates over the closed channel to process any remaining values.
    193 
    194 
    195 In the \CFA solutions in Figure~\ref{l:cfa_chan_term}, shutdown is communicated directly to both producers and consumers via the @close@ call.
    196 In the first example where all values do not need to be consumed, both producers and consumers do not handle the resumption and finish once they receive the termination exception.
    197 The second \CFA example where all values must be consumed highlights how resumption is used with channel shutdown.
    198 The @Producer@ thread-main knows to stop producing when the @insert@ call on a closed channel raises exception @channel_closed@.
    199 The @Consumer@ thread-main knows to stop consuming after all elements of a closed channel are removed and the call to @remove@ would block.
    200 Hence, the consumer knows the moment the channel closes because a resumption exception is raised, caught, and ignored, and then control returns to @remove@ to return another item from the buffer.
    201 Only when the buffer is drained and the call to @remove@ would block, a termination exception is raised to stop consuming.
    202 The \CFA semantics allow users to communicate channel shutdown directly through the channel, without having to share extra state between threads.
    203 Additionally, when the channel needs to be drained, \CFA provides users with easy options for processing the leftover channel values in the main thread or in the consumer threads.
    204 If one wishes to consume the leftover values in the consumer threads in Go, extra synchronization between the main thread and the consumer threads is needed.
    205195
    206196\begin{figure}
     
    208198
    209199\begin{lrbox}{\myboxA}
     200\begin{Golang}[aboveskip=0pt,belowskip=0pt]
     201var channel chan int = make( chan int, 128 )
     202var prodJoin chan int = make( chan int, 4 )
     203var consJoin chan int = make( chan int, 4 )
     204var cons_done, prod_done bool = false, false;
     205func producer() {
     206        for {
     207                if prod_done { break }
     208                channel <- 5
     209        }
     210        prodJoin <- 0 // synch with main thd
     211}
     212
     213func consumer() {
     214        for {
     215                if cons_done { break }
     216                <- channel
     217        }
     218        consJoin <- 0 // synch with main thd
     219}
     220
     221
     222func main() {
     223        for j := 0; j < 4; j++ { go consumer() }
     224        for j := 0; j < 4; j++ { go producer() }
     225        time.Sleep( time.Second * 10 )
     226        prod_done = true
     227        for j := 0; j < 4 ; j++ { <- prodJoin }
     228        cons_done = true
     229        close(channel) // ensure no cons deadlock
     230        @for elem := range channel {@
     231                // process leftover values
     232        @}@
     233        for j := 0; j < 4; j++ { <- consJoin }
     234}
     235\end{Golang}
     236\end{lrbox}
     237
     238\begin{lrbox}{\myboxB}
    210239\begin{cfa}[aboveskip=0pt,belowskip=0pt]
    211 channel( size_t ) Channel{ ChannelSize };
    212 
     240channel( size_t ) chan{ 128 };
    213241thread Consumer {};
     242thread Producer {};
     243
     244void main( Producer & this ) {
     245        try {
     246                for ()
     247                        insert( chan, 5 );
     248        } catch( channel_closed * ) {
     249                // unhandled resume or full
     250        }
     251}
    214252void main( Consumer & this ) {
    215     try {
    216         for ( ;; )
    217             remove( Channel );
    218     @} catchResume( channel_closed * ) { @
    219     // handled resume => consume from chan
    220     } catch( channel_closed * ) {
    221         // empty or unhandled resume
    222     }
    223 }
    224 
    225 thread Producer {};
    226 void main( Producer & this ) {
    227     size_t count = 0;
    228     try {
    229         for ( ;; )
    230             insert( Channel, count++ );
    231     } catch ( channel_closed * ) {
    232         // unhandled resume or full
    233     }
    234 }
    235 
    236 int main( int argc, char * argv[] ) {
    237     Consumer c[Consumers];
    238     Producer p[Producers];
    239     sleep(Duration`s);
    240     close( Channel );
    241     return 0;
    242 }
     253        try {
     254                for () { int i = remove( chan ); }
     255        @} catchResume( channel_closed * ) {@
     256                // handled resume => consume from chan
     257        } catch( channel_closed * ) {
     258                // empty or unhandled resume
     259        }
     260}
     261int main() {
     262        Consumer c[4];
     263        Producer p[4];
     264        sleep( 10`s );
     265        close( chan );
     266}
     267
     268
     269
     270
     271
     272
     273
    243274\end{cfa}
    244275\end{lrbox}
    245276
    246 \begin{lrbox}{\myboxB}
    247 \begin{cfa}[aboveskip=0pt,belowskip=0pt]
    248 var cons_done, prod_done bool = false, false;
    249 var prodJoin chan int = make(chan int, Producers)
    250 var consJoin chan int = make(chan int, Consumers)
    251 
    252 func consumer( channel chan uint64 ) {
    253     for {
    254         if cons_done { break }
    255         <-channel
    256     }
    257     consJoin <- 0 // synch with main thd
    258 }
    259 
    260 func producer( channel chan uint64 ) {
    261     var count uint64 = 0
    262     for {
    263         if prod_done { break }
    264         channel <- count++
    265     }
    266     prodJoin <- 0 // synch with main thd
    267 }
    268 
    269 func main() {
    270     channel = make(chan uint64, ChannelSize)
    271     for j := 0; j < Consumers; j++ {
    272         go consumer( channel )
    273     }
    274     for j := 0; j < Producers; j++ {
    275         go producer( channel )
    276     }
    277     time.Sleep(time.Second * Duration)
    278     prod_done = true
    279     for j := 0; j < Producers ; j++ {
    280         <-prodJoin // wait for prods
    281     }
    282     cons_done = true
    283     close(channel) // ensure no cons deadlock
    284     @for elem := range channel { @
    285         // process leftover values
    286     @}@
    287     for j := 0; j < Consumers; j++{
    288         <-consJoin // wait for cons
    289     }
    290 }
    291 \end{cfa}
    292 \end{lrbox}
    293 
    294 \subfloat[\CFA style]{\label{l:cfa_chan_term}\usebox\myboxA}
     277\subfloat[Go style]{\label{l:go_chan_term}\usebox\myboxA}
    295278\hspace*{3pt}
    296279\vrule
    297280\hspace*{3pt}
    298 \subfloat[Go style]{\label{l:go_chan_term}\usebox\myboxB}
     281\subfloat[\CFA style]{\label{l:cfa_chan_term}\usebox\myboxB}
    299282\caption{Channel Termination Examples 1 and 2. Code specific to example 2 is highlighted.}
    300283\label{f:ChannelTermination}
    301284\end{figure}
    302285
    303 The final shutdown example uses channels to implement a barrier.
    304 It is shown in Figure~\ref{f:ChannelBarrierTermination}.
    305 The problem of implementing a barrier is chosen since threads are both producers and consumers on the barrier-internal channels, which removes the ability to easily synchronize producers before consumers during shutdown.
    306 As such, while the shutdown details will be discussed with this problem in mind, they are also applicable to other problems taht have individual threads both producing and consuming from channels.
    307 Both of these examples are implemented using \CFA syntax so that they can be easily compared.
    308 Figure~\ref{l:cfa_chan_bar} uses \CFA-style channel close semantics and Figure~\ref{l:go_chan_bar} uses Go-style close semantics.
    309 In this example it is infeasible to use the Go @close@ call since all threads are both potentially producers and consumers, causing panics on close to be unavoidable without complex synchronization.
    310 As such in Figure~\ref{l:go_chan_bar} to implement a flush routine for the buffer, a sentinel value of @-1@ has to be used to indicate to threads that they need to leave the barrier.
    311 This sentinel value has to be checked at two points.
     286Figure~\ref{l:go_chan_term} shows the Go solution.
     287Since some of the elements being passed through the channel are zero-valued, closing the channel in Go does not aid in communicating shutdown.
     288Instead, a different mechanism to communicate with the consumers and producers needs to be used.
      289Flag variables are common in Go channel-shutdown code to avoid panics on a channel, meaning the channel shutdown has to be communicated to threads before it occurs.
     290Hence, the two flags @cons_done@ and @prod_done@ are used to communicate with the producers and consumers, respectively.
     291Furthermore, producers and consumers need to shutdown separately to ensure that producers terminate before the channel is closed to avoid panicking, and to avoid the case where all the consumers terminate first, which can result in a deadlock for producers if the channel is full.
     292The producer flag is set first;
     293then after all producers terminate, the consumer flag is set and the channel is closed leaving elements in the buffer.
     294To purge the buffer, a loop is added (red) that iterates over the closed channel to process any remaining values.
     295
     296Figure~\ref{l:cfa_chan_term} shows the \CFA solution.
     297Here, shutdown is communicated directly to both producers and consumers via the @close@ call.
     298A @Producer@ thread knows to stop producing when the @insert@ call on a closed channel raises exception @channel_closed@.
     299If a @Consumer@ thread ignores the first resumption exception from the @close@, the exception is reraised as a termination exception and elements are left in the buffer.
      300If a @Consumer@ thread handles the resumption exceptions (red), control returns to complete the remove.
     301A @Consumer@ thread knows to stop consuming after all elements of a closed channel are removed and the consumer would block, which causes a termination raise of @channel_closed@.
     302The \CFA semantics allow users to communicate channel shutdown directly through the channel, without having to share extra state between threads.
     303Additionally, when the channel needs to be drained, \CFA provides users with easy options for processing the leftover channel values in the main thread or in the consumer threads.
     304
     305Figure~\ref{f:ChannelBarrierTermination} shows a final shutdown example using channels to implement a barrier.
      306Go- and \CFA-style solutions are presented, but both are implemented using \CFA syntax so they can be easily compared.
     307Implementing a barrier is interesting because threads are both producers and consumers on the barrier-internal channels, @entryWait@ and @barWait@.
     308The outline for the barrier implementation starts by initially filling the @entryWait@ channel with $N$ tickets in the barrier constructor, allowing $N$ arriving threads to remove these values and enter the barrier.
     309After @entryWait@ is empty, arriving threads block when removing.
     310However, the arriving threads that entered the barrier cannot leave the barrier until $N$ threads have arrived.
     311Hence, the entering threads block on the empty @barWait@ channel until the $N$th arriving thread inserts $N-1$ elements into @barWait@ to unblock the $N-1$ threads calling @remove@.
     312The race between these arriving threads blocking on @barWait@ and the $N$th thread inserting values into @barWait@ does not affect correctness;
     313\ie an arriving thread may or may not block on channel @barWait@ to get its value.
      314Finally, the last thread to remove from @barWait@ with ticket $N-2$ refills channel @entryWait@ with $N$ values to start the next group into the barrier.
     315
      316Now, the two channels make termination synchronization between producers and consumers difficult.
     317Interestingly, the shutdown details for this problem are also applicable to other problems with threads producing and consuming from the same channel.
     318The Go-style solution cannot use the Go @close@ call since all threads are both potentially producers and consumers, causing panics on close to be unavoidable without complex synchronization.
     319As such in Figure \ref{l:go_chan_bar}, a flush routine is needed to insert a sentinel value, @-1@, to inform threads waiting in the buffer they need to leave the barrier.
      320This sentinel value has to be checked at two points along the fast path, and the sentinel values must be daisy-chained through the buffers.
    312321Furthermore, an additional flag @done@ is needed to communicate to threads once they have left the barrier that they are done.
    313 
    314 In the \CFA version~\ref{l:cfa_chan_bar}, the barrier shutdown results in an exception being thrown at threads operating on it, which informs the threads that they must terminate.
     322Also note that in the Go version~\ref{l:go_chan_bar}, the size of the barrier channels has to be larger than in the \CFA version to ensure that the main thread does not block when attempting to clear the barrier.
      323In the \CFA solution~\ref{l:cfa_chan_bar}, the barrier shutdown results in an exception being thrown at threads operating on it, informing waiting threads they must leave the barrier.
    315324This avoids the need to use a separate communication method other than the barrier, and avoids extra conditional checks on the fast path of the barrier implementation.
    316 Also note that in the Go version~\ref{l:go_chan_bar}, the size of the barrier channels has to be larger than in the \CFA version to ensure that the main thread does not block when attempting to clear the barrier.
    317325
    318326\begin{figure}
     
    320328
    321329\begin{lrbox}{\myboxA}
     330\begin{cfa}[aboveskip=0pt,belowskip=0pt]
     331struct barrier {
     332        channel( int ) barWait, entryWait;
     333        int size;
     334};
     335void ?{}( barrier & this, int size ) with(this) {
     336        barWait{size + 1};   entryWait{size + 1};
     337        this.size = size;
     338        for ( i; size )
     339                insert( entryWait, i );
     340}
     341void wait( barrier & this ) with(this) {
     342        int ticket = remove( entryWait );
     343        @if ( ticket == -1 ) { insert( entryWait, -1 ); return; }@
     344        if ( ticket == size - 1 ) {
     345                for ( i; size - 1 )
     346                        insert( barWait, i );
     347                return;
     348        }
     349        ticket = remove( barWait );
     350        @if ( ticket == -1 ) { insert( barWait, -1 ); return; }@
     351        if ( size == 1 || ticket == size - 2 ) { // last ?
     352                for ( i; size )
     353                        insert( entryWait, i );
     354        }
     355}
     356void flush(barrier & this) with(this) {
     357        @insert( entryWait, -1 );   insert( barWait, -1 );@
     358}
     359enum { Threads = 4 };
     360barrier b{Threads};
     361@bool done = false;@
     362thread Thread {};
     363void main( Thread & this ) {
     364        for () {
     365          @if ( done ) break;@
     366                wait( b );
     367        }
     368}
     369int main() {
     370        Thread t[Threads];
     371        sleep(10`s);
     372        done = true;
     373        flush( b );
     374} // wait for threads to terminate
     375\end{cfa}
     376\end{lrbox}
     377
     378\begin{lrbox}{\myboxB}
    322379\begin{cfa}[aboveskip=0pt,belowskip=0pt]
    323380struct barrier {
     
    368425\end{lrbox}
    369426
    370 \begin{lrbox}{\myboxB}
    371 \begin{cfa}[aboveskip=0pt,belowskip=0pt]
    372 struct barrier {
    373         channel( int ) barWait, entryWait;
    374         int size;
    375 };
    376 void ?{}( barrier & this, int size ) with(this) {
    377         barWait{size + 1};   entryWait{size + 1};
    378         this.size = size;
    379         for ( i; size )
    380                 insert( entryWait, i );
    381 }
    382 void wait( barrier & this ) with(this) {
    383         int ticket = remove( entryWait );
    384         @if ( ticket == -1 ) { insert( entryWait, -1 ); return; }@
    385         if ( ticket == size - 1 ) {
    386                 for ( i; size - 1 )
    387                         insert( barWait, i );
    388                 return;
    389         }
    390         ticket = remove( barWait );
    391         @if ( ticket == -1 ) { insert( barWait, -1 ); return; }@
    392         if ( size == 1 || ticket == size - 2 ) { // last ?
    393                 for ( i; size )
    394                         insert( entryWait, i );
    395         }
    396 }
    397 void flush(barrier & this) with(this) {
    398         @insert( entryWait, -1 );   insert( barWait, -1 );@
    399 }
    400 enum { Threads = 4 };
    401 barrier b{Threads};
    402 @bool done = false;@
    403 thread Thread {};
    404 void main( Thread & this ) {
    405         for () {
    406           @if ( done ) break;@
    407                 wait( b );
    408         }
    409 }
    410 int main() {
    411         Thread t[Threads];
    412         sleep(10`s);
    413         done = true;
    414         flush( b );
    415 } // wait for threads to terminate
    416 \end{cfa}
    417 \end{lrbox}
    418 
    419 \subfloat[\CFA style]{\label{l:cfa_chan_bar}\usebox\myboxA}
     427\subfloat[Go style]{\label{l:go_chan_bar}\usebox\myboxA}
    420428\hspace*{3pt}
    421429\vrule
    422430\hspace*{3pt}
    423 \subfloat[Go style]{\label{l:go_chan_bar}\usebox\myboxB}
     431\subfloat[\CFA style]{\label{l:cfa_chan_bar}\usebox\myboxB}
    424432\caption{Channel Barrier Termination}
    425433\label{f:ChannelBarrierTermination}
  • doc/theses/colby_parsons_MMAth/text/waituntil.tex

    r2b78949 r8a930c03  
    1414The ability to wait for the first stall available without spinning can be done with concurrent tools that provide \gls{synch_multiplex}, the ability to wait synchronously for a resource or set of resources.
    1515
    16 % C_TODO: fill in citations in following section
    1716\section{History of Synchronous Multiplexing}
    1817There is a history of tools that provide \gls{synch_multiplex}.
    19 Some of the most well known include the set or unix system utilities signal(2)\cite{}, poll(2)\cite{}, and epoll(7)\cite{}, and the select statement provided by Go\cite{}.
      18Some of the most well-known include the UNIX system utilities select(2)\cite{linux:select}, poll(2)\cite{linux:poll}, and epoll(7)\cite{linux:epoll}, and the select statement provided by Go\cite{go:selectref}.
    2019
    2120Before one can examine the history of \gls{synch_multiplex} implementations in detail, the preceding theory must be discussed.
     
    2726If a guard is false then the resource it guards is considered to not be in the set of resources being waited on.
     2827Guards can be simulated using if statements, but to do so requires $2^N$ if cases, where @N@ is the number of guards.
    29 This transformation from guards to if statements will be discussed further in Section~\ref{}. % C_TODO: fill ref when writing semantics section later
     28The equivalence between guards and exponential if statements comes from an Occam ALT statement rule~\cite{Roscoe88}, which is presented in \CFA syntax in Figure~\ref{f:wu_if}.
     29Providing guards allows for easy toggling of waituntil clauses without introducing repeated code.
     30
     31\begin{figure}
     32\begin{cfa}
     33when( predicate ) waituntil( A ) {}
     34or waituntil( B ) {}
     35// ===
     36if ( predicate ) {
     37    waituntil( A ) {}
     38    or waituntil( B ) {}
     39} else {
     40    waituntil( B ) {}
     41}
     42\end{cfa}
     43\caption{Occam's guard to if statement equivalence shown in \CFA syntax.}
     44\label{f:wu_if}
     45\end{figure}
    3046
    3147Switching to implementations, it is important to discuss the resources being multiplexed.
     
    4460It is worth noting these \gls{synch_multiplex} tools mentioned so far interact directly with the operating system and are often used to communicate between processes.
    4561Later \gls{synch_multiplex} started to appear in user-space to support fast multiplexed concurrent communication between threads.
    46 An early example of \gls{synch_multiplex} is the select statement in Ada.
     62An early example of \gls{synch_multiplex} is the select statement in Ada~\cite[\S~9.7]{Ichbiah79}.
    4763The select statement in Ada allows a task to multiplex over some subset of its own methods that it would like to @accept@ calls to.
    4864Tasks in Ada can be thought of as threads which are an object of a specific class, and as such have methods, fields, etc.
     
    5369The @else@ changes the synchronous multiplexing to asynchronous multiplexing.
    5470If an @else@ clause is in a select statement and no calls to the @accept@ed methods are immediately available the code block associated with the @else@ is run and the task does not block.
    55 The most popular example of user-space \gls{synch_multiplex} is Go with their select statement.
     71
      72A popular example of user-space \gls{synch_multiplex} is Go with its select statement~\cite{go:selectref}.
    5673Go's select statement operates on channels and has the same exclusive-or semantics as the ALT primitive from Occam, and has associated code blocks for each clause like ALT and Ada.
    5774However, unlike Ada and ALT, Go does not provide any guards for their select statement cases.
    5875Go provides a timeout utility and also provides a @default@ clause which has the same semantics as Ada's @else@ clause.
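For concreteness, the following standalone Go program (an invented example; @ch@ is simply a local integer channel) shows the timeout utility and the @default@ clause in Go's @select@ statement:
\begin{Golang}
package main

import (
	"fmt"
	"time"
)

func main() {
	ch := make(chan int, 4)

	select { // synchronous multiplexing with a timeout case
	case v := <-ch:
		fmt.Println("received", v)
	case <-time.After(time.Second): // fires if no other case is ready within 1s
		fmt.Println("timed out")
	}

	select { // asynchronous multiplexing: default acts like Ada's else clause
	case v := <-ch:
		fmt.Println("received", v)
	default: // no case ready right now, so do not block
		fmt.Println("nothing ready")
	}
}
\end{Golang}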
     76
     77\uC provides \gls{synch_multiplex} over futures with their @_Select@ statement and Ada-style \gls{synch_multiplex} over monitor methods with their @_Accept@ statement~\cite{uC++}.
     78Their @_Accept@ statement builds upon the select statement offered by Ada, by offering both @and@ and @or@ semantics, which can be used together in the same statement.
     79These semantics are also supported for \uC's @_Select@ statement.
     80This enables fully expressive \gls{synch_multiplex} predicates.
     81
     82There are many other languages that provide \gls{synch_multiplex}, including Rust's @select!@ over futures~\cite{rust:select}, OCaml's @select@ over channels~\cite{ocaml:channe}, and C++14's @when_any@ over futures~\cite{cpp:whenany}.
      83Note that while C++14 and Rust provide \gls{synch_multiplex}, their implementations leave much to be desired as both rely on busy-wait polling to wait on multiple resources.
    5984
    6085\section{Other Approaches to Synchronous Multiplexing}
     
    6994If the requests for the other resources need to be retracted, the burden falls on the programmer to determine how to synchronize appropriately to ensure that only one resource is delivered.
    7095
    71 
    7296\section{\CFA's Waituntil Statement}
    73 
    74 
    75 
     97The new \CFA \gls{synch_multiplex} utility introduced in this work is the @waituntil@ statement.
     98There is a @waitfor@ statement in \CFA that supports Ada-style \gls{synch_multiplex} over monitor methods, so this @waituntil@ focuses on synchronizing over other resources.
      99All of the \gls{synch_multiplex} features mentioned so far are monomorphic, each waiting on only one kind of resource: select(2) supports file descriptors, Go's select supports channel operations, \uC's @_Select@ supports futures, and Ada's select supports monitor method calls.
     100The waituntil statement in \CFA is polymorphic and provides \gls{synch_multiplex} over any objects that satisfy the trait in Figure~\ref{f:wu_trait}.
     101
     102\begin{figure}
     103\begin{cfa}
     104forall(T & | sized(T))
     105trait is_selectable {
     106    // For registering a waituntil stmt on a selectable type
     107    bool register_select( T &, select_node & );
     108
     109    // For unregistering a waituntil stmt from a selectable type
     110    bool unregister_select( T &, select_node & );
     111
     112    // on_selected is run on the selecting thread prior to executing the statement associated with the select_node
     113    void on_selected( T &, select_node & );
     114};
     115\end{cfa}
     116\caption{Trait for types that can be passed into \CFA's waituntil statement.}
     117\label{f:wu_trait}
     118\end{figure}
     119
      120Currently, locks, channels, futures, and timeouts are supported by the waituntil statement, but this set will be expanded as other use cases arise.
      121The waituntil statement supports guarded clauses, like Ada and Occam, supports both @or@ and @and@ semantics, like \uC, and provides an @else@ for asynchronous multiplexing. An example of \CFA waituntil usage is shown in Figure~\ref{f:wu_example}, where the waituntil statement waits for either @Lock@ to be available, or for a value to be read from @Channel@ into @i@ and for @Future@ to be fulfilled. The semantics of the waituntil statement are discussed in detail in the next section.
     122
     123\begin{figure}
     124\begin{cfa}
     125future(int) Future;
     126channel(int) Channel;
     127owner_lock Lock;
     128int i = 0;
     129
     130waituntil( Lock ) { ... }
     131or when( i == 0 ) waituntil( i << Channel ) { ... }
     132and waituntil( Future ) { ... }
     133\end{cfa}
     134\caption{Example of \CFA's waituntil statement}
     135\label{f:wu_example}
     136\end{figure}
     137
     138\section{Waituntil Semantics}
      139There are two parts of the waituntil semantics to discuss: the semantics of the statement itself, \ie @and@, @or@, @when@ guards, and @else@ semantics, and the semantics of how the waituntil interacts with types like channels, locks, and futures.
     140To start, the semantics of the statement itself will be discussed.
     141
     142\subsection{Waituntil Statement Semantics}
      143The @or@ semantics are the most straightforward and nearly match those laid out in the ALT statement from Occam: the clauses have an exclusive-or relationship, where the first one to become available is run and only one clause is run.
      144\CFA's @or@ semantics differ from ALT semantics in one respect: instead of randomly picking a clause when multiple are available, the clause that appears first in the order of clauses is picked.
     145\eg in the following example, if @foo@ and @bar@ are both available, @foo@ will always be selected since it comes first in the order of waituntil clauses.
     146\begin{cfa}
     147future(int) bar;
     148future(int) foo;
     149waituntil( foo ) { ... }
     150or waituntil( bar ) { ... }
     151\end{cfa}
     152
     153The @and@ semantics match the @and@ semantics used by \uC.
     154When multiple clauses are joined by @and@, the waituntil will make a thread wait for all to be available, but will run the corresponding code blocks \emph{as they become available}.
     155As @and@ clauses are made available, the thread will be woken to run those clauses' code blocks and then the thread will wait again until all clauses have been run.
     156This allows work to be done in parallel while synchronizing over a set of resources, and furthermore gives a good reason to use the @and@ operator.
      157If the @and@ operator waited for all clauses to be available before running, it would not provide much more use than just acquiring those resources one by one in subsequent lines of code.
     158The @and@ operator binds more tightly than the @or@ operator.
      159To give an @or@ operator higher precedence, brackets can be used.
      160\eg the following waituntil unconditionally waits for @C@ and one of either @A@ or @B@, since the @or@ is given higher precedence via brackets.
     161\begin{cfa}
     162(waituntil( A ) { ... }
     163or waituntil( B ) { ... } )
     164and waituntil( C ) { ... }
     165\end{cfa}
     166
     167The guards in the waituntil statement are called @when@ clauses.
     168The @when@ clause is passed a boolean expression.
     169All the @when@ boolean expressions are evaluated before the waituntil statement is run.
     170The guards in Occam's ALT effectively toggle clauses on and off, where a clause will only be evaluated and waited on if the corresponding guard is @true@.
     171The guards in the waituntil statement operate the same way, but require some nuance since both @and@ and @or@ operators are supported.
      172When a guard is false, its clause is removed; conceptually, this removes that clause and its preceding operator from the statement.
     173\eg in the following example the two waituntil statements are semantically the same.
     174\begin{cfa}
     175when(true) waituntil( A ) { ... }
     176or when(false) waituntil( B ) { ... }
     177and waituntil( C ) { ... }
     178// ===
     179waituntil( A ) { ... }
     180and waituntil( C ) { ... }
     181\end{cfa}
     182
     183The @else@ clause on the waituntil has identical semantics to the @else@ clause in Ada.
     184If all resources are not immediately available and there is an @else@ clause, the @else@ clause is run and the thread will not block.
     185
     186\subsection{Waituntil Type Semantics}
     187As described earlier, to support interaction with the waituntil statement a type must support the trait shown in Figure~\ref{f:wu_trait}.
     188The waituntil statement expects types to register and unregister themselves via calls to @register_select@ and @unregister_select@ respectively.
     189When a resource becomes available, @on_selected@ is run.
     190Many types may not need @on_selected@, but it is provided since some types may need to check and set things before the resource can be accessed in the code block.
     191The register/unregister routines in the trait return booleans.
     192The return value of @register_select@ is @true@ if the resource is immediately available, and @false@ otherwise.
     193The return value of @unregister_select@ is @true@ if the corresponding code block should be run after unregistration and @false@ otherwise.
      194The routine @on_selected@ and the return value of @unregister_select@ were needed to support channels as a resource.
      195Channels and their interaction with the waituntil are discussed in more detail in Section~\ref{s:wu_chans}.
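
To make these return values concrete, the following sketch shows a hypothetical, trivially selectable type that is always immediately available; the type and routine bodies are illustrative only and elide any internal synchronization a real resource would need.
\begin{cfa}
struct always_ready {};                 // hypothetical resource that is always available
bool register_select( always_ready &, select_node & ) {
    return true;                        // resource is immediately available
}
bool unregister_select( always_ready &, select_node & ) {
    return false;                       // nothing completed behind the statement's back, so no block to run
}
void on_selected( always_ready &, select_node & ) {
    // no extra checks needed before the corresponding code block runs
}
\end{cfa}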
     196
     197\section{Waituntil Implementation}
      198The waituntil statement is not inherently complex, and can be described in a few steps.
     199The complexity of the statement comes from the consideration of race conditions and synchronization needed when supporting various primitives.
      200The waituntil statement follows these basic steps.
     201
      202First, the waituntil statement creates a @select_node@ per resource that is being waited on.
     203The @select_node@ is an object that stores the waituntil data pertaining to one of the resources.
      204Each @select_node@ is then registered with the corresponding resource.
      205The thread executing the waituntil then enters a loop that runs until the entire waituntil statement is satisfied.
      206In each iteration of the loop the thread attempts to block.
      207If any clauses are satisfied, the block fails and the thread proceeds; otherwise, the block succeeds.
      208After proceeding past the block, all clauses are checked for completion and the completed clauses have their code blocks run.
      209Once the thread escapes the loop, the @select_nodes@ are unregistered from the resources.
      210In the case where the block succeeds, the thread is woken by the thread that marks one of the resources as available.
     211Pseudocode detailing these steps is presented in the following code block.
     212
     213\begin{cfa}
     214select_nodes s[N]; // N select nodes
     215for ( node in s )
     216    register_select( resource, node );
     217while( statement not satisfied ) {
     218    // try to block
     219    for ( resource in waituntil statement )
     220        if ( resource is avail ) run code block
     221}
     222for ( node in s )
     223    unregister_select( resource, node );
     224\end{cfa}
     225
     226These steps give a basic, but mildly inaccurate overview of how the statement works.
      227Digging into some parts of the implementation will shed light on the specifics and fill in the inaccuracies.
     228
     229\subsection{Locks}
     230Locks are one of the resources supported in the waituntil statement.
      231When a thread waits on multiple locks via a waituntil, it enqueues a @select_node@ in each lock's waiting queue.
     232When a @select_node@ reaches the front of the queue and gains ownership of a lock, the blocked thread is notified.
     233The lock will be held until the node is unregistered.
     234To prevent the waiting thread from holding many locks at once and potentially introducing a deadlock, the node is unregistered right after the corresponding code block is executed.
     235This prevents deadlocks since the waiting thread will never hold a lock while waiting on another resource.
      236As such, the only nodes unregistered at the end are the ones whose code blocks have not run.
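
For example, the following sketch (using the @owner_lock@ type from Figure~\ref{f:wu_example}) waits on two locks; each code block runs while holding only its own lock, which is released as soon as that block finishes.
\begin{cfa}
owner_lock A, B;
waituntil( A ) { /* holds only A; A is released after this block */ }
and waituntil( B ) { /* holds only B; B is released after this block */ }
\end{cfa}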
     237
     238\subsection{Timeouts}
     239Timeouts in the waituntil take the form of a duration being passed to a @sleep@ or @timeout@ call.
     240An example is shown in the following code.
     241
     242\begin{cfa}
     243waituntil( sleep( 1`ms ) ) {}
     244waituntil( timeout( 1`s ) ) {} or waituntil( timeout( 2`s ) ) {}
     245waituntil( timeout( 1`ns ) ) {} and waituntil( timeout( 2`s ) ) {}
     246\end{cfa}
     247
      248The timeout implementation highlights a key part of the waituntil semantics: the clause expression is evaluated before the waituntil runs.
      249As such, calls to @sleep@ and @timeout@ do not block, but instead return a type that supports the @is_selectable@ trait.
     250This mechanism is needed for types that want to support multiple operations such as channels that support reading and writing.
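
Since @sleep@ and @timeout@ simply return selectable objects, they compose with other resources, \eg to bound how long a channel read may wait.
The following sketch reuses the channel declaration from Figure~\ref{f:wu_example}:
\begin{cfa}
channel(int) Channel;
int i;
waituntil( i << Channel ) { /* a value arrived and is now in i */ }
or waituntil( sleep( 1`s ) ) { /* no value arrived within one second */ }
\end{cfa}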
     251
     252\subsection{Channels}\label{s:wu_chans}
      253To support waiting on both reading and writing to channels, the operators @?<<?@ and @?>>?@ are used to denote reading and writing to a channel respectively, where the left-hand operand is the value and the right-hand operand is the channel.
     254Channels require significant complexity to wait on for a few reasons.
     255The first reason is that reading or writing to a channel is a mutating operation.
     256What this means is that if a read or write to a channel occurs, the state of the channel has changed.
     257In comparison, for standard locks and futures, if a lock is acquired then released or a future is ready but not accessed, the states of the lock and the future are not modified.
      258In this way, if a waituntil over locks or futures has some resources available that were not consumed, it is not an issue.
     259However, if a thread modifies a channel on behalf of a thread blocked on a waituntil statement, it is important that the corresponding waituntil code block is run, otherwise there is a potentially erroneous mismatch between the channel state and associated side effects.
      260As such, the @unregister_select@ routine has a boolean return that is used by channels to indicate when the operation was completed but the corresponding code block has not yet run.
      261Hence, some channel code blocks may be run as part of the unregister.
      262Furthermore, if there are both @and@ and @or@ operators, the @or@ operators no longer provide exclusive-or semantics, since this race between operations and unregisters exists.
     263
     264It was deemed important that exclusive-or semantics were maintained when only @or@ operators were used, so this situation has been special-cased, and is handled by having all clauses race to set a value \emph{before} operating on the channel.
     265This approach is infeasible in the case where @and@ and @or@ operators are used.
     266To show this consider the following waituntil statement.
     267
     268\begin{cfa}
     269waituntil( i >> A ) {} and waituntil( i >> B ) {}
     270or waituntil( i >> C ) {} and waituntil( i >> D ) {}
     271\end{cfa}
     272
     273If exclusive-or semantics were followed, this waituntil would only run the code blocks for @A@ and @B@, or the code blocks for @C@ and @D@.
      274However, racing before operation completion in this case introduces a race whose complexity increases with the size of the waituntil statement.
      275In the example above, for @i@ to be inserted into @C@ while preserving the exclusive-or, it must be ensured that @i@ can also be inserted into @D@.
     276Furthermore, the race for the @or@ would also need to be won.
     277However, due to TOCTOU issues, one cannot know that all resources are available without acquiring all the internal locks of channels in the subtree.
     278This is not a good solution for two reasons.
      279It is possible that, once all the locks are acquired, the subtree is not satisfied and they must all be released.
     280This would incur high cost for signalling threads and also heavily increase contention on internal channel locks.
     281Furthermore, the waituntil statement is polymorphic and can support resources that do not have internal locks, which also makes this approach infeasible.
      282As such, the exclusive-or semantics are lost when using both @and@ and @or@ operators, since they cannot be supported without significant complexity and performance costs to the waituntil statement.
     283
     284The mechanism by which the predicate of the waituntil is checked is discussed in more detail in Section~\ref{s:wu_guards}.
     285
     286Another consideration introduced by channels is that supporting both reading and writing to a channel in a waituntil means that one waituntil clause may be the notifier for another waituntil clause.
     287This becomes a problem when dealing with the special-cased @or@ where the clauses need to win a race to operate on a channel.
      288When a special-cased @or@ is inserting on one thread while another special-cased @or@ consuming is blocked on another thread, there is not one but two races that must be consolidated by the inserting thread.
      289(The race can occur in the opposite case with a blocked producer and signalling consumer too.)
      290For the inserting thread to know that the insert succeeded, it needs to win the race for its own waituntil and the race for the other waituntil.
      291Go solves this problem in its select statement by acquiring the internal locks of all channels before registering the select on the channels.
      292This eliminates the race since no other thread can operate on the blocked channel while its lock is held.
     293
     294This approach is not used in \CFA since the waituntil is polymorphic.
      295Not all types in a waituntil have an internal lock, and when using non-channel types acquiring all the locks incurs extra unneeded overhead.
      296Instead, this race is consolidated in \CFA in two phases by having an intermediate pending status value for the race.
      297This case is detectable, and if detected, the thread attempting to signal first races to set its own race flag to the pending value.
      298If it succeeds, it then attempts to set the consumer's race flag to its success value.
      299If the producer successfully sets the consumer race flag, then the operation can proceed; if not, the signalling thread sets its own race flag back to the initial value.
      300If any other threads attempt to set the producer's flag and see a pending value, they wait until the value changes before proceeding, ensuring that, should the producer fail, the signal is not lost.
     301This protocol ensures that signals will not be lost and that the two races can be resolved in a safe manner.
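
The following sketch outlines the shape of this two-phase protocol using GCC atomic builtins; the flag states and the routine name are hypothetical, and the surrounding channel bookkeeping is elided.
\begin{cfa}
enum { AVAIL, PENDING, SAT };   // hypothetical race-flag states
// run by the signalling (producer) thread: consolidate its own race and the blocked thread's race
bool try_signal( int * mine, int * theirs ) {
    int expected = AVAIL;
    // phase 1: tentatively claim this thread's own clause
    if ( ! __atomic_compare_exchange_n( mine, &expected, PENDING, false,
            __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST ) ) return false;
    expected = AVAIL;
    // phase 2: try to win the blocked thread's race
    if ( __atomic_compare_exchange_n( theirs, &expected, SAT, false,
            __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST ) ) {
        __atomic_store_n( mine, SAT, __ATOMIC_SEQ_CST );    // both races won: operation proceeds
        return true;
    }
    __atomic_store_n( mine, AVAIL, __ATOMIC_SEQ_CST );      // lost the second race: retract so the signal is not lost
    return false;
}
// any thread that reads PENDING on a flag waits until the value changes before acting on that clause
\end{cfa}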
     302
      303Channels in \CFA have exception-based shutdown mechanisms that the waituntil statement needs to support.
      304These exception mechanisms are what motivated the @on_selected@ routine.
      305This routine is needed by channels to detect if they are closed upon waking from a waituntil statement, ensuring that the appropriate behaviour is taken.
     306
     307\subsection{Guards and Statement Predicate}\label{s:wu_guards}
     308Checking for when a synchronous multiplexing utility is done is trivial when it has an or/xor relationship, since any resource becoming available means that the blocked thread can proceed.
      310In \uC and \CFA, the \gls{synch_multiplex} utilities involve both @and@ and @or@ operators, which makes checking for completion of the statement more difficult.
     310
      311The \uC @_Select@ statement solves this problem by constructing a tree of the resources, where the internal nodes are operators and the leaves are the resources.
      312The internal nodes also store the status of each of the subtrees beneath them.
      313When resources become available, their status is modified, and the status of the leaf nodes percolates up into the internal nodes, updating the state of the statement.
      314Once the root of the tree has both subtrees marked as @true@, the statement is complete.
     315As an optimization, when the internal nodes are updated, their subtrees marked as @true@ are effectively pruned and are not touched again.
     316To support \uC's @_Select@ statement guards, the tree prunes the branch if the guard is false.
     317
     318The \CFA waituntil statement blocks a thread until a set of resources have become available that satisfy the underlying predicate.
     319The waiting condition of the waituntil statement can be represented as a predicate over the resources, joined by the waituntil operators, where a resource is @true@ if it is available, and @false@ otherwise.
     320In \CFA, this representation is used as the mechanism to check if a thread is done waiting on the waituntil.
     321Leveraging the compiler, a routine is generated per waituntil that is passed the statuses of the resources and returns a boolean that is @true@ when the waituntil is done, and false otherwise.
     322To support guards on the \CFA waituntil statement, the status of a resource disabled by a guard is set to ensure that the predicate function behaves as if that resource is no longer part of the predicate.
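
As a sketch, for the statement @waituntil( A ) {} and waituntil( B ) {} or waituntil( C ) {}@, the generated predicate could take the following form (the routine and parameter names are illustrative), where a clause disabled by a guard has its status fixed so the overall result is unaffected:
\begin{cfa}
// hypothetical generated predicate: 'and' binds tighter than 'or'
bool when_cond( bool A_avail, bool B_avail, bool C_avail ) {
    return ( A_avail && B_avail ) || C_avail;
}
\end{cfa}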
     323
      324In \uC, the @_Select@ statement supports operators both inside and outside of its clauses.
     325\eg in the following example the code blocks will run once their corresponding predicate inside the round braces is satisfied.
     326
     327% C_TODO put this is uC++ code style not cfa-style
     328\begin{cfa}
      329Future_ISM<int> A, B, C, D, E;
     330_Select( A || B && C ) { ... }
     331and _Select( D && E ) { ... }
     332\end{cfa}
     333
      334This is more expressive than the waituntil statement in \CFA.
      335In \CFA, since the waituntil statement supports more kinds of resources than just futures, implementing operators inside clauses was avoided for a few reasons.
     336As an example, suppose \CFA supported operators inside clauses and consider the code snippet in Figure~\ref{f:wu_inside_op}.
     337
     338\begin{figure}
     339\begin{cfa}
     340owner_lock A, B, C, D;
     341waituntil( A && B ) { ... }
     342or waituntil( C && D ) { ... }
     343\end{cfa}
     344\caption{Example of unsupported operators inside clauses in \CFA.}
     345\label{f:wu_inside_op}
     346\end{figure}
     347
     348If the waituntil in Figure~\ref{f:wu_inside_op} works with the same semantics as described and acquires each lock as it becomes available, it opens itself up to possible deadlocks since it is now holding locks and waiting on other resources.
     349As such other semantics would be needed to ensure that this operation is safe.
      350One possibility is to use \CC's @scoped_lock@ approach that was described in Section~\ref{s:DeadlockAvoidance}; however, the potential for livelock leaves much to be desired.
     351Another possibility would be to use resource ordering similar to \CFA's @mutex@ statement, but that alone is not sufficient if the resource ordering is not used everywhere.
     352Additionally, using resource ordering could conflict with other semantics of the waituntil statement.
     353To show this conflict, consider if the locks in Figure~\ref{f:wu_inside_op} were ordered @D@, @B@, @C@, @A@.
     354If all the locks are available, it becomes complex to both respect the ordering of the waituntil in Figure~\ref{f:wu_inside_op} when choosing which code block to run and also respect the lock ordering of @D@, @B@, @C@, @A@ at the same time.
     355One other way this could be implemented is to wait until all resources for a given clause are available before proceeding to acquire them, but this also quickly becomes a poor approach.
      356This approach does not work due to TOCTOU issues, as it is not possible to ensure that the full set of resources is available without holding them all first.
     357Operators inside clauses in \CFA could potentially be implemented with careful circumvention of the problems involved, but it was not deemed an important feature when taking into account the runtime cost that would need to be paid to handle these situations.
     358The problem of operators inside clauses also becomes a difficult issue to handle when supporting channels.
      359If internal operators were supported, it would require some way to ensure that channels with internal operators are modified if and only if the corresponding code block is run, but that is not feasible for the reasons described in the exclusive-or portion of Section~\ref{s:wu_chans}.
     360
     361\section{Waituntil Performance}
     362The two \gls{synch_multiplex} utilities that are in the realm of comparability with the \CFA waituntil statement are the Go @select@ statement and the \uC @_Select@ statement.
     363As such, two microbenchmarks are presented, one for Go and one for \uC to contrast the systems.
      364The similar utilities discussed at the start of this chapter in C, Ada, Rust, \CC, and OCaml are either not meaningful or not feasible to benchmark against.
     365The select(2) and related utilities in C are not comparable since they are system calls that go into the kernel and operate on file descriptors, whereas the waituntil exists solely in userspace.
      366Ada's @select@ only operates on methods, which in \CFA is provided by the @waitfor@ utility, so it is not feasible to benchmark against the @waituntil@, which cannot wait on the same resource.
      367Rust and \CC only offer a busy-wait based approach, which is not meaningfully comparable to a blocking approach.
      368OCaml's @select@ waits on channels that are not comparable with \CFA and Go channels, which makes the OCaml @select@ infeasible to compare with Go's @select@ and \CFA's @waituntil@.
      369Given the differences in features, polymorphism, and expressibility between the waituntil, @select@, and @_Select@, the aim of the microbenchmarking in this chapter is to show that these implementations lie in the same realm of performance, not to pick a winner.
     370
     371\subsection{Channel Benchmark}
     372The channel microbenchmark compares \CFA's waituntil and Go's select, where the resource being waited on is a set of channels.
     373
     374%C_TODO explain benchmark
     375
     376%C_TODO show results
     377
     378%C_TODO discuss results
     379
     380\subsection{Future Benchmark}
     381The future benchmark compares \CFA's waituntil with \uC's @_Select@, with both utilities waiting on futures.
     382
     383%C_TODO explain benchmark
     384
     385%C_TODO show results
     386
     387%C_TODO discuss results
  • doc/theses/colby_parsons_MMAth/thesis.tex

    r2b78949 r8a930c03  
    111111    colorlinks=true,        % false: boxed links; true: colored links
    112112    linkcolor=blue,         % color of internal links
    113     citecolor=blue,        % color of links to bibliography
     113    citecolor=blue,         % color of links to bibliography
    114114    filecolor=magenta,      % color of file links
    115     urlcolor=cyan           % color of external links
     115    urlcolor=cyan,          % color of external links
     116    breaklinks=true
    116117}
    117118\ifthenelse{\boolean{PrintVersion}}{   % for improved print quality, change some hyperref options
     
    126127% \usepackage[acronym]{glossaries}
    127128\usepackage[automake,toc,abbreviations]{glossaries-extra} % Exception to the rule of hyperref being the last add-on package
     129\renewcommand*{\glstextformat}[1]{\textcolor{black}{#1}}
    128130% If glossaries-extra is not in your LaTeX distribution, get it from CTAN (http://ctan.org/pkg/glossaries-extra),
    129131% although it's supposed to be in both the TeX Live and MikTeX distributions. There are also documentation and
  • doc/user/figures/EHMHierarchy.fig

    r2b78949 r8a930c03  
    2929        1 1 1.00 60.00 90.00
    3030         4950 1950 4950 1725
    31 4 1 0 50 -1 0 13 0.0000 2 135 225 1950 1650 IO\001
    32 4 1 0 50 -1 0 13 0.0000 2 135 915 4950 1650 Arithmetic\001
    33 4 1 0 50 -1 0 13 0.0000 2 150 330 1350 2100 File\001
    34 4 1 0 50 -1 0 13 0.0000 2 135 735 2550 2100 Network\001
    35 4 1 0 50 -1 0 13 0.0000 2 180 1215 3750 2100 DivideByZero\001
    36 4 1 0 50 -1 0 13 0.0000 2 150 810 4950 2100 Overflow\001
    37 4 1 0 50 -1 0 13 0.0000 2 150 915 6000 2100 Underflow\001
    38 4 1 0 50 -1 0 13 0.0000 2 180 855 3450 1200 Exception\001
     314 1 0 50 -1 0 12 0.0000 2 135 225 1950 1650 IO\001
     324 1 0 50 -1 0 12 0.0000 2 135 915 4950 1650 Arithmetic\001
     334 1 0 50 -1 0 12 0.0000 2 150 330 1350 2100 File\001
     344 1 0 50 -1 0 12 0.0000 2 135 735 2550 2100 Network\001
     354 1 0 50 -1 0 12 0.0000 2 180 1215 3750 2100 DivideByZero\001
     364 1 0 50 -1 0 12 0.0000 2 150 810 4950 2100 Overflow\001
     374 1 0 50 -1 0 12 0.0000 2 150 915 6000 2100 Underflow\001
     384 1 0 50 -1 0 12 0.0000 2 180 855 3450 1200 Exception\001
  • doc/user/user.tex

    r2b78949 r8a930c03  
    1111%% Created On       : Wed Apr  6 14:53:29 2016
    1212%% Last Modified By : Peter A. Buhr
    13 %% Last Modified On : Mon Aug 22 23:43:30 2022
    14 %% Update Count     : 5503
     13%% Last Modified On : Mon Jun  5 21:18:29 2023
     14%% Update Count     : 5521
    1515%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
    1616
     
    108108\huge \CFA Team (past and present) \medskip \\
    109109\Large Andrew Beach, Richard Bilson, Michael Brooks, Peter A. Buhr, Thierry Delisle, \smallskip \\
    110 \Large Glen Ditchfield, Rodolfo G. Esteves, Aaron Moss, Colby Parsons, Rob Schluntz, \smallskip \\
    111 \Large Fangren Yu, Mubeen Zulfiqar
     110\Large Glen Ditchfield, Rodolfo G. Esteves, Jiada Liang, Aaron Moss, Colby Parsons \smallskip \\
     111\Large Rob Schluntz, Fangren Yu, Mubeen Zulfiqar
    112112}% author
    113113
     
    169169Like \Index*[C++]{\CC{}}, there may be both old and new ways to achieve the same effect.
    170170For example, the following programs compare the C, \CFA, and \CC I/O mechanisms, where the programs output the same result.
    171 \begin{flushleft}
    172 \begin{tabular}{@{}l@{\hspace{1em}}l@{\hspace{1em}}l@{}}
    173 \multicolumn{1}{@{}c@{\hspace{1em}}}{\textbf{C}}        & \multicolumn{1}{c}{\textbf{\CFA}}     & \multicolumn{1}{c@{}}{\textbf{\CC}}   \\
     171\begin{center}
     172\begin{tabular}{@{}lll@{}}
     173\multicolumn{1}{@{}c}{\textbf{C}}       & \multicolumn{1}{c}{\textbf{\CFA}}     & \multicolumn{1}{c@{}}{\textbf{\CC}}   \\
    174174\begin{cfa}[tabsize=3]
    175175#include <stdio.h>$\indexc{stdio.h}$
     
    199199\end{cfa}
    200200\end{tabular}
    201 \end{flushleft}
     201\end{center}
    202202While \CFA I/O \see{\VRef{s:StreamIOLibrary}} looks similar to \Index*[C++]{\CC{}}, there are important differences, such as automatic spacing between variables and an implicit newline at the end of the expression list, similar to \Index*{Python}~\cite{Python}.
    203203
     
    856856still works.
    857857Nevertheless, reversing the default action would have a non-trivial effect on case actions that compound, such as the above example of processing shell arguments.
    858 Therefore, to preserve backwards compatibility, it is necessary to introduce a new kind of ©switch© statement, called \Indexc{choose}, with no implicit fall-through semantics and an explicit fall-through if the last statement of a case-clause ends with the new keyword \Indexc{fallthrough}/\Indexc{fallthru}, \eg:
     858Therefore, to preserve backwards compatibility, it is necessary to introduce a new kind of ©switch© statement, called \Indexc{choose}, with no implicit fall-through semantics and an explicit fall-through if the last statement of a case-clause ends with the new keyword \Indexc{fallthrough}/\-\Indexc{fallthru}, \eg:
    859859\begin{cfa}
    860860®choose® ( i ) {
     
    11671167\end{cfa}
    11681168\end{itemize}
    1169 \R{Warning}: specifying the down-to range maybe unexcepted because the loop control \emph{implicitly} switches the L and H values (and toggles the increment/decrement for I):
      1169\R{Warning}: specifying the down-to range may be unexpected because the loop control \emph{implicitly} switches the L and H values (and toggles the increment/decrement for I):
    11701170\begin{cfa}
    11711171for ( i; 1 ~ 10 )       ${\C[1.5in]{// up range}$
     
    11731173for ( i; ®10 -~ 1® )    ${\C{// \R{WRONG down range!}}\CRT}$
    11741174\end{cfa}
    1175 The reason for this sematics is that the range direction can be toggled by adding/removing the minus, ©'-'©, versus interchanging the L and H expressions, which has a greater chance of introducing errors.
     1175The reason for this semantics is that the range direction can be toggled by adding/removing the minus, ©'-'©, versus interchanging the L and H expressions, which has a greater chance of introducing errors.
    11761176
    11771177
     
    22562256Days days = Mon; // enumeration type declaration and initialization
    22572257\end{cfa}
    2258 The set of enums are injected into the variable namespace at the definition scope.
    2259 Hence, enums may be overloaded with enum/variable/function names.
    2260 \begin{cfa}
     2258The set of enums is injected into the variable namespace at the definition scope.
     2259Hence, enums may be overloaded with variable, enum, and function names.
     2260\begin{cfa}
     2261int Foo;                        $\C{// type/variable separate namespaces}$
    22612262enum Foo { Bar };
    22622263enum Goo { Bar };       $\C[1.75in]{// overload Foo.Bar}$
    2263 int Foo;                        $\C{// type/variable separate namespace}$
    22642264double Bar;                     $\C{// overload Foo.Bar, Goo.Bar}\CRT$
    22652265\end{cfa}
     
    23012301Hence, the value of enum ©Mon© is 0, ©Tue© is 1, ...\,, ©Sun© is 6.
    23022302If an enum value is specified, numbering continues by one from that value for subsequent unnumbered enums.
    2303 If an enum value is an expression, the compiler performs constant-folding to obtain a constant value.
     2303If an enum value is a \emph{constant} expression, the compiler performs constant-folding to obtain a constant value.
    23042304
    23052305\CFA allows other integral types with associated values.
     
    23132313\begin{cfa}
    23142314// non-integral numeric
    2315 enum( ®double® ) Math { PI_2 = 1.570796, PI = 3.141597,  E = 2.718282 }
      2315enum( ®double® ) Math { PI_2 = 1.570796, PI = 3.141593, E = 2.718282 };
    23162316// pointer
    2317 enum( ®char *® ) Name { Fred = "Fred",  Mary = "Mary",  Jane = "Jane" };
     2317enum( ®char *® ) Name { Fred = "Fred",  Mary = "Mary", Jane = "Jane" };
    23182318int i, j, k;
    23192319enum( ®int *® ) ptr { I = &i,  J = &j,  K = &k };
    2320 enum( ®int &® ) ref { I = i,  J = j,  K = k };
     2320enum( ®int &® ) ref { I = i,   J = j,   K = k };
    23212321// tuple
    23222322enum( ®[int, int]® ) { T = [ 1, 2 ] };
     
    23612361\begin{cfa}
    23622362enum( char * ) Name2 { ®inline Name®, Jack = "Jack", Jill = "Jill" };
    2363 enum ®/* inferred */®  Name3 { ®inline Name2®, Sue = "Sue", Tom = "Tom" };
     2363enum ®/* inferred */® Name3 { ®inline Name2®, Sue = "Sue", Tom = "Tom" };
    23642364\end{cfa}
    23652365Enumeration ©Name2© inherits all the enums and their values from enumeration ©Name© by containment, and a ©Name© enumeration is a subtype of enumeration ©Name2©.
     
    38183818                                   "[ output-file (default stdout) ] ]";
    38193819                } // choose
    3820         } catch( ®Open_Failure® * ex; ex->istream == &in ) {
     3820        } catch( ®open_failure® * ex; ex->istream == &in ) { $\C{// input file errors}$
    38213821                ®exit® | "Unable to open input file" | argv[1];
    3822         } catch( ®Open_Failure® * ex; ex->ostream == &out ) {
     3822        } catch( ®open_failure® * ex; ex->ostream == &out ) { $\C{// output file errors}$
    38233823                ®close®( in );                                          $\C{// optional}$
    38243824                ®exit® | "Unable to open output file" | argv[2];
     
    40384038
    40394039\item
    4040 \Indexc{sepDisable}\index{manipulator!sepDisable@©sepDisable©} and \Indexc{sepEnable}\index{manipulator!sepEnable@©sepEnable©} toggle printing the separator.
     4040\Indexc{sepDisable}\index{manipulator!sepDisable@©sepDisable©} and \Indexc{sepEnable}\index{manipulator!sepEnable@©sepEnable©} globally toggle printing the separator.
    40414041\begin{cfa}[belowskip=0pt]
    40424042sout | sepDisable | 1 | 2 | 3; $\C{// turn off implicit separator}$
     
    40534053
    40544054\item
    4055 \Indexc{sepOn}\index{manipulator!sepOn@©sepOn©} and \Indexc{sepOff}\index{manipulator!sepOff@©sepOff©} toggle printing the separator with respect to the next printed item, and then return to the global separator setting.
     4055\Indexc{sepOn}\index{manipulator!sepOn@©sepOn©} and \Indexc{sepOff}\index{manipulator!sepOff@©sepOff©} locally toggle printing the separator with respect to the next printed item, and then return to the global separator setting.
    40564056\begin{cfa}[belowskip=0pt]
    40574057sout | 1 | sepOff | 2 | 3; $\C{// turn off implicit separator for the next item}$
     
    412941296
    41304130\end{cfa}
    4131 Note, a terminating ©nl© is merged (overrides) with the implicit newline at the end of the ©sout© expression, otherwise it is impossible to to print a single newline
      4131Note, a terminating ©nl© is merged (overrides) with the implicit newline at the end of the ©sout© expression; otherwise, it is impossible to print a single newline.
    41324132\item
    41334133\Indexc{nlOn}\index{manipulator!nlOn@©nlOn©} implicitly prints a newline at the end of each output expression.