Changeset 218685e for doc/theses


Timestamp:
Jul 5, 2023, 11:53:24 AM (12 months ago)
Author:
Peter A. Buhr <pabuhr@…>
Branches:
master
Children:
7ce70e2
Parents:
3883609
Message:

more proofreading of actor chapter

File:
1 edited

  • doc/theses/colby_parsons_MMAth/text/actors.tex

    r3883609 r218685e  
    630630\end{enumerate}
    631631
    632 There is one last piece to the stealing puzzle.
    633 When gulping a queue there are two steps:
     632\subsection{Stealing Problem}
     633
     634Each queue access involving a thief is protected using spinlock @mutex_lock@, and contention among thieves should be low.
     635However, to achieve the goal of almost zero contention for the victim, it is necessary to remove this lock around the only critical section involving the victim.
     636The victim critical-section is gulping a queue, which involves two steps:
     637\begin{cfa}
     638temp = worker_queues[x];
     639// preemption and steal
     640transfer( temp->owned_queue, temp->c_queue ); // @being_processed@ set in transfer with mutual exclusion
     641\end{cfa}
     642where @transfer@ moves the top pointer from @c_queue@ to @owned_queue@ and sets the top pointer of @c_queue@ to @0p@ (null) to partition the mailbox.
     643Specifically,
    634644\begin{enumerate}[topsep=5pt,itemsep=3pt,parsep=0pt]
    635645\item
    636 First the victim must dereference its current queue pointer from @worker_queues@
    637 
    638 \item
    639 Then the victim calls @transfer@ which gulps the queue.
     646The victim must dereference its current queue pointer from @worker_queues@.
     647\item
     648The victim calls @transfer@ to gulp the queue.
    640649\end{enumerate}
    641 If a thief steals the queue pointer after the victim dereferences it and before the victim calls @transfer@ (between the two steps), this poses a problem.
    642 The thief could potentially gulp from the queue and process it at the same time as the victim.
    643 This would violate both the mutual exclusion and the message ordering guarantees.
    644 Preventing this race from happening would require either a lock acquire or an atomic operation on the victim's fast-path.
    645 Either a lock or atomic operation here would be costly and would create contention between the thief and the victim; instead the implementation allows the race to occur and resolves the race lazily.
     650If the victim is preempted after the dereference, a thief can steal the queue pointer before the victim calls @transfer@.
     651The thief then races ahead, transitions back to a victim, searches its mailbox queues, finds the stolen non-empty queue, and gulps it.
     652The original victim now restarts and gulps from the stolen queue pointed to by its dereference, even though the thief put an empty mailbox queue in place for the victim.
     653At this point, the victim and thief are possibly processing the same messages, which violates mutual exclusion and the message-ordering guarantee.
     654Preventing this race requires either a lock acquire or an atomic operation on the victim's fast-path to guarantee the victim's gulp retrieves the empty mailbox queue installed by the thief.
     655However, any form of locking here creates contention between thief and victim.
     656
     657The alternative to locking is allowing the race and resolving it lazily.
    646658% As mentioned, there is a race between a victim gulping and a thief stealing because gulping partitions a mailbox queue making it ineligible for stealing.
    647659% Furthermore, after a thief steals, there is moment when victim gulps but the queue no longer
     
    654666% These two gulps cannot be allowed to happen concurrently.
    655667% If any behaviours from a queue are run by two workers at a time it violates both mutual exclusion and the actor ordering guarantees.
    656 To resolve the race, each mailbox header stores a @being_processed@ flag that is atomically set to @true@ when a queue is gulped.
    657 The flag indicates that a queue is being processed by a worker and is set back to @false@ once the local processing is finished.
    658 If a worker, either victim or thief, attempts to gulp a queue and finds that the @being_processed@ flag is @true@, it does not gulp the queue and moves on to the next queue in its range.
     668To resolve the race, each mailbox header stores a @being_processed@ flag that is atomically set when a queue is transferred.
     669The flag indicates that a queue is being processed by a worker and is reset once the local processing is finished.
     670If a worker, either victim or thief turned victim, attempts to gulp a queue and finds that the @being_processed@ flag is set, it does not gulp the queue and moves on to the next queue in its range.
    659671This resolves the race no matter the winner.
    660 If the thief manages to steal a queue and gulp it first, the victim may see the flag up and skip gulping the queue.
    661 Alternatively, if the victim wins the race, the thief may see the flag up and skip gulping the queue.
    662 There is a final case where the race occurs and is resolved with no gulps skipped if the winner of the race finishes processing the queue before the loser attempts to gulp!
    663 This race consolidation is a source of contention between victims and thieves as they both may try to acquire a lock on the same queue.
    664 However, the window for this race is very small, making this contention rare.
    665 This is why this mechanism is zero-victim-cost in practice but not in theory.
    666 By collecting statistics on failed gulps due to the @being_processed@ flag, it is found that this contention occurs ~0.05\% of the time when a gulp occurs (1 in every 2000 gulps).
    667 Hence, the claim is made that this stealing mechanism has zero-victim-cost in practice.
     672If the thief wins the race, it steals and gulps the queue, and the victim sees the flag set and skips gulping the queue.
     673If the victim wins the race, it gulps the queue, and the thief sees the flag set and skips gulping the queue.
     674
     675There is a final pathological case where the race occurs and is resolved with \emph{both} gulps occurring.
     676Here, the winner of the race finishes processing the queue and resets the @being_processed@ flag.
     677Then the loser unblocks and completes its gulp by transferring a \emph{stolen queue} and atomically setting the @being_processed@ flag.
     678The loser is now processing messages from the winner's queue, which is safe because the winner ignores this queue as @being_processed@ is set.
     679The loser never attempts to process this stolen queue again because its next queue cycle sees the swapped queue from the thief (which may or may not be empty at this point).
     680This race is now the only source of contention between victim and thief as they both try to acquire a lock on the same queue during a transfer.
     681In theory, this scenario can cascade across multiple workers all performing this sequence across multiple queues.
     682However, the window for this race is extremely small, making this contention rare.
     683Furthermore, the chance of a cascading scenario across multiple workers is even rarer.
     684% The rarity is why this mechanism is zero-victim-cost in practice but not in theory.
     685Statistics collected on failed gulps due to the @being_processed@ flag show this contention occurs $\approx$0.05\% of the time when a gulp occurs (1 in every 2000 gulps).
     686\PAB{You need to explain the experiment that gathered this information.}
     687Hence, the claim is made that this stealing mechanism has (probabilistically) zero-victim-cost in practice.
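The lazy flag-based resolution can be sketched in plain C; the following is a minimal model, not the thesis implementation (the names @mailbox@, @try_gulp@, and @finish_processing@ are illustrative), where an atomic compare-and-swap guarantees at most one of victim or thief claims a queue at a time:

```c
// Illustrative model of the being_processed protocol, not the thesis code.
#include <stdatomic.h>
#include <stdbool.h>

typedef struct mailbox {
	atomic_bool being_processed;	// set while a worker processes this queue
	int pending;			// stand-in for the queued behaviours
} mailbox;

// Attempt to claim a queue for gulping; on failure the worker simply
// moves on to the next queue in its range, resolving the race lazily.
static bool try_gulp( mailbox * mb ) {
	bool expected = false;
	return atomic_compare_exchange_strong( &mb->being_processed, &expected, true );
}

// Reset the flag once local processing of the gulped messages finishes.
static void finish_processing( mailbox * mb ) {
	atomic_store( &mb->being_processed, false );
}
```

Whichever of victim or thief wins the compare-and-swap processes the messages; the loser sees the flag set and skips the queue, matching the skip behaviour described above.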
    668688
    669689\subsection{Queue Pointer Swap}\label{s:swap}
     
    689709bool DCAS( val * addr1, val * addr2, val old1, val old2, val new1, val new2 ) {
    690710        if ( ( *addr1 == old1 ) && ( *addr2 == old2 ) ) {
    691         *addr1 = new1;
    692         *addr2 = new2;
    693         return true;
    694     }
    695     return false;
     711                *addr1 = new1;
     712                *addr2 = new2;
     713                return true;
     714        }
     715        return false;
    696716}
    697717\end{cfa}
     
    704724\begin{cfa}
    705725bool DAS( val * addr1, val * addr2 ) {
    706     return DCAS( addr1, addr2, *addr1, *addr2, *addr2, *addr1 );
     726        return DCAS( addr1, addr2, *addr1, *addr2, *addr2, *addr1 );
    707727}
    708728\end{cfa}
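As a sanity check, the @DCAS@/@DAS@ semantics above can be transcribed into plain sequential C; this model is deliberately non-atomic (a real @DCAS@ requires hardware support or locking), and the generic @val@ type is assumed here to be a pointer:

```c
// Sequential (non-atomic) model of the DCAS/DAS semantics above.
#include <stdbool.h>

typedef void * val;	// queue pointers in the thesis; generic here

bool DCAS( val * addr1, val * addr2, val old1, val old2, val new1, val new2 ) {
	if ( ( *addr1 == old1 ) && ( *addr2 == old2 ) ) {
		*addr1 = new1;	// both locations updated together
		*addr2 = new2;
		return true;
	}
	return false;
}

// Double Atomic Swap: exchange the contents of the two locations.
bool DAS( val * addr1, val * addr2 ) {
	// the old values are those just read, so the DCAS succeeds and swaps
	return DCAS( addr1, addr2, *addr1, *addr2, *addr2, *addr1 );
}
```

After a successful @DAS@, each location holds the other's original value, which is exactly the queue-pointer exchange the stealing mechanism needs.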