Changeset 218685e
- Timestamp: Jul 5, 2023, 11:53:24 AM
- Branches: master
- Children: 7ce70e2
- Parents: 3883609
- Files: 1 edited
Legend:
- Unmodified (space prefix)
- Added (+ prefix)
- Removed (- prefix)
doc/theses/colby_parsons_MMAth/text/actors.tex
r3883609 → r218685e:
 \end{enumerate}
 
-There is one last piece to the stealing puzzle.
-When gulping a queue there are two steps:
+\subsection{Stealing Problem}
+
+Each queue access involving a thief is protected using spinlock @mutex_lock@, and contention among thieves should be low.
+However, to achieve the goal of almost zero contention for the victim, it is necessary to remove this lock around the only critical section involving the victim.
+The victim critical-section is gulping a queue, which involves two steps:
+\begin{cfa}
+temp = worker_queues[x];
+// preemption and steal
+transfer( temp->owned_queuem, temp->c_queue ); // @being_processed@ set in transfer with mutual exclusion
+\end{cfa}
+where @transfer@ moves the top pointer from @c_queue@ to @owned_queuem@ and sets the top pointer of @c_queue@ to @0p@ (null) to partition the mailbox.
+Specifically,
 \begin{enumerate}[topsep=5pt,itemsep=3pt,parsep=0pt]
 \item
-First the victim must dereference its current queue pointer from @worker_queues@.
-\item
-Then the victim calls @transfer@ which gulps the queue.
+The victim must dereference its current queue pointer from @worker_queues@.
+\item
+The victim calls @transfer@ to gulp the queue.
 \end{enumerate}
-If a thief steals the queue pointer after the victim dereferences it and before the victim calls @transfer@ (between the two steps), this poses a problem.
-The thief could potentially gulp from the queue and process it at the same time as the victim.
-This would violate both the mutual exclusion and the message ordering guarantees.
-Preventing this race from happening would require either a lock acquire or an atomic operation on the victim's fast-path.
-Either a lock or atomic operation here would be costly and would create contention between the thief and the victim; instead the implementation allows the race to occur and resolves the race lazily.
+If the victim is preempted after the dereference, a thief can steal the queue pointer before the victim calls @transfer@.
+The thief then races ahead, transitions back to a victim, searches its mailbox queues, finds the stolen non-empty queue, and gulps it.
+The original victim now restarts and gulps from the stolen queue pointed to by its dereference, even though the thief put an empty mailbox queue in place for the victim.
+At this point, the victim and thief are possibly processing the same messages, which violates mutual exclusion and the message-ordering guarantee.
+Preventing this race requires either a lock acquire or an atomic operation on the victim's fast-path to guarantee the victim's gulp retrieves the empty mailbox queue installed by the thief.
+However, any form of locking here creates contention between thief and victim.
+
+The alternative to locking is allowing the race and resolving it lazily.
 % As mentioned, there is a race between a victim gulping and a thief stealing because gulping partitions a mailbox queue making it ineligible for stealing.
 % Furthermore, after a thief steals, there is moment when victim gulps but the queue no longer
…
 % These two gulps cannot be allowed to happen concurrently.
 % If any behaviours from a queue are run by two workers at a time it violates both mutual exclusion and the actor ordering guarantees.
-To resolve the race, each mailbox header stores a @being_processed@ flag that is atomically set to @true@ when a queue is gulped.
-The flag indicates that a queue is being processed by a worker and is set back to @false@ once the local processing is finished.
-If a worker, either victim or thief, attempts to gulp a queue and finds that the @being_processed@ flag is @true@, it does not gulp the queue and moves on to the next queue in its range.
+To resolve the race, each mailbox header stores a @being_processed@ flag that is atomically set when a queue is transferred.
+The flag indicates that a queue is being processed by a worker and is reset once the local processing is finished.
+If a worker, either victim or thief turned victim, attempts to gulp a queue and finds that the @being_processed@ flag is set, it does not gulp the queue and moves on to the next queue in its range.
 This resolves the race no matter the winner.
-If the thief manages to steal a queue and gulp it first, the victim may see the flag up and skip gulping the queue.
-Alternatively, if the victim wins the race, the thief may see the flag up and skip gulping the queue.
-There is a final case where the race occurs and is resolved with no gulps skipped if the winner of the race finishes processing the queue before the loser attempts to gulp!
-This race consolidation is a source of contention between victims and thieves as they both may try to acquire a lock on the same queue.
-However, the window for this race is very small, making this contention rare.
-This is why this mechanism is zero-victim-cost in practice but not in theory.
-By collecting statistics on failed gulps due to the @being_processed@ flag, it is found that this contention occurs ~0.05\% of the time when a gulp occurs (1 in every 2000 gulps).
-Hence, the claim is made that this stealing mechanism has zero-victim-cost in practice.
+If the thief wins the race, it steals and gulps the queue, and the victim sees the flag set and skips gulping the queue.
+If the victim wins the race, it gulps the queue, and the thief sees the flag set and skips gulping the queue.
+
+There is a final pathological case where the race occurs and is resolved with \emph{both} gulps occurring.
+Here, the winner of the race finishes processing the queue and resets the @being_processed@ flag.
+Then the loser resumes and completes its gulp by transferring a \emph{stolen queue} and atomically setting the @being_processed@ flag.
+The loser is now processing messages from the winner's queue, which is safe because the winner ignores this queue while @being_processed@ is set.
+The loser never attempts to process this stolen queue again because its next queue cycle sees the swapped queue from the thief (which may or may not be empty at this point).
+This race is now the only source of contention between victim and thief, as both try to acquire a lock on the same queue during a transfer.
+In theory, this scenario can cascade across multiple workers all performing this sequence across multiple queues.
+However, the window for this race is extremely small, making this contention rare.
+Furthermore, the chance of a cascading scenario across multiple workers is rarer still.
+% The rarity is why this mechanism is zero-victim-cost in practice but not in theory.
+By collecting statistics on failed gulps due to the @being_processed@ flag, this contention occurs $\approx$0.05\% of the time a gulp occurs (1 in every 2000 gulps).
+\PAB{You need to explain the experiment that gathered this information.}
+Hence, the claim is made that this stealing mechanism has (probabilistically) zero-victim-cost in practice.
 
 \subsection{Queue Pointer Swap}\label{s:swap}
…
 bool DCAS( val * addr1, val * addr2, val old1, val old2, val new1, val new2 ) {
 	if ( ( *addr1 == old1 ) && ( *addr2 == old2 ) ) {
 		*addr1 = new1;
 		*addr2 = new2;
 		return true;
 	}
 	return false;
 }
 \end{cfa}
…
 \begin{cfa}
 bool DAS( val * addr1, val * addr2 ) {
 	return DCAS( addr1, addr2, *addr1, *addr2, *addr2, *addr1 );
 }
 \end{cfa}