Index: doc/bibliography/pl.bib
===================================================================
--- doc/bibliography/pl.bib	(revision 9b15e05aeaebeccf18406fb219713f33a144d58d)
+++ doc/bibliography/pl.bib	(revision c107f4ecc663a3ac0d0b52ca7337d2ed7db1224b)
@@ -648,4 +648,15 @@
     year	= 2008,
     pages	= {8-15},
+}
+
+@article{Joung00,
+    author	= {Joung, Yuh-Jzer},
+    title	= {Asynchronous group mutual exclusion},
+    journal	= {Distributed Computing},
+    year	= {2000},
+    month	= {Nov},
+    volume	= {13},
+    number	= {4},
+    pages	= {189--206},
 }
 
Index: doc/papers/concurrency/Paper.tex
===================================================================
--- doc/papers/concurrency/Paper.tex	(revision 9b15e05aeaebeccf18406fb219713f33a144d58d)
+++ doc/papers/concurrency/Paper.tex	(revision c107f4ecc663a3ac0d0b52ca7337d2ed7db1224b)
@@ -1179,13 +1179,13 @@
 \begin{cfa}
 thread Adder {
-    int * row, cols, * subtotal;			$\C{// communication}$
+    int * row, cols, & subtotal;			$\C{// communication}$
 };
 void ?{}( Adder & adder, int row[], int cols, int & subtotal ) {
-    adder.[ row, cols, subtotal ] = [ row, cols, &subtotal ];
+    adder.[ row, cols, &subtotal ] = [ row, cols, &subtotal ];
 }
 void main( Adder & adder ) with( adder ) {
-    *subtotal = 0;
+    subtotal = 0;
     for ( int c = 0; c < cols; c += 1 ) {
-		*subtotal += row[c];
+		subtotal += row[c];
     }
 }
@@ -1213,5 +1213,5 @@
 
 Uncontrolled non-deterministic execution is meaningless.
-To reestablish meaningful execution requires mechanisms to reintroduce determinism (control non-determinism), called synchronization and mutual exclusion, where \newterm{synchronization} is a timing relationship among threads and \newterm{mutual exclusion} is an access-control mechanism on data shared by threads.
+Reestablishing meaningful execution requires mechanisms to reintroduce determinism (control non-determinism), called synchronization and mutual exclusion, where synchronization is a timing relationship among threads and mutual exclusion is an access-control mechanism on data shared by threads.
 Since many deterministic challenges appear with the use of mutable shared state, some languages/libraries disallow it (Erlang~\cite{Erlang}, Haskell~\cite{Haskell}, Akka~\cite{Akka} (Scala)).
 In these paradigms, interaction among concurrent objects is performed by stateless message-passing~\cite{Thoth,Harmony,V-Kernel} or other paradigms closely relate to networking concepts (\eg channels~\cite{CSP,Go}).
@@ -1233,36 +1233,32 @@
 
 
-\subsection{Basics}
-
-Non-determinism requires concurrent systems to offer support for mutual-exclusion and synchronization.
-Mutual-exclusion is the concept that only a fixed number of threads can access a critical section at any given time, where a critical section is a group of instructions on an associated portion of data that requires the restricted access.
-On the other hand, synchronization enforces relative ordering of execution and synchronization tools provide numerous mechanisms to establish timing relationships among threads.
-
-
-\subsubsection{Mutual-Exclusion}
-
-As mentioned above, mutual-exclusion is the guarantee that only a fix number of threads can enter a critical section at once.
+\subsection{Mutual Exclusion}
+
+A group of instructions manipulating a specific instance of shared data that must be performed atomically is called an (individual) \newterm{critical-section}~\cite{Dijkstra65}.
+A generalization is a \newterm{group critical-section}~\cite{Joung00}, where multiple tasks with the same session may use the resource simultaneously, but different sessions may not use the resource simultaneously.
+The readers/writer problem~\cite{Courtois71} is an instance of a group critical-section, where readers have the same session and all writers have a unique session.
+\newterm{Mutual exclusion} enforces that only the correct number of threads use a critical section at the same time.
+
 However, many solutions exist for mutual exclusion, which vary in terms of performance, flexibility and ease of use.
-Methods range from low-level locks, which are fast and flexible but require significant attention to be correct, to  higher-level concurrency techniques, which sacrifice some performance in order to improve ease of use.
-Ease of use comes by either guaranteeing some problems cannot occur (\eg being deadlock free) or by offering a more explicit coupling between data and corresponding critical section.
+Methods range from low-level locks, which are fast and flexible but require significant attention for correctness, to higher-level concurrency techniques, which sacrifice some performance to improve ease of use.
+Ease of use comes by either guaranteeing some problems cannot occur (\eg deadlock freedom), or by offering a more explicit coupling between shared data and its critical section.
 For example, the \CC @std::atomic<T>@ offers an easy way to express mutual-exclusion on a restricted set of operations (\eg reading/writing large types atomically).
-Another challenge with low-level locks is composability.
-Locks have restricted composability because it takes careful organizing for multiple locks to be used while preventing deadlocks.
-Easing composability is another feature higher-level mutual-exclusion mechanisms often offer.
-
-
-\subsubsection{Synchronization}
-
-As with mutual-exclusion, low-level synchronization primitives often offer good performance and good flexibility at the cost of ease of use.
-Again, higher-level mechanisms often simplify usage by adding either better coupling between synchronization and data (\eg message passing) or offering a simpler solution to otherwise involved challenges.
+However, a significant challenge with (low-level) locks is composability because it takes careful organization for multiple locks to be used while preventing deadlock.
+Easing composability is another feature higher-level mutual-exclusion mechanisms offer.
+
+
+\subsection{Synchronization}
+
+Synchronization enforces relative ordering of execution, and synchronization tools provide numerous mechanisms to establish these timing relationships.
+Low-level synchronization primitives offer good performance and flexibility at the cost of ease of use.
+Higher-level mechanisms often simplify usage by adding better coupling between synchronization and data (\eg message passing), or by offering a simpler solution to otherwise involved challenges, \eg a barrier lock.
 As mentioned above, synchronization can be expressed as guaranteeing that event \textit{X} always happens before \textit{Y}.
-Most of the time, synchronization happens within a critical section, where threads must acquire mutual-exclusion in a certain order.
-However, it may also be desirable to guarantee that event \textit{Z} does not occur between \textit{X} and \textit{Y}.
-Not satisfying this property is called \textbf{barging}.
-For example, where event \textit{X} tries to effect event \textit{Y} but another thread acquires the critical section and emits \textit{Z} before \textit{Y}.
-The classic example is the thread that finishes using a resource and unblocks a thread waiting to use the resource, but the unblocked thread must compete to acquire the resource.
+Often synchronization is used to order access to a critical section, \eg ensuring the next kind of thread to enter a critical section is a reader thread.
+If a writer thread is scheduled for next access, but another reader thread acquires the critical section first, the reader has \newterm{barged}.
+Barging can result in staleness/freshness problems, where a reader barges ahead of a writer and reads temporally stale data, or a writer barges ahead of another writer, overwriting data with a fresh value and preventing the previous value from ever being read.
 Preventing or detecting barging is an involved challenge with low-level locks, which can be made much easier by higher-level constructs.
-This challenge is often split into two different methods, barging avoidance and barging prevention.
-Algorithms that use flag variables to detect barging threads are said to be using barging avoidance, while algorithms that baton-pass locks~\cite{Andrews89} between threads instead of releasing the locks are said to be using barging prevention.
+This challenge is often split into two different approaches, barging avoidance and barging prevention.
+Algorithms that allow a barger to enter but divert it until later use \newterm{barging avoidance}, while algorithms that preclude a barger from entering during synchronization in the critical section, \eg by baton-passing locks~\cite{Andrews89} among threads instead of releasing them, use \newterm{barging prevention}.
 
 
