Changeset 251454a0


Timestamp:
May 25, 2018, 9:42:25 AM
Author:
Peter A. Buhr <pabuhr@…>
Branches:
ADT, aaron-thesis, arm-eh, ast-experimental, cleanup-dtors, deferred_resn, demangler, enum, forall-pointer-decay, jacob/cs343-translation, jenkins-sandbox, master, new-ast, new-ast-unique-expr, new-env, no_list, persistent-indexer, pthread-emulation, qualifiedEnum, with_gc
Children:
b048dc3
Parents:
9b15e05
git-author:
Peter A. Buhr <pabuhr@…> (05/25/18 09:41:16)
git-committer:
Peter A. Buhr <pabuhr@…> (05/25/18 09:42:25)
Message:

more writing

Location:
doc
Files:
2 edited

  • doc/bibliography/pl.bib

    r9b15e05 r251454a0
         year        = 2008,
         pages       = {8-15},
    +}
    +
    +@article{Joung00,
    +    author      = {Joung, Yuh-Jzer},
    +    title       = {Asynchronous group mutual exclusion},
    +    journal     = {Distributed Computing},
    +    year        = {2000},
    +    month       = {Nov},
    +    volume      = {13},
    +    number      = {4},
    +    pages       = {189--206},
     }
     
  • doc/papers/concurrency/Paper.tex

    r9b15e05 r251454a0
     \begin{cfa}
     thread Adder {
    -    int * row, cols, * subtotal;                        $\C{// communication}$
    +    int * row, cols, & subtotal;                        $\C{// communication}$
     };
     void ?{}( Adder & adder, int row[], int cols, int & subtotal ) {
    -    adder.[ row, cols, subtotal ] = [ row, cols, &subtotal ];
    +    adder.[ row, cols, &subtotal ] = [ row, cols, &subtotal ];
     }
     void main( Adder & adder ) with( adder ) {
    -    *subtotal = 0;
    +    subtotal = 0;
         for ( int c = 0; c < cols; c += 1 ) {
    -        *subtotal += row[c];
    +        subtotal += row[c];
         }
     }
     
     
     Uncontrolled non-deterministic execution is meaningless.
    -To reestablish meaningful execution requires mechanisms to reintroduce determinism (control non-determinism), called synchronization and mutual exclusion, where \newterm{synchronization} is a timing relationship among threads and \newterm{mutual exclusion} is an access-control mechanism on data shared by threads.
    +To reestablish meaningful execution requires mechanisms to reintroduce determinism (control non-determinism), called synchronization and mutual exclusion, where synchronization is a timing relationship among threads and mutual exclusion is an access-control mechanism on data shared by threads.
     Since many deterministic challenges appear with the use of mutable shared state, some languages/libraries disallow it (Erlang~\cite{Erlang}, Haskell~\cite{Haskell}, Akka~\cite{Akka} (Scala)).
     In these paradigms, interaction among concurrent objects is performed by stateless message-passing~\cite{Thoth,Harmony,V-Kernel} or other paradigms closely related to networking concepts (\eg channels~\cite{CSP,Go}).
     
     
     
    -\subsection{Basics}
    -
    -Non-determinism requires concurrent systems to offer support for mutual-exclusion and synchronization.
    -Mutual-exclusion is the concept that only a fixed number of threads can access a critical section at any given time, where a critical section is a group of instructions on an associated portion of data that requires the restricted access.
    -On the other hand, synchronization enforces relative ordering of execution and synchronization tools provide numerous mechanisms to establish timing relationships among threads.
    -
    -
    -\subsubsection{Mutual-Exclusion}
    -
    -As mentioned above, mutual-exclusion is the guarantee that only a fixed number of threads can enter a critical section at once.
    +\subsection{Mutual Exclusion}
    +
    +A group of instructions manipulating a specific instance of shared data that must be performed atomically is called an (individual) \newterm{critical-section}~\cite{Dijkstra65}.
    +A generalization is a \newterm{group critical-section}~\cite{Joung00}, where multiple tasks with the same session may use the resource simultaneously, but different sessions may not use the resource simultaneously.
    +The readers/writer problem~\cite{Courtois71} is an instance of a group critical-section, where all readers have the same session and each writer has a unique session.
    +\newterm{Mutual exclusion} enforces that the correct number of threads uses a critical section at the same time.
    +
     However, many solutions exist for mutual exclusion, which vary in terms of performance, flexibility and ease of use.
    -Methods range from low-level locks, which are fast and flexible but require significant attention to be correct, to higher-level concurrency techniques, which sacrifice some performance in order to improve ease of use.
    -Ease of use comes by either guaranteeing some problems cannot occur (\eg being deadlock free) or by offering a more explicit coupling between data and corresponding critical section.
    +Methods range from low-level locks, which are fast and flexible but require significant attention for correctness, to higher-level concurrency techniques, which sacrifice some performance to improve ease of use.
    +Ease of use comes by either guaranteeing some problems cannot occur (\eg deadlock freedom), or by offering a more explicit coupling between shared data and critical section.
     For example, the \CC @std::atomic<T>@ offers an easy way to express mutual-exclusion on a restricted set of operations (\eg reading/writing large types atomically).
    -Another challenge with low-level locks is composability.
    -Locks have restricted composability because it takes careful organizing for multiple locks to be used while preventing deadlocks.
    -Easing composability is another feature higher-level mutual-exclusion mechanisms often offer.
    -
    -
    -\subsubsection{Synchronization}
    -
    -As with mutual-exclusion, low-level synchronization primitives often offer good performance and good flexibility at the cost of ease of use.
    -Again, higher-level mechanisms often simplify usage by adding either better coupling between synchronization and data (\eg message passing) or offering a simpler solution to otherwise involved challenges.
    +However, a significant challenge with (low-level) locks is composability because it takes careful organization for multiple locks to be used while preventing deadlock.
    +Easing composability is another feature higher-level mutual-exclusion mechanisms offer.
    +
    +
    +\subsection{Synchronization}
    +
    +Synchronization enforces relative ordering of execution, and synchronization tools provide numerous mechanisms to establish these timing relationships.
    +Low-level synchronization primitives offer good performance and flexibility at the cost of ease of use.
    +Higher-level mechanisms often simplify usage by adding better coupling between synchronization and data (\eg message passing), or offering a simpler solution to otherwise involved challenges, \eg a barrier lock.
     As mentioned above, synchronization can be expressed as guaranteeing that event \textit{X} always happens before \textit{Y}.
    -Most of the time, synchronization happens within a critical section, where threads must acquire mutual-exclusion in a certain order.
    -However, it may also be desirable to guarantee that event \textit{Z} does not occur between \textit{X} and \textit{Y}.
    -Not satisfying this property is called \textbf{barging}.
    -For example, where event \textit{X} tries to effect event \textit{Y} but another thread acquires the critical section and emits \textit{Z} before \textit{Y}.
    -The classic example is the thread that finishes using a resource and unblocks a thread waiting to use the resource, but the unblocked thread must compete to acquire the resource.
    +Often synchronization is used to order access to a critical section, \eg ensuring the next kind of thread to enter a critical section is a reader thread.
    +If a writer thread is scheduled for next access, but another reader thread acquires the critical section first, the reader has \newterm{barged}.
    +Barging can result in staleness/freshness problems, where a reader barges ahead of a writer and reads temporally stale data, or a writer barges ahead of another writer, overwriting data with a fresh value and preventing the previous value from ever being read.
     Preventing or detecting barging is an involved challenge with low-level locks, which can be made much easier by higher-level constructs.
    -This challenge is often split into two different methods, barging avoidance and barging prevention.
    -Algorithms that use flag variables to detect barging threads are said to be using barging avoidance, while algorithms that baton-pass locks~\cite{Andrews89} between threads instead of releasing the locks are said to be using barging prevention.
    +This challenge is often split into two different approaches, barging avoidance and barging prevention.
    +Algorithms that allow a barger but divert it until later are avoiding the barger, while algorithms that preclude a barger from entering during synchronization in the critical section prevent the barger completely.
    +For example, algorithms that baton-pass locks~\cite{Andrews89} between threads instead of releasing the locks are said to be using barging prevention.
    12681264