# Changes in /[fc263b5:34ca532]

Files: 6 edited
• ## doc/bibliography/pl.bib

```diff
 	year        = 2008,
 	pages       = {8-15},
 }
+@article{Joung00,
+	author      = {Joung, Yuh-Jzer},
+	title       = {Asynchronous group mutual exclusion},
+	journal     = {Distributed Computing},
+	year        = {2000},
+	month       = {Nov},
+	volume      = {13},
+	number      = {4},
+	pages       = {189--206},
+}
```
• ## doc/papers/concurrency/Paper.tex

```diff
 \begin{cfa}
 thread Adder {
-	int * row, cols, * subtotal;			$\C{// communication}$
+	int * row, cols, & subtotal;			$\C{// communication}$
 };
 void ?{}( Adder & adder, int row[], int cols, int & subtotal ) {
-	adder.[ row, cols, subtotal ] = [ row, cols, &subtotal ];
+	adder.[ row, cols, &subtotal ] = [ row, cols, &subtotal ];
 }
 void main( Adder & adder ) with( adder ) {
-	*subtotal = 0;
+	subtotal = 0;
 	for ( int c = 0; c < cols; c += 1 ) {
-		*subtotal += row[c];
+		subtotal += row[c];
 	}
 }
 \end{cfa}
 ...
 Uncontrolled non-deterministic execution is meaningless.
-To reestablish meaningful execution requires mechanisms to reintroduce determinism (control non-determinism), called synchronization and mutual exclusion, where \newterm{synchronization} is a timing relationship among threads and \newterm{mutual exclusion} is an access-control mechanism on data shared by threads.
+To reestablish meaningful execution requires mechanisms to reintroduce determinism (control non-determinism), called synchronization and mutual exclusion, where synchronization is a timing relationship among threads and mutual exclusion is an access-control mechanism on data shared by threads.
 Since many deterministic challenges appear with the use of mutable shared state, some languages/libraries disallow it (Erlang~\cite{Erlang}, Haskell~\cite{Haskell}, Akka~\cite{Akka} (Scala)).
 In these paradigms, interaction among concurrent objects is performed by stateless message-passing~\cite{Thoth,Harmony,V-Kernel} or other paradigms closely related to networking concepts (\eg channels~\cite{CSP,Go}).
-\subsection{Basics}
-Non-determinism requires concurrent systems to offer support for mutual-exclusion and synchronization.
-Mutual-exclusion is the concept that only a fixed number of threads can access a critical section at any given time, where a critical section is a group of instructions on an associated portion of data that requires the restricted access.
```
```diff
-On the other hand, synchronization enforces relative ordering of execution and synchronization tools provide numerous mechanisms to establish timing relationships among threads.
-\subsubsection{Mutual-Exclusion}
-As mentioned above, mutual-exclusion is the guarantee that only a fix number of threads can enter a critical section at once.
+\subsection{Mutual Exclusion}
+A group of instructions manipulating a specific instance of shared data that must be performed atomically is called an (individual) \newterm{critical-section}~\cite{Dijkstra65}.
+A generalization is a \newterm{group critical-section}~\cite{Joung00}, where multiple tasks with the same session may use the resource simultaneously, but different sessions may not use the resource simultaneously.
+The readers/writer problem~\cite{Courtois71} is an instance of a group critical-section, where readers have the same session and all writers have a unique session.
+\newterm{Mutual exclusion} enforces the correct number of threads are using a critical section at the same time.
 However, many solutions exist for mutual exclusion, which vary in terms of performance, flexibility and ease of use.
-Methods range from low-level locks, which are fast and flexible but require significant attention to be correct, to higher-level concurrency techniques, which sacrifice some performance in order to improve ease of use.
-Ease of use comes by either guaranteeing some problems cannot occur (\eg being deadlock free) or by offering a more explicit coupling between data and corresponding critical section.
+Methods range from low-level locks, which are fast and flexible but require significant attention for correctness, to higher-level concurrency techniques, which sacrifice some performance to improve ease of use.
+Ease of use comes by either guaranteeing some problems cannot occur (\eg deadlock free), or by offering a more explicit coupling between shared data and critical section.
```
```diff
 For example, the \CC @std::atomic@ offers an easy way to express mutual-exclusion on a restricted set of operations (\eg reading/writing large types atomically).
-Another challenge with low-level locks is composability.
-Locks have restricted composability because it takes careful organizing for multiple locks to be used while preventing deadlocks.
-Easing composability is another feature higher-level mutual-exclusion mechanisms often offer.
-\subsubsection{Synchronization}
-As with mutual-exclusion, low-level synchronization primitives often offer good performance and good flexibility at the cost of ease of use.
-Again, higher-level mechanisms often simplify usage by adding either better coupling between synchronization and data (\eg message passing) or offering a simpler solution to otherwise involved challenges.
+However, a significant challenge with (low-level) locks is composability because it takes careful organization for multiple locks to be used while preventing deadlock.
+Easing composability is another feature higher-level mutual-exclusion mechanisms offer.
+\subsection{Synchronization}
+Synchronization enforces relative ordering of execution, and synchronization tools provide numerous mechanisms to establish these timing relationships.
+Low-level synchronization primitives offer good performance and flexibility at the cost of ease of use.
+Higher-level mechanisms often simplify usage by adding better coupling between synchronization and data (\eg message passing), or offering a simpler solution to otherwise involved challenges, \eg barrier lock.
 As mentioned above, synchronization can be expressed as guaranteeing that event \textit{X} always happens before \textit{Y}.
 Most of the time, synchronization happens within a critical section, where threads must acquire mutual-exclusion in a certain order.
 However, it may also be desirable to guarantee that event \textit{Z} does not occur between \textit{X} and \textit{Y}.
```
```diff
-Not satisfying this property is called \textbf{barging}.
-For example, where event \textit{X} tries to effect event \textit{Y} but another thread acquires the critical section and emits \textit{Z} before \textit{Y}.
+The classic example is the thread that finishes using a resource and unblocks a thread waiting to use the resource, but the unblocked thread must compete to acquire the resource.
+Often synchronization is used to order access to a critical section, \eg ensuring the next kind of thread to enter a critical section is a reader thread.
+If a writer thread is scheduled for next access, but another reader thread acquires the critical section first, the reader has \newterm{barged}.
 Barging can result in staleness/freshness problems, where a reader barges ahead of a write and reads temporally stale data, or a writer barges ahead of another writer overwriting data with a fresh value preventing the previous value from having an opportunity to be read.
 Preventing or detecting barging is an involved challenge with low-level locks, which can be made much easier by higher-level constructs.
-This challenge is often split into two different methods, barging avoidance and barging prevention.
-Algorithms that use flag variables to detect barging threads are said to be using barging avoidance, while algorithms that baton-pass locks~\cite{Andrews89} between threads instead of releasing the locks are said to be using barging prevention.
+This challenge is often split into two different approaches, barging avoidance and barging prevention.
+Algorithms that allow a barger but divert it until later are avoiding the barger, while algorithms that preclude a barger from entering during synchronization in the critical section prevent the barger completely, \eg by baton-passing locks~\cite{Andrews89} between threads instead of releasing the locks.
```
• ## doc/papers/general/Paper.tex

```diff
 }
 \end{cfa}
-Since @pair( T *, T * )@ is a concrete type, there are no implicit parameters passed to @lexcmp@, so the generated code is identical to a function written in standard C using @void *@, yet the \CFA version is type-checked to ensure the fields of both pairs and the arguments to the comparison function match in type.
+Since @pair( T *, T * )@ is a concrete type, there are no implicit parameters passed to @lexcmp@, so the generated code is identical to a function written in standard C using @void *@, yet the \CFA version is type-checked to ensure the members of both pairs and the arguments to the comparison function match in type.
 Another useful pattern enabled by reused dtype-static type instantiations is zero-cost \newterm{tag-structures}.
 ...
 \subsection{Member Access}
-It is also possible to access multiple fields from a single expression using a \newterm{member-access}.
+It is also possible to access multiple members from a single expression using a \newterm{member-access}.
```
```diff
 The result is a single tuple-valued expression whose type is the tuple of the types of the members, \eg:
 \begin{cfa}
 forall( dtype T0, dtype T1 | sized(T0) | sized(T1) ) struct _tuple2 {
-	T0 field_0;  T1 field_1;			$\C{// generated before the first 2-tuple}$
+	T0 member_0;  T1 member_1;			$\C{// generated before the first 2-tuple}$
 };
 _tuple2(int, int) f() {
 	_tuple2(double, double) x;
 forall( dtype T0, dtype T1, dtype T2 | sized(T0) | sized(T1) | sized(T2) ) struct _tuple3 {
-	T0 field_0;  T1 field_1;  T2 field_2;	$\C{// generated before the first 3-tuple}$
+	T0 member_0;  T1 member_1;  T2 member_2;	$\C{// generated before the first 3-tuple}$
 };
 	_tuple3(int, double, int) y;
 ...
 \begin{comment}
-Since tuples are essentially structures, tuple indexing expressions are just field accesses:
+Since tuples are essentially structures, tuple indexing expressions are just member accesses:
 \begin{cfa}
 void f(int, [double, char]);
 _tuple2(int, double) x;
-x.field_0+x.field_1;
-printf("%d %g\n", x.field_0, x.field_1);
-f(x.field_0, (_tuple2){ x.field_1, 'z' });
+x.member_0+x.member_1;
+printf("%d %g\n", x.member_0, x.member_1);
+f(x.member_0, (_tuple2){ x.member_1, 'z' });
 \end{cfa}
-Note that due to flattening, @x@ used in the argument position is converted into the list of its fields.
+Note that due to flattening, @x@ used in the argument position is converted into the list of its members.
 In the call to @f@, the second and third argument components are structured into a tuple argument.
 Similarly, tuple member expressions are recursively expanded into a list of member access expressions.
 The various kinds of tuple assignment, constructors, and destructors generate GNU C statement expressions.
```
```diff
-A variable is generated to store the value produced by a statement expression, since its fields may need to be constructed with a non-trivial constructor and it may need to be referred to multiple time, \eg in a unique expression.
+A variable is generated to store the value produced by a statement expression, since its members may need to be constructed with a non-trivial constructor and it may need to be referred to multiple times, \eg in a unique expression.
 The use of statement expressions allows the translator to arbitrarily generate additional temporary variables as needed, but binds the implementation to a non-standard extension of the C language.
 However, there are other places where the \CFA translator makes use of GNU C extensions, such as its use of nested functions, so this restriction is not new.
 ...
 Heterogeneous data is often aggregated into a structure/union.
-To reduce syntactic noise, \CFA provides a @with@ statement (see Pascal~\cite[\S~4.F]{Pascal}) to elide aggregate field-qualification by opening a scope containing the field identifiers.
+To reduce syntactic noise, \CFA provides a @with@ statement (see Pascal~\cite[\S~4.F]{Pascal}) to elide aggregate member-qualification by opening a scope containing the member identifiers.
 \begin{cquote}
 \vspace*{-\baselineskip}%???
 ...
 The type must be an aggregate type.
 (Enumerations are already opened.)
-The object is the implicit qualifier for the open structure-fields.
+The object is the implicit qualifier for the open structure-members.
 All expressions in the expression list are open in parallel within the compound statement, which is different from Pascal, which nests the openings from left to right.
```
```diff
-The difference between parallel and nesting occurs for fields with the same name and type:
+The difference between parallel and nesting occurs for members with the same name and type:
 \begin{cfa}
-struct S { int i; int j; double m; } s, w;	$\C{// field i has same type in structure types S and T}$
+struct S { int i; int j; double m; } s, w;	$\C{// member i has same type in structure types S and T}$
 struct T { int i; int k; int m; } t, w;
 with ( s, t ) {						$\C{// open structure variables s and t in parallel}$
 ...
 For parallel semantics, both @s.i@ and @t.i@ are visible, so @i@ is ambiguous without qualification; for nested semantics, @t.i@ hides @s.i@, so @i@ implies @t.i@.
-\CFA's ability to overload variables means fields with the same name but different types are automatically disambiguated, eliminating most qualification when opening multiple aggregates.
+\CFA's ability to overload variables means members with the same name but different types are automatically disambiguated, eliminating most qualification when opening multiple aggregates.
 Qualification or a cast is used to disambiguate.
 \begin{cfa}
 void ?{}( S & s, int i ) with ( s ) {		$\C{// constructor}$
-	s.i = i;  j = 3;  m = 5.5;			$\C{// initialize fields}$
+	s.i = i;  j = 3;  m = 5.5;			$\C{// initialize members}$
 }
 \end{cfa}
 \lstMakeShortInline@%
 \end{cquote}
 ...
-The only exception is bit field specification, which always appear to the right of the base type.
+The only exception is bit-field specification, which always appears to the right of the base type.
 % Specifically, the character @*@ is used to indicate a pointer, square brackets @[@\,@]@ are used to represent an array or function return value, and parentheses @()@ are used to indicate a function parameter.
 However, unlike C, \CFA type declaration tokens are distributed across all variables in the declaration list.
```
```diff
 	// pointer to array of 5 doubles
-	// common bit field syntax
+	// common bit-field syntax
 ...
 \subsection{Type Nesting}
-Nested types provide a mechanism to organize associated types and refactor a subset of fields into a named aggregate (\eg sub-aggregates @name@, @address@, @department@, within aggregate @employe@).
+Nested types provide a mechanism to organize associated types and refactor a subset of members into a named aggregate (\eg sub-aggregates @name@, @address@, @department@, within aggregate @employe@).
 Java nested types are dynamic (apply to objects), \CC are static (apply to the \lstinline[language=C++]@class@), and C hoists (refactors) nested types into the enclosing scope, meaning there is no need for type qualification.
 Since \CFA is not object-oriented, adopting dynamic scoping does not make sense;
-instead \CFA adopts \CC static nesting, using the field-selection operator ``@.@'' for type qualification, as does Java, rather than the \CC type-selection operator ``@::@'' (see Figure~\ref{f:TypeNestingQualification}).
+instead \CFA adopts \CC static nesting, using the member-selection operator ``@.@'' for type qualification, as does Java, rather than the \CC type-selection operator ``@::@'' (see Figure~\ref{f:TypeNestingQualification}).
 \begin{figure}
 \centering
 ...
 \end{cfa}
-@VLA@ is a \newterm{managed type}\footnote{A managed type affects the runtime environment versus a self-contained type.}: a type requiring a non-trivial constructor or destructor, or with a field of a managed type.
+@VLA@ is a \newterm{managed type}\footnote{A managed type affects the runtime environment versus a self-contained type.}: a type requiring a non-trivial constructor or destructor, or with a member of a managed type.
 A managed type is implicitly constructed at allocation and destructed at deallocation to ensure proper interaction with runtime resources, in this case, the @data@ array in the heap.
```
```diff
 For details of the code-generation placement of implicit constructor and destructor calls among complex executable statements see~\cite[\S~2.2]{Schluntz17}.
 \CFA constructors may be explicitly called, like Java, and destructors may be explicitly called, like \CC.
-Explicit calls to constructors double as a \CC-style \emph{placement syntax}, useful for construction of member fields in user-defined constructors and reuse of existing storage allocations.
+Explicit calls to constructors double as a \CC-style \emph{placement syntax}, useful for construction of members in user-defined constructors and reuse of existing storage allocations.
 Like the other operators in \CFA, there is a concise syntax for constructor/destructor function calls:
 \begin{cfa}
 ...
 For compatibility with C, a copy constructor from the first union member type is also defined.
 For @struct@ types, each of the four functions are implicitly defined to call their corresponding functions on each member of the struct.
-To better simulate the behaviour of C initializers, a set of \newterm{field constructors} is also generated for structures.
+To better simulate the behaviour of C initializers, a set of \newterm{member constructors} is also generated for structures.
 A constructor is generated for each non-empty prefix of a structure's member-list to copy-construct the members passed as parameters and default-construct the remaining members.
-To allow users to limit the set of constructors available for a type, when a user declares any constructor or destructor, the corresponding generated function and all field constructors for that type are hidden from expression resolution;
+To allow users to limit the set of constructors available for a type, when a user declares any constructor or destructor, the corresponding generated function and all member constructors for that type are hidden from expression resolution;
 similarly, the generated default constructor is hidden upon declaration of any constructor.
```
```diff
 These semantics closely mirror the rule for implicit declaration of constructors in \CC~\cite[p.~186]{ANSI98:C++}.
 ...
 C provides variadic functions through @va_list@ objects, but the programmer is responsible for managing the number of arguments and their types, so the mechanism is type unsafe.
 ...
 KW-C~\cite{Buhr94a}, a predecessor of \CFA, introduced tuples to C as an extension of the C syntax, taking much of its inspiration from SETL.
-The main contributions of that work were adding MRVF, tuple mass and multiple assignment, and record-field access.
+The main contributions of that work were adding MRVF, tuple mass and multiple assignment, and record-member access.
 \CCeleven introduced @std::tuple@ as a library variadic template structure.
 Tuples are a generalization of @std::pair@, in that they allow for arbitrary length, fixed-size aggregation of heterogeneous values.
```
• ## src/Parser/parser.yy

```diff
 // Created On       : Sat Sep  1 20:22:55 2001
 // Last Modified By : Peter A. Buhr
-// Last Modified On : Thu May 24 16:49:58 2018
-// Update Count     : 3367
+// Last Modified On : Thu May 24 18:11:59 2018
+// Update Count     : 3369
 //
 ...
 declaration_list_opt:					// used at beginning of switch statement
 	pop pop
 	// empty
 		{ $$ = nullptr; }
 	| declaration_list
 ...
 		{ $$ = DeclarationNode::newTuple( $3 ); }
 	| '[' push cfa_parameter_list pop ',' push cfa_abstract_parameter_list pop ']'
-		// To obtain LR(1 ), the last cfa_abstract_parameter_list is added into this flattened rule to lookahead to the
-		// ']'.
+		// To obtain LR(1 ), the last cfa_abstract_parameter_list is added into this flattened rule to lookahead to the ']'.
 		{ $$ = DeclarationNode::newTuple( $3->appendList( $7 ) ); }
 	;
 ...
 	TRAIT no_attr_identifier_or_type_name '(' push type_parameter_list pop ')' '{' '}'
 		{ $$ = DeclarationNode::newTrait( $2, $5, 0 ); }
-	| TRAIT no_attr_identifier_or_type_name '(' push type_parameter_list pop ')' '{' { typedefTable.enterScope(); } trait_declaration_list '}'
+	| TRAIT no_attr_identifier_or_type_name '(' push type_parameter_list pop ')' '{' push trait_declaration_list '}'
 		{ $$ = DeclarationNode::newTrait( $2, $5, $10 ); }
 	;
```
• ## src/libcfa/concurrency/alarm.c

```diff
 // Created On       : Fri Jun 2 11:31:25 2017
 // Last Modified By : Peter A. Buhr
-// Last Modified On : Mon Apr  9 13:36:18 2018
-// Update Count     : 61
+// Last Modified On : Fri May 25 06:25:47 2018
+// Update Count     : 67
 //
 ...
 void __kernel_set_timer( Duration alarm ) {
-	verifyf(alarm >= 1us || alarm == 0, "Setting timer to < 1us (%luns)", alarm.tv);
+	verifyf(alarm >= 1us || alarm == 0, "Setting timer to < 1us (%jins)", alarm.tv);
 	setitimer( ITIMER_REAL, &(itimerval){ alarm }, NULL );
 }
```