Changeset 1da7397

Mar 27, 2021, 6:04:14 PM (3 years ago)
Peter A. Buhr <pabuhr@…>
ADT, arm-eh, ast-experimental, enum, forall-pointer-decay, jacob/cs343-translation, master, new-ast-unique-expr, pthread-emulation, qualifiedEnum
2644610 (diff), f8a7fed (diff)
Note: this is a merge changeset, the changes displayed below correspond to the merge itself.
Use the (diff) links above to see all the changes relative to each parent.

fix conflict

6 added
10 edited


  • doc/theses/andrew_beach_MMath/features.tex

    r2644610 r1da7397  
    33This chapter covers the design and user interface of the \CFA
    4 exception-handling mechanism.
    6 \section{Virtuals}
    7 Virtual types and casts are not part of the exception system nor are they
    8 required for an exception system. But an object-oriented style hierarchy is a
    9 great way of organizing exceptions so a minimal virtual system has been added
    10 to \CFA.
    12 The pattern of a simple hierarchy, borrowed from object-oriented
    13 programming, was chosen for several reasons.
    14 The first is that it allows new exceptions to be added in user code
    15 and in libraries independently of each other. Another is that it allows for
    16 different levels of exception grouping (all exceptions, all IO exceptions or
    17 a particular IO exception). It also provides a simple way of passing
    18 data back and forth across the throw.
    20 Virtual types and casts are not required for a basic exception-system but are
    21 useful for advanced exception features. However, \CFA is not object-oriented so
    22 there is no obvious concept of virtuals. Hence, to create advanced exception
    23 features for this work, I needed to design and implement a virtual-like
    24 system for \CFA.
    26 % NOTE: Maybe we should put less of the rationale here.
    27 Object-oriented languages often organize exceptions into a simple hierarchy,
    28 \eg Java.
     4exception-handling mechanism (EHM). % or exception system.
     6% We should cover what is an exception handling mechanism and what is an
     7% exception before this. Probably in the introduction. Some of this could
     8% move there.
     9\paragraph{Raise / Handle}
     10An exception operation has two main parts: raise and handle.
     11These are the two parts that the user will write themselves and so
     12might be the only two pieces of the EHM that have any syntax.
     13These terms are sometimes also known as throw and catch but this work uses
     14throw/catch as a particular kind of raise/handle.
     17The raise is the starting point for exception handling.
     18Some well-known examples include the throw statements of \Cpp and Java and
     19the raise statement from Python.
     21For this overview a raise does nothing more than kick off the handling of an
     22exception, which is called raising the exception. This is inexact but close
     23enough for the broad strokes of the overview.
     26The purpose of most exception operations is to run some sort of handler that
     27contains user code.
     28The try statement of \Cpp illustrates the common features.
     29Handlers have three common features: a region of code they apply to, an
     30exception label that describes what exceptions they handle and code to run
     31when they handle an exception.
     32Each handler can handle exceptions raised in that region that match their
     33exception label. Different EHMs will have different rules to pick a handler
     34if multiple handlers could be used such as ``best match" or ``first found".
     37After an exception is raised comes what is usually the biggest step for the
     38EHM, finding and setting up the handler. This can be broken up into three
     39different tasks: searching for a handler, matching against the handler and
     40installing the handler.
     42First the EHM must search for possible handlers that could be used to handle
     43the exception. Searching is usually independent of the exception that was
     44thrown and instead depends on the call stack: the current function, then its
     45caller, and so on down the stack.
     47Second it must match the exception with each handler to see which one is the
     48best match and hence which one should be used to handle the exception.
     49In languages where the best match is the first match these two are often
     50intertwined; a match check is performed immediately after the search finds
     51a possible handler.
     53Third, after a handler is chosen it must be made ready to run.
     54What this actually involves can vary widely to fit with the rest of the
     55design of the EHM. The installation step might be trivial or it could be
     56the most expensive step in handling an exception. The latter tends to be the
     57case when stack unwinding is involved.
     59As an alternate third step if no appropriate handler is found then some sort
     60of recovery has to be performed. This is only required with unchecked
     61exceptions as checked exceptions can promise that a handler is found.
     62This is also a form of installing a handler, but the handler is a special
     63default that may be installed differently.
     66In \CFA the EHM uses a hierarchical system to organise its exceptions.
     67This strategy is borrowed from object-oriented languages where the
     68exception hierarchy is a natural extension of the object hierarchy.
     70Consider the following hierarchy of exceptions:
    45 The hierarchy provides the ability to handle an exception at different degrees
    46 of specificity (left to right). Hence, it is possible to catch a more general
    47 exception-type in higher-level code where the implementation details are
    48 unknown, which reduces tight coupling to the lower-level implementation.
    49 Otherwise, low-level code changes require higher-level code changes, \eg,
    50 changing from raising @underflow@ to @overflow@ at the low level means changing
    51 the matching catch at the high level versus catching the general @arithmetic@
    52 exception. In detail, each virtual type may have a parent and can have any
    53 number of children. A type's descendants are its children and its children's
    54 descendants. A type may not be its own descendant.
    56 The exception hierarchy allows a handler (@catch@ clause) to match multiple
    57 exceptions, \eg a base-type handler catches both base and derived
    58 exception-types.
    59 \begin{cfa}
    60 try {
    61         ...
    62 } catch(arithmetic &) {
    63         ... // handle arithmetic, underflow, overflow, zerodivide
    64 }
    65 \end{cfa}
    66 Most exception mechanisms perform a linear search of the handlers and select
    67 the first matching handler, so the order of handlers is now important because
    68 matching is many to one.
    70 Each virtual type needs an associated virtual table. A virtual table is a
    71 structure with fields for all the virtual members of a type. A virtual type has
    72 all the virtual members of its parent and can add more. It may also update the
    73 values of the virtual members and often does.
     88A handler labelled with any given exception can handle exceptions of that
     89type or any child type of that exception. The root of the exception hierarchy
     90(here \texttt{exception}) acts as a catch-all, leaf types catch single types
     91and the exceptions in the middle can be used to catch different groups of
     92related exceptions.
     94This system has some notable advantages, such as multiple levels of grouping,
     95the ability for libraries to add new exception types and the isolation
     96between different sub-hierarchies. So the design was adapted for a
     97non-object-oriented language.
     99% Could I cite the rationale for the Python IO exception rework?
     102After the handler has finished, the entire exception operation has to complete
     103and continue executing somewhere else. This step is usually very simple
     104both logically and in its implementation as the installation of the handler
     105usually does the heavy lifting.
     107The EHM can return control to many different places.
     108However, the most common is after the handler definition and the next most
     109common is after the raise.
     112For effective exception handling, additional information is usually required
     113as this base model only communicates the exception's identity. Common
     114additional methods of communication are putting fields on an exception and
     115allowing a handler to access the lexical scope it is defined in (usually
     116a function's local variables).
     118\paragraph{Other Features}
     119Any given exception handling mechanism is free to add other features on top
     120of this. This is an overview of the base that all EHMs use but it is not an
     121exhaustive list of everything an EHM can do.
     124Virtual types and casts are not part of the exception system nor are they
     125required for an exception system. But an object-oriented style hierarchy is a
     126great way of organizing exceptions so a minimal virtual system has been added
     127to \CFA.
     129The virtual system supports multiple ``trees" of types. Each tree is
     130a simple hierarchy with a single root type. Each type in a tree has exactly
     131one parent (except for the root type, which has zero parents) and any
     132number of children.
     133Any type that belongs to any of these trees is called a virtual type.
     135% A type's ancestors are its parent and its parent's ancestors.
     136% The root type has no ancestors.
     137% A type's descendants are its children and its children's descendants.
     139Every virtual type also has a list of virtual members. Children inherit
     140their parent's list of virtual members but may add new members to it.
     141It is important to note that these are virtual members, not virtual methods.
     142However, as function pointers are allowed, they can be used to mimic virtual
     143methods as well.
     145The unique id for the virtual type and all the virtual members are combined
     146into a virtual table type. Each virtual type has a pointer to a virtual table
     147as a hidden field.
     149Up until this point the virtual system is a lot like ones found in
     150object-oriented languages but this is where they diverge. Objects encapsulate a
     151single set of behaviours in each type, universally across the entire program,
     152and indeed all programs that use that type definition. In this sense the
     153types are ``closed" and cannot be altered.
     155However in \CFA types do not encapsulate any behaviour. Traits are local and
     156types can begin to satisfy a trait, stop satisfying a trait or satisfy the same
     157trait in a different way with each new definition. In this sense they are
     158``open" as they can change at any time. This means it is impossible to pick
     159a single set of functions that represent the type.
     161So we don't try to have a single value. The user can define virtual tables
     162which are filled in at their declaration and given a name. Anywhere you can
     163see that name you can use that virtual table; even if it is defined locally
     164inside a function, although in that case you must respect its lifetime.
     166An object of a virtual type is ``bound" to a virtual table instance which
     167sets the virtual members for that object. The virtual members can be accessed
     168through the object.
    75170While much of the virtual infrastructure is created, it is currently only used
    83178\Cpp syntax for special casts. Both the type of @EXPRESSION@ and @TYPE@ must be
    84179a pointer to a virtual type.
    85 The cast dynamically checks if the @EXPRESSION@ type is the same or a subtype
     180The cast dynamically checks if the @EXPRESSION@ type is the same or a sub-type
    86181of @TYPE@, and if true, returns a pointer to the
    87182@EXPRESSION@ object, otherwise it returns @0p@ (null pointer).
    108203The function @get_exception_vtable@ is actually a constant function.
    109 Recardless of the value passed in (including the null pointer) it should
     204Regardless of the value passed in (including the null pointer) it should
    110205return a reference to the virtual table instance for that type.
    111206The reason it is a function instead of a constant is that it makes type
    139234and their use will be detailed there.
    141 However all three of these traits can be trickly to use directly.
     236However all three of these traits can be tricky to use directly.
    142237There is a bit of repetition required but
    143238the largest issue is that the virtual table type is mangled and not in a user
    152247list will be passed to both types.
    153248In the current set-up the base name and the polymorphic arguments have to
    154 match so these macros can be used without losing flexability.
     249match so these macros can be used without losing flexibility.
    156251For example consider a function that is polymorphic over types that have a
    185280It is a dynamic, non-local goto. If a throw is successful then the stack will
    186281be unwound and control will (usually) continue in a different function on
    187 the call stack. They are commonly used when an error has occured and recovery
     282the call stack. They are commonly used when an error has occurred and recovery
    188283is impossible in the current function.
    197292The expression must return a reference to a termination exception, where the
    198 termination exception is any type that satifies @is_termination_exception@
     293termination exception is any type that satisfies @is_termination_exception@
    199294at the call site.
    200295Through \CFA's trait system the functions in the traits are passed into the
    204299The throw will copy the provided exception into managed memory. It is the
    205 user's responcibility to ensure the original exception is cleaned up if the
     300user's responsibility to ensure the original exception is cleaned up if the
    206301stack is unwound (allocating it on the stack should be sufficient).
    222 When viewed on its own a try statement will simply exceute the statements in
     317When viewed on its own a try statement will simply execute the statements in
    223318@GUARDED_BLOCK@ and when those are finished the try statement finishes.
    232327Exception matching checks that the representation of the thrown exception-type is
    233328the same or a descendant type of the exception types in the handler clauses. If
    234 it is the same of a descendent of @EXCEPTION_TYPE@$_i$ then @NAME@$_i$ is
     329it is the same as or a descendant of @EXCEPTION_TYPE@$_i$ then @NAME@$_i$ is
    235330bound to a pointer to the exception and the statements in @HANDLER_BLOCK@$_i$
    236331are executed. If control reaches the end of the handler, the exception is
    255350closure will be taken from up the stack and executed, after which the throwing
    256351function will continue executing.
    257 These are most often used when an error occured and if the error is repaired
     352These are most often used when an error occurred and if the error is repaired
    258353then the function can continue.
    264359The semantics of the @throwResume@ statement are like the @throw@, but the
    265 expression has return a reference a type that satifies the trait
     360expression has to return a reference to a type that satisfies the trait
    266361@is_resumption_exception@. The assertions from this trait are available to
    267362the exception system while handling the exception.
    269 At runtime, no copies are made. As the stack is not unwound the exception and
     364At run-time, no copies are made. As the stack is not unwound the exception and
    270365any values on the stack will remain in scope while the resumption is handled.
    331426search and match the handler in the @catchResume@ clause. This will be
    332427called and placed on the stack on top of the try-block. The second throw then
    333 throws and will seach the same try block and put call another instance of the
     428throws and will search the same try block and call another instance of the
    334429same handler leading to an infinite loop.
    337432can form with multiple handlers and different exception types.
    339 To prevent all of these cases we mask sections of the stack, or equvilantly
    340 the try statements on the stack, so that the resumption seach skips over
     434To prevent all of these cases we mask sections of the stack, or equivalently
     435the try statements on the stack, so that the resumption search skips over
    341436them and continues with the next unmasked section of the stack.
    373468The symmetry with termination is why this pattern was picked. Other patterns,
    374469such as marking just the handlers that caught, also work but lack the
    375 symmetry whih means there is more to remember.
     470symmetry which means there is more to remember.
    377472\section{Conditional Catch}
    398 Note, catching @IOFailure@, checking for @f1@ in the handler, and reraising the
    399 exception if not @f1@ is different because the reraise does not examine any of
     493Note, catching @IOFailure@, checking for @f1@ in the handler, and re-raising the
     494exception if not @f1@ is different because the re-raise does not examine any of
    400495remaining handlers in the current try statement.
    448543@return@ that causes control to leave the finally block. Other ways to leave
    449544the finally block, such as a long jump or termination are much harder to check,
    450 and at best requiring additional run-time overhead, and so are mearly
     545and at best requiring additional run-time overhead, and so are merely
    506601this point.}
    508 The recommended way to avoid the abort is to handle the intial resumption
     603The recommended way to avoid the abort is to handle the initial resumption
    509604from the implicit join. If required you may put an explicit join inside a
    510605finally clause to disable the check and use the local
  • libcfa/src/concurrency/clib/

    r2644610 r1da7397  
    5858        this.themain = themain;
    5959        this.arg = arg;
    60         ((thread&)this){"C-thread", cl};
     60        (this.self){"C-thread", cl};
    6161        __thrd_start(this, main);
    102102        this.init = init;
    103103        this.arg = arg;
    104         ((thread&)this){"Processir Init"};
     104        (this.self){"Processir Init"};
    106106        // Don't use __thrd_start! just prep the context manually
    313313        ssize_t cfathread_write(int fildes, const void *buf, size_t nbyte) {
    314                 return cfa_write(fildes, buf, nbyte, CFA_IO_LAZY);
     314                // Use send rather than write for socket since it's faster
     315                return cfa_send(fildes, buf, nbyte, 0, CFA_IO_LAZY);
    315316        }
    336337        ssize_t cfathread_read(int fildes, void *buf, size_t nbyte) {
    337                 return cfa_read(fildes, buf, nbyte, CFA_IO_LAZY);
    338         }
    340 }
     338                // Use recv rather than read for socket since it's faster
     339                return cfa_recv(fildes, buf, nbyte, 0, CFA_IO_LAZY);
     340        }
  • libcfa/src/concurrency/invoke.h

    r2644610 r1da7397  
    226226                static inline $thread * volatile & ?`next ( $thread * this )  __attribute__((const)) {
    227                         return this->seqable.back;
     227                        return this->;
    228228                }
  • libcfa/src/concurrency/kernel.hfa

    r2644610 r1da7397  
    6969        // Cluster from which to get threads
    7070        struct cluster * cltr;
     72        // Id within the cluster
     73        unsigned cltr_id;
    7275        // Set to true to notify the processor should terminate
  • libcfa/src/concurrency/kernel/

    r2644610 r1da7397  
    487487                // Adjust the ready queue size
    488                 ready_queue_grow( cltr, target );
     488                this.cltr_id = ready_queue_grow( cltr, target );
    490490        // Unlock the RWlock
  • libcfa/src/concurrency/kernel_private.hfa

    r2644610 r1da7397  
    279279// Increase the width of the ready queue (number of lanes) by 4
    280 void ready_queue_grow  (struct cluster * cltr, int target);
     280unsigned ready_queue_grow  (struct cluster * cltr, int target);
  • libcfa/src/concurrency/locks.hfa

    r2644610 r1da7397  
    2121#include "bits/weakso_locks.hfa"
     22#include "containers/queueLockFree.hfa"
     24#include "thread.hfa"
    2326#include "time_t.hfa"
    2427#include "time.hfa"
     30// Semaphores
     32// '0-nary' semaphore
     33// Similar to a counting semaphore except the value of one is never reached
     34// as a consequence, a V() that would bring the value to 1 *spins* until
     35// a P consumes it
     36struct Semaphore0nary {
     37        __spinlock_t lock; // needed to protect
     38        mpsc_queue($thread) queue;
     41static inline bool P(Semaphore0nary & this, $thread * thrd) __attribute__((artificial));
     42static inline bool P(Semaphore0nary & this, $thread * thrd) {
     43        /* paranoid */ verify(!(thrd->;
     44        /* paranoid */ verify(!(thrd`next));
     46        push(this.queue, thrd);
     47        return true;
     50static inline bool P(Semaphore0nary & this) __attribute__((artificial));
     51static inline bool P(Semaphore0nary & this) {
     52    $thread * thrd = active_thread();
     53    P(this, thrd);
     54    park();
     55    return true;
     58static inline $thread * V(Semaphore0nary & this, const bool doUnpark = true) __attribute__((artificial));
     59static inline $thread * V(Semaphore0nary & this, const bool doUnpark = true) {
     60        $thread * next;
     61        lock(this.lock __cfaabi_dbg_ctx2);
     62                for (;;) {
     63                        next = pop(this.queue);
     64                        if (next) break;
     65                        Pause();
     66                }
     67        unlock(this.lock);
     69        if (doUnpark) unpark(next);
     70        return next;
     73// Wrapper used on top of any semaphore to avoid potential locking
     74struct BinaryBenaphore {
     75        volatile ssize_t counter;
     78static inline {
     79        void ?{}(BinaryBenaphore & this) { this.counter = 0; }
     80        void ?{}(BinaryBenaphore & this, zero_t) { this.counter = 0; }
     81        void ?{}(BinaryBenaphore & this, one_t ) { this.counter = 1; }
     83        // returns true if no blocking needed
     84        bool P(BinaryBenaphore & this) { return __atomic_fetch_sub(&this.counter, 1, __ATOMIC_SEQ_CST) > 0; }
     85        bool tryP(BinaryBenaphore & this) {
     86                ssize_t c = this.counter;
     87                return (c >= 1) && __atomic_compare_exchange_n(&this.counter, &c, c-1, false, __ATOMIC_SEQ_CST, __ATOMIC_RELAXED);
     88        }
     90        // returns true if notify needed
     91        bool V(BinaryBenaphore & this) {
     92                ssize_t c = 0;
     93                for () {
     94                        if (__atomic_compare_exchange_n(&this.counter, &c, c+1, false, __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST)) {
     95                                if (c == 0) return true;
     96                                /* paranoid */ verify(c < 0);
     97                                return false;
     98                        } else {
     99                                if (c == 1) return true;
     100                                /* paranoid */ verify(c < 1);
     101                                Pause();
     102                        }
     103                }
     104        }
     107// Binary Semaphore based on the BinaryBenaphore on top of the 0-nary Semaphore
     108struct ThreadBenaphore {
     109        BinaryBenaphore ben;
     110        Semaphore0nary  sem;
     113static inline void ?{}(ThreadBenaphore & this) {}
     114static inline void ?{}(ThreadBenaphore & this, zero_t) { (this.ben){ 0 }; }
     115static inline void ?{}(ThreadBenaphore & this, one_t ) { (this.ben){ 1 }; }
     117static inline bool P(ThreadBenaphore & this)              { return /* P(this.ben) ? false : */ P(this.sem); }
     118static inline bool P(ThreadBenaphore & this, $thread * t) { return /* P(this.ben) ? false : */ P(this.sem, t ); }
     119static inline bool tryP(ThreadBenaphore & this)           { return tryP(this.ben); }
     120static inline bool P(ThreadBenaphore & this, bool wait)   { return wait ? P(this) : tryP(this); }
     122static inline $thread * V(ThreadBenaphore & this, const bool doUnpark = true) {
     123        // if (V(this.ben)) return 0p;
     124        return V(this.sem, doUnpark);
     128// Semaphore
     129struct semaphore {
     130        __spinlock_t lock;
     131        int count;
     132        __queue_t($thread) waiting;
     135void  ?{}(semaphore & this, int count = 1);
     136void ^?{}(semaphore & this);
     137bool   P (semaphore & this);
     138bool   V (semaphore & this);
     139bool   V (semaphore & this, unsigned count);
    54169static inline size_t get_recursion_count( owner_lock & this ) { return get_recursion_count( (blocking_lock &)this ); }
     171struct fast_lock {
     172        $thread * volatile owner;
     173        ThreadBenaphore sem;
     176static inline bool $try_lock(fast_lock & this, $thread * thrd) {
     177    $thread * exp = 0p;
     178    return __atomic_compare_exchange_n(&this.owner, &exp, thrd, false, __ATOMIC_SEQ_CST, __ATOMIC_RELAXED);
     181static inline void $lock(fast_lock & this, $thread * thrd) {
     182        /* paranoid */verify(thrd != this.owner);
     184        for (;;) {
     185                if ($try_lock(this, thrd)) return;
     186                P(this.sem, thrd);
     187        }
     190static inline void lock( fast_lock & this ) {
     191        $thread * thrd = active_thread();
     192        /* paranoid */verify(thrd != this.owner);
     194        for (;;) {
     195                if ($try_lock(this, thrd)) return;
     196                P(this.sem);
     197        }
     200static inline void try_lock ( fast_lock & this ) {
     201        $thread * thrd = active_thread();
     202        /* paranoid */ verify(thrd != this.owner);
     203        return $try_lock(this, thrd);
     206static inline void unlock( fast_lock & this ) {
     207        $thread * thrd = active_thread();
     208        /* paranoid */ verify(thrd == this.owner);
     209        $thread * next = V(this.sem, false); // implicit fence
     210        // open 'owner' only after fence
     211        this.owner = 0p;
     213        // Unpark the next person (can be 0p, unpark handles it)
     214        unpark(next);
     217static inline void on_wait( fast_lock & this ) {
     218        unlock(this);
     219        #warning this is broken
     222static inline void on_notify( fast_lock & this, struct $thread * t ) {
     223        $lock(this, t);
     224        #warning this is broken
     227static inline void   set_recursion_count( fast_lock & this, size_t recursion ) {}
     228static inline size_t get_recursion_count( fast_lock & this ) { return 0; }
     230struct mcs_node {
     231        mcs_node * volatile next;
     232        single_sem sem;
     235static inline void ?{}(mcs_node & this) { = 0p; }
     237static inline mcs_node * volatile & ?`next ( mcs_node * node ) {
     238        return node->next;
     241struct mcs_lock {
     242        mcs_queue(mcs_node) queue;
     245static inline void lock(mcs_lock & l, mcs_node & n) {
     246        if(push(l.queue, &n))
     247                wait(n.sem);
     250static inline void unlock(mcs_lock & l, mcs_node & n) {
     251        mcs_node * next = advance(l.queue, &n);
     252        if(next) post(next->sem);
    57256// is_blocking_lock
    121320        bool wait( condition_variable(L) & this, L & l, uintptr_t info, Time time );
    124 //-----------------------------------------------------------------------------
    125 // Semaphore
    126 struct semaphore {
    127         __spinlock_t lock;
    128         int count;
    129         __queue_t($thread) waiting;
    130 };
    132 void  ?{}(semaphore & this, int count = 1);
    133 void ^?{}(semaphore & this);
    134 bool   P (semaphore & this);
    135 bool   V (semaphore & this);
    136 bool   V (semaphore & this, unsigned count);
  • libcfa/src/concurrency/

    r2644610 r1da7397  
    41 #define BIAS 16
     41#define BIAS 4
    4343// returns the maximum number of processors the RWLock support
    252252                preferred =
    253253                        //*
    254                         kernelTLS().this_processor ? kernelTLS().this_processor->id * 4 : -1;
     254                        kernelTLS().this_processor ? kernelTLS().this_processor->cltr_id : -1;
    255255                        /*/
    256256                        thrd->link.preferred * 4;
    330330        #if defined(BIAS)
    331331                // Don't bother trying locally too much
    332                 preferred = kernelTLS().this_processor->id * 4;
     332                preferred = kernelTLS().this_processor->cltr_id;
    333333        #endif
    353353                #if !defined(__CFA_NO_STATISTICS__)
    354                         if(locali) {
    355                                 __tls_stats()->ready.pick.pop.local++;
    356                         }
    357                         if(localj) {
     354                        if(locali && localj) {
    358355                                __tls_stats()->ready.pick.pop.local++;
    359356                        }
    529526// Grow the ready queue
    530 void ready_queue_grow  (struct cluster * cltr, int target) {
     527unsigned ready_queue_grow(struct cluster * cltr, int target) {
     528        unsigned preferred;
     529        size_t ncount;
    531531        /* paranoid */ verify( ready_mutate_islocked() );
    532532        __cfadbg_print_safe(ready_queue, "Kernel : Growing ready queue\n");
    543543                // Find new count
    544544                // Make sure we always have at least 1 list
    545                 size_t ncount = target >= 2 ? target * 4: 1;
     545                if(target >= 2) {
     546                        ncount = target * 4;
     547                        preferred = ncount - 4;
     548                } else {
     549                        ncount = 1;
     550                        preferred = 0;
     551                }
    547553                // Allocate new array (uses realloc and memcpies the data)
    579585        /* paranoid */ verify( ready_mutate_islocked() );
     586        return preferred;
  • libcfa/src/containers/queueLockFree.hfa

    r2644610 r1da7397  
 		// Adds an element to the list
 		// Multi-Thread Safe, Lock-Free
-		T * push(mcs_queue(T) & this, T & elem) {
-			/* paranoid */ verify(!(&elem)`next);
+		T * push(mcs_queue(T) & this, T * elem) __attribute__((artificial));
+		T * push(mcs_queue(T) & this, T * elem) {
+			/* paranoid */ verify(!(elem`next));
 			// Race to add to the tail
-			T * prev = __atomic_exchange_n(&this.tail, &elem, __ATOMIC_SEQ_CST);
+			T * prev = __atomic_exchange_n(&this.tail, elem, __ATOMIC_SEQ_CST);
 			// If we aren't the first, we need to tell the person before us
 			// No need to
-			if (prev) prev`next = &elem;
+			if (prev) prev`next = elem;
 			return prev;
 		}
…
 		// Passing an element that is not the head is undefined behavior
 		// NOT Multi-Thread Safe, concurrent pushes are safe
-		T * advance(mcs_queue(T) & this, T & elem) {
-			T * expected = &elem;
+		T * advance(mcs_queue(T) & this, T * elem) __attribute__((artificial));
+		T * advance(mcs_queue(T) & this, T * elem) {
+			T * expected = elem;
 			// Check if this is already the last item
 			if (__atomic_compare_exchange_n(&this.tail, &expected, 0p, false, __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST)) return 0p;
-			// If not wait for next item to show-up
-			// added by push
-			while (!(&elem)`next) Pause();
-			return (&elem)`next;
+			// If not wait for next item to show-up, filled by push
+			while (!(elem`next)) Pause();
+			// we need to return if the next link was empty
+			T * ret = elem`next;
+			// invalidate link to reset to initial state
+			elem`next = 0p;
+			return ret;
 		}
 	}
 		// Added a new element to the queue
 		// Multi-Thread Safe, Lock-Free
-		T * push(mpsc_queue(T) & this, T & elem) {
+		T * push(mpsc_queue(T) & this, T * elem) __attribute__((artificial));
+		T * push(mpsc_queue(T) & this, T * elem) {
 			T * prev = push((mcs_queue(T)&)this, elem);
-			if (!prev) this.head = &elem;
+			if (!prev) this.head = elem;
 			return prev;
 		}
…
 		// next is set to the new head of the queue
 		// NOT Multi-Thread Safe
+		T * pop(mpsc_queue(T) & this, T *& next) __attribute__((artificial));
 		T * pop(mpsc_queue(T) & this, T *& next) {
 			T * elem = this.head;
…
 				// force memory sync
 				__atomic_thread_fence(__ATOMIC_SEQ_CST);
+				// invalidate link to reset to initial state
+				elem`next = 0p;
 			}
 			// Otherwise, there might be a race where it only looks but someone is enqueuing
…
 				// after that point, it could overwrite the write in push
 				this.head = 0p;
-				next = advance((mcs_queue(T)&)this, (*elem));
+				next = advance((mcs_queue(T)&)this, elem);
…
 				// Only write to the head if there is a next element
 				if (next) this.head = next;
 			}
-			// invalidate link
-			elem`next = 0p;
…
 			// return removed element
  • tests/

    r2644610 r1da7397  
 	pybin/ \
 	long_tests.hfa \
-	.in/ \
 	io/.in/ \
+	io/.in/ \
 	avltree/avl.h \
 	avltree/avl-private.h \
 	concurrent/clib.c \
+	concurrent/clib_tls.c \
 	exceptions/with-threads.hfa \
 	exceptions/except-io.hfa