Changeset f93c50a


Ignore:
Timestamp:
Sep 24, 2021, 6:13:46 PM (4 years ago)
Author:
caparsons <caparson@…>
Branches:
ADT, ast-experimental, enum, forall-pointer-decay, master, pthread-emulation, qualifiedEnum
Children:
166b384
Parents:
7e7a076 (diff), 9411cf0 (diff)
Note: this is a merge changeset, the changes displayed below correspond to the merge itself.
Use the (diff) links above to see all the changes relative to each parent.
Message:

fixed merge

Files:
8 added
24 edited

Legend:

Unmodified
Added
Removed
  • doc/theses/andrew_beach_MMath/Makefile

    r7e7a076 rf93c50a  
    3131
    3232# The main rule, it does all the tex/latex processing.
    33 ${BUILD}/${BASE}.dvi: ${RAWSRC} ${FIGTEX} Makefile | ${BUILD}
     33${BUILD}/${BASE}.dvi: ${RAWSRC} ${FIGTEX} termhandle.pstex resumhandle.pstex Makefile | ${BUILD}
    3434        ${LATEX} ${BASE}
    3535        ${BIBTEX} ${BUILD}/${BASE}
     
    4040${FIGTEX}: ${BUILD}/%.tex: %.fig | ${BUILD}
    4141        fig2dev -L eepic $< > $@
     42
     43%.pstex : %.fig | ${BUILD}
     44        fig2dev -L pstex $< > ${BUILD}/$@
     45        fig2dev -L pstex_t -p ${BUILD}/$@ $< > ${BUILD}/$@_t
    4246
    4347# Step through dvi & postscript to handle xfig specials.
  • doc/theses/andrew_beach_MMath/existing.tex

    r7e7a076 rf93c50a  
    4949asterisk (@*@) is replaced with an ampersand (@&@);
    5050this includes cv-qualifiers (\snake{const} and \snake{volatile})
    51 %\todo{Should I go into even more detail on cv-qualifiers.}
    5251and multiple levels of reference.
    5352
  • doc/theses/andrew_beach_MMath/features.tex

    r7e7a076 rf93c50a  
    129129\section{Virtuals}
    130130\label{s:virtuals}
    131 %\todo{Maybe explain what "virtual" actually means.}
     131A common feature in many programming languages is a tool to pair code
     132(behaviour) with data.
     133In \CFA this is done with the virtual system,
     134which allows type information to be abstracted away and recovered, and allows
     135operations to be performed on the abstract objects.
     136
    132137Virtual types and casts are not part of \CFA's EHM nor are they required for
    133138an EHM.
     
    488493Since it is so general, a more specific handler can be defined,
    489494overriding the default behaviour for the specific exception types.
    490 %\todo{Examples?}
     495
     496For example, consider an error reading a configuration file.
     497This is most likely a problem with the configuration file (@config_error@),
     498but the function could have been passed the wrong file name (@arg_error@).
     499In this case the function could raise one exception and then, if it is
     500unhandled, raise the other.
     501This is not the usual behaviour for either exception, so the change to the
     502default handler is done locally:
     503\begin{cfa}
     504{
     505        void defaultTerminationHandler(config_error &) {
     506                throw (arg_error){arg_vt};
     507        }
     508        throw (config_error){config_vt};
     509}
     510\end{cfa}
    491511
    492512\subsection{Resumption}
     
    551571the just handled exception came from, and continues executing after it,
    552572not after the try statement.
    553 %\todo{Examples?}
     573
     574For instance, a resumption used to send messages to the logger may not
     575need to be handled at all. Putting the following default handler
     576at the global scope can make handling that exception optional by default.
     577\begin{cfa}
     578void defaultResumptionHandler(log_message &) {
     579    // Nothing, it is fine not to handle logging.
     580}
     581// ... No change at raise sites. ...
     582throwResume (log_message){strlit_log, "Begin event processing."};
     583\end{cfa}
    554584
    555585\subsubsection{Resumption Marking}
     
    878908After a coroutine stack is unwound, control returns to the @resume@ function
    879909that most recently resumed it. @resume@ reports a
    880 @CoroutineCancelled@ exception, which contains a references to the cancelled
     910@CoroutineCancelled@ exception, which contains a reference to the cancelled
    881911coroutine and the exception used to cancel it.
    882912The @resume@ function also takes the \defaultResumptionHandler{} from the
  • doc/theses/andrew_beach_MMath/implement.tex

    r7e7a076 rf93c50a  
    5050The problem is that a type ID may appear in multiple TUs that compose a
    5151program (see \autoref{ss:VirtualTable}), so the initial solution would seem
    52 to be make it external in each translation unit. Hovever, the type ID must
     52to be to make it external in each translation unit. However, the type ID must
    5353have a declaration in (exactly) one of the TUs to create the storage.
    5454No other declaration related to the virtual type has this property, so doing
     
    167167\subsection{Virtual Table}
    168168\label{ss:VirtualTable}
    169 %\todo{Clarify virtual table type vs. virtual table instance.}
    170169Each virtual type has a virtual table type that stores its type ID and
    171170virtual members.
    172 Each virtual type instance is bound to a table instance that is filled with
    173 the values of virtual members.
    174 Both the layout of the fields and their value are decided by the rules given
     171An instance of a virtual type is bound to a virtual table instance,
     172which have the values of the virtual members.
     173Both the layout of the fields (in the virtual table type)
     174and their value (in the virtual table instance) are decided by the rules given
    175175below.
    176176
     
    414414of a function's state with @setjmp@ and restoring that snapshot with
    415415@longjmp@. This approach bypasses the need to know stack details by simply
    416 reseting to a snapshot of an arbitrary but existing function frame on the
     416resetting to a snapshot of an arbitrary but existing function frame on the
    417417stack. It is up to the programmer to ensure the snapshot is valid when it is
    418418reset and that all required cleanup from the unwound stacks is performed.
    419 This approach is fragile and requires extra work in the surrounding code.
     419Because it does not automate or check any of this cleanup,
     420it is easy to make mistakes, and everything must be handled manually.
    420421
    421422With respect to the extra work in the surrounding code,
     
    435436library that provides tools for stack walking, handler execution, and
    436437unwinding. What follows is an overview of all the relevant features of
    437 libunwind needed for this work, and how \CFA uses them to implement exception
    438 handling.
     438libunwind needed for this work.
     439Following that is the description of the \CFA code that uses libunwind
     440to implement termination.
    439441
    440442\subsection{libunwind Usage}
  • doc/theses/andrew_beach_MMath/intro.tex

    r7e7a076 rf93c50a  
    3939it returns control to that function.
    4040\begin{center}
    41 \input{termination}
     41%\input{termination}
     42%
     43%\medskip
     44\input{termhandle.pstex_t}
     45% I hate these diagrams, but I can't access xfig to fix them and they are
     46% better than the alternative.
    4247\end{center}
    43 %\todo{What does the right half of termination.fig mean?}
    4448
    4549Resumption exception handling searches the stack for a handler and then calls
     
    5054that performed the raise, usually starting after the raise.
    5155\begin{center}
    52 \input{resumption}
     56%\input{resumption}
     57%
     58%\medskip
     59\input{resumhandle.pstex_t}
     60% The other one.
    5361\end{center}
    5462
     
    214222unwinding the stack like in termination exception
    215223handling.\cite{RustPanicMacro}\cite{RustPanicModule}
    216 Go's panic through is very similar to a termination, except it only supports
     224Go's panic though is very similar to a termination, except it only supports
    217225a catch-all by calling \code{Go}{recover()}, simplifying the interface at
    218226the cost of flexibility.\cite{Go:2021}
     
    237245through multiple functions before it is addressed.
    238246
     247Here is an example of the pattern in Bash, where commands can only ``return''
     248numbers and most output is done through streams of text.
     249\begin{lstlisting}[language=bash,escapechar={}]
     250# Immediately after running a command:
     251case $? in
     2520)
     253        # Success
     254        ;;
     2551)
     256        # Error Code 1
     257        ;;
     2582|3)
     259        # Error Code 2 or Error Code 3
     260        ;;
     261# Add more cases as needed.
     262esac
     263\end{lstlisting}
     264
    239265\item\emph{Special Return with Global Store}:
    240266Similar to the error codes pattern but the function itself only returns
     
    246272
    247273This approach avoids the multiple results issue encountered with straight
    248 error codes but otherwise has the same disadvantages and more.
     274error codes as only a single error value has to be returned,
     275but otherwise has the same disadvantages and more.
    249276Every function that reads or writes to the global store must agree on all
    250277possible errors and managing it becomes more complex with concurrency.
     278
     279This example shows some of what has to be done to robustly handle a C
     280standard library function that reports errors this way.
     281\begin{lstlisting}[language=C]
     282// Make sure to clear the store.
     283errno = 0;
     284// Now a library function can set the error.
     285int handle = open(path_name, flags);
     286if (-1 == handle) {
     287        switch (errno) {
     288        case ENAMETOOLONG:
     289                // path_name is a bad argument.
     290                break;
     291        case ENFILE:
     292                // A system resource has been exhausted.
     293                break;
     294        // And many more...
     295        }
     296}
     297\end{lstlisting}
     298% cite open man page?
    251299
    252300\item\emph{Return Union}:
     
    259307This pattern is very popular in any functional or semi-functional language
    260308with primitive support for tagged unions (or algebraic data types).
    261 % We need listing Rust/rust to format code snippets from it.
    262 % Rust's \code{rust}{Result<T, E>}
     309Return unions can also be expressed as monads (evaluation in a context)
     310and often are in languages with special syntax for monadic evaluation,
     311such as Haskell's \code{haskell}{do} blocks.
     312
    263313The main advantage is that an arbitrary object can be used to represent an
    264314error, so it can include a lot more information than a simple error code.
     
    266316execution, and without proper primitive tagged unions, usage can be
    267317hard to enforce.
     318% We need listing Rust/rust to format code snippets from it.
     319% Rust's \code{rust}{Result<T, E>}
     320
     321This is a simple example of examining the result of a failing function in
     322Haskell, using its \code{haskell}{Either} type.
     323Examining \code{haskell}{error} further would likely involve more matching,
     324but the type of \code{haskell}{error} is user defined so there are no
     325general cases.
     326\begin{lstlisting}[language=haskell]
     327case failingFunction argA argB of
     328    Right value -> -- Use the successful computed value.
     329    Left error -> -- Handle the produced error.
     330\end{lstlisting}
     331
     332Return unions as monads will result in the same code, but can hide most
     333of the work to propagate errors in simple cases. The code to actually handle
     334the errors, or to interact with other monads (a common case in these
     335languages) still has to be written by hand.
     336
     337If \code{haskell}{failingFunction} is implemented with two helpers that
     338use the same error type, then it can be written as a \code{haskell}{do}
     339block.
     340\begin{lstlisting}[language=haskell]
     341failingFunction x y = do
     342        z <- helperOne x
     343        helperTwo y z
     344\end{lstlisting}
    268345
    269346\item\emph{Handler Functions}:
     
    286363function calls, but cheaper (constant time) to call,
    287364they are more suited to more frequent (less exceptional) situations.
     365Although, in \Cpp and other languages that do not have checked exceptions,
     366they can actually be enforced by the type system and so are more reliable.
     367
     368This is a more local example in \Cpp, using a function to provide
     369a default value for a mapping.
     370\begin{lstlisting}[language=C++]
     371ValueT Map::key_or_default(KeyT key, ValueT(*make_default)(KeyT)) {
     372        ValueT * value = find_value(key);
     373        if (nullptr != value) {
     374                return *value;
     375        } else {
     376                return make_default(key);
     377        }
     378}
     379\end{lstlisting}
    288380\end{itemize}
    289381
  • doc/theses/andrew_beach_MMath/performance.tex

    r7e7a076 rf93c50a  
    3737resumption exceptions. Even the older programming languages with resumption
    3838seem to be notable only for having resumption.
     39On the other hand, the functional equivalents to resumption are too new.
     40There does not seem to be any standard implementations in well-known
     41languages; so far they seem confined to extensions and research languages.
     43% but I'm not sure how to get that working if it is interesting.
    3944Instead, resumption is compared to its simulation in other programming
    4045languages: fixup functions that are explicitly passed into a function.
     
    312317\CFA, \Cpp and Java.
    313318% To be exact, the Match All and Match None cases.
    314 %\todo{Not true in Python.}
    315 The most likely explanation is that, since exceptions
    316 are rarely considered to be the common case, the more optimized languages
    317 make that case expensive to improve other cases.
     319The most likely explanation is that
     320the generally faster languages have made ``common cases fast" at the expense
     321of the rarer cases. Since exceptions are considered rare, they are made
     322expensive to help speed up common actions, such as entering and leaving try
     323statements.
     324Python, on the other hand, while generally slower than the other languages,
     325uses exceptions more and has not sacrificed their performance.
    318326In addition, languages with high-level representations have a much
    319327easier time scanning the stack as there is less to decode.
  • doc/theses/andrew_beach_MMath/uw-ethesis.bib

    r7e7a076 rf93c50a  
    5050    author={The Rust Team},
    5151    key={Rust Panic Macro},
    52     howpublished={\href{https://doc.rust-lang.org/std/panic/index.html}{https://\-doc.rust-lang.org/\-std/\-panic/\-index.html}},
     52    howpublished={\href{https://doc.rust-lang.org/std/macro.panic.html}{https://\-doc.rust-lang.org/\-std/\-macro.panic.html}},
    5353    addendum={Accessed 2021-08-31},
    5454}
  • libcfa/src/concurrency/clib/cfathread.cfa

    r7e7a076 rf93c50a  
    1414//
    1515
     16// #define EPOLL_FOR_SOCKETS
     17
    1618#include "fstream.hfa"
    1719#include "locks.hfa"
     
    2325#include "cfathread.h"
    2426
     27extern "C" {
     28                #include <string.h>
     29                #include <errno.h>
     30}
     31
    2532extern void ?{}(processor &, const char[], cluster &, thread$ *);
    2633extern "C" {
    2734      extern void __cfactx_invoke_thread(void (*main)(void *), void * this);
     35        extern int accept4(int sockfd, struct sockaddr *addr, socklen_t *addrlen, int flags);
    2836}
    2937
    3038extern Time __kernel_get_time();
     39extern unsigned register_proc_id( void );
    3140
    3241//================================================================================
    33 // Thread run y the C Interface
     42// Epoll support for sockets
     43
     44#if defined(EPOLL_FOR_SOCKETS)
     45        extern "C" {
     46                #include <sys/epoll.h>
     47                #include <sys/resource.h>
     48        }
     49
     50        static pthread_t master_poller;
     51        static int master_epollfd = 0;
     52        static size_t poller_cnt = 0;
     53        static int * poller_fds = 0p;
     54        static struct leaf_poller * pollers = 0p;
     55
     56        struct __attribute__((aligned)) fd_info_t {
     57                int pollid;
     58                size_t rearms;
     59        };
     60        rlim_t fd_limit = 0;
     61        static fd_info_t * volatile * fd_map = 0p;
     62
     63        void * master_epoll( __attribute__((unused)) void * args ) {
     64                unsigned id = register_proc_id();
     65
     66                enum { MAX_EVENTS = 5 };
     67                struct epoll_event events[MAX_EVENTS];
     68                for() {
     69                        int ret = epoll_wait(master_epollfd, events, MAX_EVENTS, -1);
     70                        if ( ret < 0 ) {
     71                                abort | "Master epoll error: " | strerror(errno);
     72                        }
     73
     74                        for(i; ret) {
     75                                thread$ * thrd = (thread$ *)events[i].data.u64;
     76                                unpark( thrd );
     77                        }
     78                }
     79
     80                return 0p;
     81        }
     82
     83        static inline int epoll_rearm(int epollfd, int fd, uint32_t event) {
     84                struct epoll_event eevent;
     85                eevent.events = event | EPOLLET | EPOLLONESHOT;
     86                eevent.data.u64 = (uint64_t)active_thread();
     87
     88                if(0 != epoll_ctl(epollfd, EPOLL_CTL_MOD, fd, &eevent))
     89                {
     90                        if(errno == ENOENT) return -1;
     91                        abort | acquire | "epoll" | epollfd | "ctl rearm" | fd | "error: " | errno | strerror(errno);
     92                }
     93
     94                park();
     95                return 0;
     96        }
     97
     98        thread leaf_poller {
     99                int epollfd;
     100        };
     101
     102        void ?{}(leaf_poller & this, int fd) { this.epollfd = fd; }
     103
     104        void main(leaf_poller & this) {
     105                enum { MAX_EVENTS = 1024 };
     106                struct epoll_event events[MAX_EVENTS];
     107                const int max_retries = 5;
     108                int retries = max_retries;
     109
     110                struct epoll_event event;
     111                event.events = EPOLLIN | EPOLLET | EPOLLONESHOT;
     112                event.data.u64 = (uint64_t)&(thread&)this;
     113
     114                if(0 != epoll_ctl(master_epollfd, EPOLL_CTL_ADD, this.epollfd, &event))
     115                {
     116                        abort | "master epoll ctl add leaf: " | errno | strerror(errno);
     117                }
     118
     119                park();
     120
     121                for() {
     122                        yield();
     123                        int ret = epoll_wait(this.epollfd, events, MAX_EVENTS, 0);
     124                        if ( ret < 0 ) {
     125                                abort | "Leaf epoll error: " | errno | strerror(errno);
     126                        }
     127
     128                        if(ret) {
     129                                for(i; ret) {
     130                                        thread$ * thrd = (thread$ *)events[i].data.u64;
     131                                        unpark( thrd, UNPARK_REMOTE );
     132                                }
     133                        }
     134                        else if(0 >= --retries) {
     135                                epoll_rearm(master_epollfd, this.epollfd, EPOLLIN);
     136                        }
     137                }
     138        }
     139
     140        void setup_epoll( void ) __attribute__(( constructor ));
     141        void setup_epoll( void ) {
     142                if(master_epollfd) abort | "Master epoll already setup";
     143
     144                master_epollfd = epoll_create1(0);
     145                if(master_epollfd == -1) {
     146                        abort | "failed to create master epoll: " | errno | strerror(errno);
     147                }
     148
     149                struct rlimit rlim;
     150                if(int ret = getrlimit(RLIMIT_NOFILE, &rlim); 0 != ret) {
     151                        abort | "failed to get nofile limit: " | errno | strerror(errno);
     152                }
     153
     154                fd_limit = rlim.rlim_cur;
     155                fd_map = alloc(fd_limit);
     156                for(i;fd_limit) {
     157                        fd_map[i] = 0p;
     158                }
     159
     160                poller_cnt = 2;
     161                poller_fds = alloc(poller_cnt);
     162                pollers    = alloc(poller_cnt);
     163                for(i; poller_cnt) {
     164                        poller_fds[i] = epoll_create1(0);
     165                        if(poller_fds[i] == -1) {
     166                                abort | "failed to create leaf epoll [" | i | "]: " | errno | strerror(errno);
     167                        }
     168
     169                        (pollers[i]){ poller_fds[i] };
     170                }
     171
     172                pthread_attr_t attr;
     173                if (int ret = pthread_attr_init(&attr); 0 != ret) {
     174                        abort | "failed to create master epoll thread attr: " | ret | strerror(ret);
     175                }
     176
     177                if (int ret = pthread_create(&master_poller, &attr, master_epoll, 0p); 0 != ret) {
     178                        abort | "failed to create master epoll thread: " | ret | strerror(ret);
     179                }
     180        }
     181
     182        static inline int epoll_wait(int fd, uint32_t event) {
     183                if(fd_map[fd] >= 1p) {
     184                        fd_map[fd]->rearms++;
     185                        epoll_rearm(poller_fds[fd_map[fd]->pollid], fd, event);
     186                        return 0;
     187                }
     188
     189                for() {
     190                        fd_info_t * expected = 0p;
     191                        fd_info_t * sentinel = 1p;
     192                        if(__atomic_compare_exchange_n( &(fd_map[fd]), &expected, sentinel, true, __ATOMIC_SEQ_CST, __ATOMIC_RELAXED)) {
     193                                struct epoll_event eevent;
     194                                eevent.events = event | EPOLLET | EPOLLONESHOT;
     195                                eevent.data.u64 = (uint64_t)active_thread();
     196
     197                                int id = thread_rand() % poller_cnt;
     198                                if(0 != epoll_ctl(poller_fds[id], EPOLL_CTL_ADD, fd, &eevent))
     199                                {
     200                                        abort | "epoll ctl add" | poller_fds[id] | fd | fd_map[fd] | expected | "error: " | errno | strerror(errno);
     201                                }
     202
     203                                fd_info_t * ninfo = alloc();
     204                                ninfo->pollid = id;
     205                                ninfo->rearms = 0;
     206                                __atomic_store_n( &fd_map[fd], ninfo, __ATOMIC_SEQ_CST);
     207
     208                                park();
     209                                return 0;
     210                        }
     211
     212                        if(expected >= 0) {
     213                                fd_map[fd]->rearms++;
     214                                epoll_rearm(poller_fds[fd_map[fd]->pollid], fd, event);
     215                                return 0;
     216                        }
     217
     218                        Pause();
     219                }
     220        }
     221#endif
     222
     223//================================================================================
     224// Thread run by the C Interface
    34225
    35226struct cfathread_object {
     
    245436        // Mutex
    246437        struct cfathread_mutex {
    247                 fast_lock impl;
     438                linear_backoff_then_block_lock impl;
    248439        };
    249440        int cfathread_mutex_init(cfathread_mutex_t *restrict mut, const cfathread_mutexattr_t *restrict) __attribute__((nonnull (1))) { *mut = new(); return 0; }
     
    260451        // Condition
    261452        struct cfathread_condition {
    262                 condition_variable(fast_lock) impl;
     453                condition_variable(linear_backoff_then_block_lock) impl;
    263454        };
    264455        int cfathread_cond_init(cfathread_cond_t *restrict cond, const cfathread_condattr_t *restrict) __attribute__((nonnull (1))) { *cond = new(); return 0; }
     
    288479        // IO operations
    289480        int cfathread_socket(int domain, int type, int protocol) {
    290                 return socket(domain, type, protocol);
     481                return socket(domain, type
     482                #if defined(EPOLL_FOR_SOCKETS)
     483                        | SOCK_NONBLOCK
     484                #endif
     485                , protocol);
    291486        }
    292487        int cfathread_bind(int socket, const struct sockaddr *address, socklen_t address_len) {
     
    299494
    300495        int cfathread_accept(int socket, struct sockaddr *restrict address, socklen_t *restrict address_len) {
    301                 return cfa_accept4(socket, address, address_len, 0, CFA_IO_LAZY);
     496                #if defined(EPOLL_FOR_SOCKETS)
     497                        int ret;
     498                        for() {
     499                                yield();
     500                                ret = accept4(socket, address, address_len, SOCK_NONBLOCK);
     501                                if(ret >= 0) break;
     502                                if(errno != EAGAIN && errno != EWOULDBLOCK) break;
     503
     504                                epoll_wait(socket, EPOLLIN);
     505                        }
     506                        return ret;
     507                #else
     508                        return cfa_accept4(socket, address, address_len, 0, CFA_IO_LAZY);
     509                #endif
    302510        }
    303511
    304512        int cfathread_connect(int socket, const struct sockaddr *address, socklen_t address_len) {
    305                 return cfa_connect(socket, address, address_len, CFA_IO_LAZY);
     513                #if defined(EPOLL_FOR_SOCKETS)
     514                        int ret;
     515                        for() {
     516                                ret = connect(socket, address, address_len);
     517                                if(ret >= 0) break;
     518                                if(errno != EAGAIN && errno != EWOULDBLOCK) break;
     519
     520                                epoll_wait(socket, EPOLLIN);
     521                        }
     522                        return ret;
     523                #else
     524                        return cfa_connect(socket, address, address_len, CFA_IO_LAZY);
     525                #endif
    306526        }
    307527
     
    315535
    316536        ssize_t cfathread_sendmsg(int socket, const struct msghdr *message, int flags) {
    317                 return cfa_sendmsg(socket, message, flags, CFA_IO_LAZY);
     537                #if defined(EPOLL_FOR_SOCKETS)
     538                        ssize_t ret;
     539                        __STATS__( false, io.ops.sockwrite++; )
     540                        for() {
     541                                ret = sendmsg(socket, message, flags);
     542                                if(ret >= 0) break;
     543                                if(errno != EAGAIN && errno != EWOULDBLOCK) break;
     544
     545                                __STATS__( false, io.ops.epllwrite++; )
     546                                epoll_wait(socket, EPOLLOUT);
     547                        }
     548                #else
     549                        ssize_t ret = cfa_sendmsg(socket, message, flags, CFA_IO_LAZY);
     550                #endif
     551                return ret;
    318552        }
    319553
    320554        ssize_t cfathread_write(int fildes, const void *buf, size_t nbyte) {
    321555                // Use send rather than write for socket since it's faster
    322                 return cfa_send(fildes, buf, nbyte, 0, CFA_IO_LAZY);
     556                #if defined(EPOLL_FOR_SOCKETS)
     557                        ssize_t ret;
     558                        // __STATS__( false, io.ops.sockwrite++; )
     559                        for() {
     560                                ret = send(fildes, buf, nbyte, 0);
     561                                if(ret >= 0) break;
     562                                if(errno != EAGAIN && errno != EWOULDBLOCK) break;
     563
     564                                // __STATS__( false, io.ops.epllwrite++; )
     565                                epoll_wait(fildes, EPOLLOUT);
     566                        }
     567                #else
     568                        ssize_t ret = cfa_send(fildes, buf, nbyte, 0, CFA_IO_LAZY);
     569                #endif
     570                return ret;
    323571        }
    324572
     
    336584                msg.msg_controllen = 0;
    337585
    338                 ssize_t ret = cfa_recvmsg(socket, &msg, flags, CFA_IO_LAZY);
     586                #if defined(EPOLL_FOR_SOCKETS)
     587                        ssize_t ret;
     588                        yield();
     589                        for() {
     590                                ret = recvmsg(socket, &msg, flags);
     591                                if(ret >= 0) break;
     592                                if(errno != EAGAIN && errno != EWOULDBLOCK) break;
     593
     594                                epoll_wait(socket, EPOLLIN);
     595                        }
     596                #else
     597                        ssize_t ret = cfa_recvmsg(socket, &msg, flags, CFA_IO_LAZY);
     598                #endif
    339599
    340600                if(address_len) *address_len = msg.msg_namelen;
     
    344604        ssize_t cfathread_read(int fildes, void *buf, size_t nbyte) {
    345605                // Use recv rather than read for socket since it's faster
    346                 return cfa_recv(fildes, buf, nbyte, 0, CFA_IO_LAZY);
    347         }
    348 
    349 }
     606                #if defined(EPOLL_FOR_SOCKETS)
     607                        ssize_t ret;
     608                        __STATS__( false, io.ops.sockread++; )
     609                        yield();
     610                        for() {
     611                                ret = recv(fildes, buf, nbyte, 0);
     612                                if(ret >= 0) break;
     613                                if(errno != EAGAIN && errno != EWOULDBLOCK) break;
     614
     615                                __STATS__( false, io.ops.epllread++; )
     616                                epoll_wait(fildes, EPOLLIN);
     617                        }
     618                #else
     619                        ssize_t ret = cfa_recv(fildes, buf, nbyte, 0, CFA_IO_LAZY);
     620                #endif
     621                return ret;
     622        }
     623
     624}
  • libcfa/src/concurrency/invoke.h

    r7e7a076 rf93c50a  
    170170                bool corctx_flag;
    171171
    172                 int last_cpu;
    173 
    174172                //SKULLDUGGERY errno is not save in the thread data structure because returnToKernel appears to be the only function to require saving and restoring it
    175173
     
    177175                struct cluster * curr_cluster;
    178176
    179                 // preferred ready-queue
     177                // preferred ready-queue or CPU
    180178                unsigned preferred;
    181179
  • libcfa/src/concurrency/io.cfa

    r7e7a076 rf93c50a  
    9090        static inline unsigned __flush( struct $io_context & );
    9191        static inline __u32 __release_sqes( struct $io_context & );
    92         extern void __kernel_unpark( thread$ * thrd );
     92        extern void __kernel_unpark( thread$ * thrd, unpark_hint );
    9393
    9494        bool __cfa_io_drain( processor * proc ) {
     
    118118                        __cfadbg_print_safe( io, "Kernel I/O : Syscall completed : cqe %p, result %d for %p\n", &cqe, cqe.res, future );
    119119
    120                         __kernel_unpark( fulfil( *future, cqe.res, false ) );
     120                        __kernel_unpark( fulfil( *future, cqe.res, false ), UNPARK_LOCAL );
    121121                }
    122122
  • libcfa/src/concurrency/kernel.cfa

    r7e7a076 rf93c50a  
    341341                                }
    342342
    343                                         __STATS( if(this->print_halts) __cfaabi_bits_print_safe( STDOUT_FILENO, "PH:%d - %lld 0\n", this->unique_id, rdtscl()); )
     343                                __STATS( if(this->print_halts) __cfaabi_bits_print_safe( STDOUT_FILENO, "PH:%d - %lld 0\n", this->unique_id, rdtscl()); )
    344344                                __cfadbg_print_safe(runtime_core, "Kernel : core %p waiting on eventfd %d\n", this, this->idle);
    345345
    346                                 // __disable_interrupts_hard();
    347                                 eventfd_t val;
    348                                 eventfd_read( this->idle, &val );
    349                                 // __enable_interrupts_hard();
     346                                {
     347                                        eventfd_t val;
     348                                        ssize_t ret = read( this->idle, &val, sizeof(val) );
     349                                        if(ret < 0) {
     350                                                switch((int)errno) {
     351                                                case EAGAIN:
     352                                                #if EAGAIN != EWOULDBLOCK
     353                                                        case EWOULDBLOCK:
     354                                                #endif
     355                                                case EINTR:
     356                                                        // No need to do anything special here, just assume it's a legitimate wake-up
     357                                                        break;
     358                                                default:
     359                                                        abort( "KERNEL : internal error, read failure on idle eventfd, error(%d) %s.", (int)errno, strerror( (int)errno ) );
     360                                                }
     361                                        }
     362                                }
    350363
    351364                                        __STATS( if(this->print_halts) __cfaabi_bits_print_safe( STDOUT_FILENO, "PH:%d - %lld 1\n", this->unique_id, rdtscl()); )
     
    409422        /* paranoid */ verifyf( thrd_dst->link.next == 0p, "Expected null got %p", thrd_dst->link.next );
    410423        __builtin_prefetch( thrd_dst->context.SP );
    411 
    412         int curr = __kernel_getcpu();
    413         if(thrd_dst->last_cpu != curr) {
    414                 int64_t l = thrd_dst->last_cpu;
    415                 int64_t c = curr;
    416                 int64_t v = (l << 32) | c;
    417                 __push_stat( __tls_stats(), v, false, "Processor", this );
    418         }
    419 
    420         thrd_dst->last_cpu = curr;
    421424
    422425        __cfadbg_print_safe(runtime_core, "Kernel : core %p running thread %p (%s)\n", this, thrd_dst, thrd_dst->self_cor.name);
     
    473476                if(unlikely(thrd_dst->preempted != __NO_PREEMPTION)) {
    474477                        // The thread was preempted, reschedule it and reset the flag
    475                         schedule_thread$( thrd_dst );
     478                        schedule_thread$( thrd_dst, UNPARK_LOCAL );
    476479                        break RUNNING;
    477480                }
     
    557560// Scheduler routines
    558561// KERNEL ONLY
    559 static void __schedule_thread( thread$ * thrd ) {
     562static void __schedule_thread( thread$ * thrd, unpark_hint hint ) {
    560563        /* paranoid */ verify( ! __preemption_enabled() );
    561564        /* paranoid */ verify( ready_schedule_islocked());
     
    577580        // Dereference the thread now because once we push it, there is not guaranteed it's still valid.
    578581        struct cluster * cl = thrd->curr_cluster;
    579         __STATS(bool outside = thrd->last_proc && thrd->last_proc != kernelTLS().this_processor; )
     582        __STATS(bool outside = hint == UNPARK_LOCAL && thrd->last_proc && thrd->last_proc != kernelTLS().this_processor; )
    580583
    581584        // push the thread to the cluster ready-queue
    582         push( cl, thrd, local );
     585        push( cl, thrd, hint );
    583586
    584587        // variable thrd is no longer safe to use
     
    605608}
    606609
    607 void schedule_thread$( thread$ * thrd ) {
     610void schedule_thread$( thread$ * thrd, unpark_hint hint ) {
    608611        ready_schedule_lock();
    609                 __schedule_thread( thrd );
     612                __schedule_thread( thrd, hint );
    610613        ready_schedule_unlock();
    611614}
     
    658661}
    659662
    660 void __kernel_unpark( thread$ * thrd ) {
     663void __kernel_unpark( thread$ * thrd, unpark_hint hint ) {
    661664        /* paranoid */ verify( ! __preemption_enabled() );
    662665        /* paranoid */ verify( ready_schedule_islocked());
     
    666669        if(__must_unpark(thrd)) {
    667670                // Wake lost the race,
    668                 __schedule_thread( thrd );
     671                __schedule_thread( thrd, hint );
    669672        }
    670673
     
    673676}
    674677
    675 void unpark( thread$ * thrd ) {
     678void unpark( thread$ * thrd, unpark_hint hint ) {
    676679        if( !thrd ) return;
    677680
     
    679682                disable_interrupts();
    680683                        // Wake lost the race,
    681                         schedule_thread$( thrd );
     684                        schedule_thread$( thrd, hint );
    682685                enable_interrupts(false);
    683686        }
  • libcfa/src/concurrency/kernel.hfa

    r7e7a076 rf93c50a  
    151151struct __attribute__((aligned(128))) __timestamp_t {
    152152        volatile unsigned long long tv;
    153 };
    154 
    155 static inline void  ?{}(__timestamp_t & this) { this.tv = 0; }
     153        volatile unsigned long long ma;
     154};
     155
      156// Aligned helper counters used by the relaxed ready queue
     157struct __attribute__((aligned(128))) __help_cnts_t {
     158        volatile unsigned long long src;
     159        volatile unsigned long long dst;
     160        volatile unsigned long long tri;
     161};
     162
     163static inline void  ?{}(__timestamp_t & this) { this.tv = 0; this.ma = 0; }
    156164static inline void ^?{}(__timestamp_t & this) {}
    157165
     
    169177                // Array of times
    170178                __timestamp_t * volatile tscs;
     179
     180                // Array of stats
     181                __help_cnts_t * volatile help;
    171182
    172183                // Number of lanes (empty or not)
  • libcfa/src/concurrency/kernel/fwd.hfa

    r7e7a076 rf93c50a  
    119119
    120120        extern "Cforall" {
     121                enum unpark_hint { UNPARK_LOCAL, UNPARK_REMOTE };
     122
    121123                extern void park( void );
    122                 extern void unpark( struct thread$ * this );
     124                extern void unpark( struct thread$ *, unpark_hint );
     125                static inline void unpark( struct thread$ * thrd ) { unpark(thrd, UNPARK_LOCAL); }
    123126                static inline struct thread$ * active_thread () {
    124127                        struct thread$ * t = publicTLS_get( this_thread );
  • libcfa/src/concurrency/kernel/startup.cfa

    r7e7a076 rf93c50a  
    200200        __cfadbg_print_safe(runtime_core, "Kernel : Main cluster ready\n");
    201201
     202        // Construct the processor context of the main processor
     203        void ?{}(processorCtx_t & this, processor * proc) {
     204                (this.__cor){ "Processor" };
     205                this.__cor.starter = 0p;
     206                this.proc = proc;
     207        }
     208
     209        void ?{}(processor & this) with( this ) {
     210                ( this.terminated ){};
     211                ( this.runner ){};
     212                init( this, "Main Processor", *mainCluster, 0p );
     213                kernel_thread = pthread_self();
     214
     215                runner{ &this };
     216                __cfadbg_print_safe(runtime_core, "Kernel : constructed main processor context %p\n", &runner);
     217        }
     218
     219        // Initialize the main processor and the main processor ctx
     220        // (the coroutine that contains the processing control flow)
     221        mainProcessor = (processor *)&storage_mainProcessor;
     222        (*mainProcessor){};
     223
     224        register_tls( mainProcessor );
     225
    202226        // Start by initializing the main thread
    203227        // SKULLDUGGERY: the mainThread steals the process main thread
     
    210234        __cfadbg_print_safe(runtime_core, "Kernel : Main thread ready\n");
    211235
    212 
    213 
    214         // Construct the processor context of the main processor
    215         void ?{}(processorCtx_t & this, processor * proc) {
    216                 (this.__cor){ "Processor" };
    217                 this.__cor.starter = 0p;
    218                 this.proc = proc;
    219         }
    220 
    221         void ?{}(processor & this) with( this ) {
    222                 ( this.terminated ){};
    223                 ( this.runner ){};
    224                 init( this, "Main Processor", *mainCluster, 0p );
    225                 kernel_thread = pthread_self();
    226 
    227                 runner{ &this };
    228                 __cfadbg_print_safe(runtime_core, "Kernel : constructed main processor context %p\n", &runner);
    229         }
    230 
    231         // Initialize the main processor and the main processor ctx
    232         // (the coroutine that contains the processing control flow)
    233         mainProcessor = (processor *)&storage_mainProcessor;
    234         (*mainProcessor){};
    235 
    236         register_tls( mainProcessor );
    237         mainThread->last_cpu = __kernel_getcpu();
    238 
    239236        //initialize the global state variables
    240237        __cfaabi_tls.this_processor = mainProcessor;
     
    252249        // Add the main thread to the ready queue
    253250        // once resume is called on mainProcessor->runner the mainThread needs to be scheduled like any normal thread
    254         schedule_thread$(mainThread);
     251        schedule_thread$(mainThread, UNPARK_LOCAL);
    255252
    256253        // SKULLDUGGERY: Force a context switch to the main processor to set the main thread's context to the current UNIX
     
    486483        link.next = 0p;
    487484        link.ts   = -1llu;
    488         preferred = -1u;
     485        preferred = ready_queue_new_preferred();
    489486        last_proc = 0p;
    490487        #if defined( __CFA_WITH_VERIFY__ )
  • libcfa/src/concurrency/kernel_private.hfa

    r7e7a076 rf93c50a  
    4646}
    4747
    48 void schedule_thread$( thread$ * ) __attribute__((nonnull (1)));
     48void schedule_thread$( thread$ *, unpark_hint hint ) __attribute__((nonnull (1)));
    4949
    5050extern bool __preemption_enabled();
     
    300300// push thread onto a ready queue for a cluster
    301301// returns true if the list was previously empty, false otherwise
    302 __attribute__((hot)) void push(struct cluster * cltr, struct thread$ * thrd, bool local);
     302__attribute__((hot)) void push(struct cluster * cltr, struct thread$ * thrd, unpark_hint hint);
    303303
    304304//-----------------------------------------------------------------------
     
    321321
    322322//-----------------------------------------------------------------------
     323// get preferred ready for new thread
     324unsigned ready_queue_new_preferred();
     325
     326//-----------------------------------------------------------------------
    323327// Increase the width of the ready queue (number of lanes) by 4
    324328void ready_queue_grow  (struct cluster * cltr);
  • libcfa/src/concurrency/ready_queue.cfa

    r7e7a076 rf93c50a  
    100100        #define __kernel_rseq_unregister rseq_unregister_current_thread
    101101#elif defined(CFA_HAVE_LINUX_RSEQ_H)
    102         void __kernel_raw_rseq_register  (void);
    103         void __kernel_raw_rseq_unregister(void);
     102        static void __kernel_raw_rseq_register  (void);
     103        static void __kernel_raw_rseq_unregister(void);
    104104
    105105        #define __kernel_rseq_register __kernel_raw_rseq_register
     
    246246// Cforall Ready Queue used for scheduling
    247247//=======================================================================
     248unsigned long long moving_average(unsigned long long nval, unsigned long long oval) {
     249        const unsigned long long tw = 16;
     250        const unsigned long long nw = 4;
     251        const unsigned long long ow = tw - nw;
     252        return ((nw * nval) + (ow * oval)) / tw;
     253}
     254
    248255void ?{}(__ready_queue_t & this) with (this) {
    249256        #if defined(USE_CPU_WORK_STEALING)
     
    251258                lanes.data = alloc( lanes.count );
    252259                lanes.tscs = alloc( lanes.count );
     260                lanes.help = alloc( cpu_info.hthrd_count );
    253261
    254262                for( idx; (size_t)lanes.count ) {
    255263                        (lanes.data[idx]){};
    256264                        lanes.tscs[idx].tv = rdtscl();
     265                        lanes.tscs[idx].ma = rdtscl();
     266                }
     267                for( idx; (size_t)cpu_info.hthrd_count ) {
     268                        lanes.help[idx].src = 0;
     269                        lanes.help[idx].dst = 0;
     270                        lanes.help[idx].tri = 0;
    257271                }
    258272        #else
    259273                lanes.data  = 0p;
    260274                lanes.tscs  = 0p;
     275                lanes.help  = 0p;
    261276                lanes.count = 0;
    262277        #endif
     
    270285        free(lanes.data);
    271286        free(lanes.tscs);
     287        free(lanes.help);
    272288}
    273289
    274290//-----------------------------------------------------------------------
    275291#if defined(USE_CPU_WORK_STEALING)
    276         __attribute__((hot)) void push(struct cluster * cltr, struct thread$ * thrd, bool push_local) with (cltr->ready_queue) {
     292        __attribute__((hot)) void push(struct cluster * cltr, struct thread$ * thrd, unpark_hint hint) with (cltr->ready_queue) {
    277293                __cfadbg_print_safe(ready_queue, "Kernel : Pushing %p on cluster %p\n", thrd, cltr);
    278294
    279295                processor * const proc = kernelTLS().this_processor;
    280                 const bool external = !push_local || (!proc) || (cltr != proc->cltr);
    281 
     296                const bool external = (!proc) || (cltr != proc->cltr);
     297
     298                // Figure out the current cpu and make sure it is valid
    282299                const int cpu = __kernel_getcpu();
    283300                /* paranoid */ verify(cpu >= 0);
     
    285302                /* paranoid */ verify(cpu * READYQ_SHARD_FACTOR < lanes.count);
    286303
    287                 const cpu_map_entry_t & map = cpu_info.llc_map[cpu];
      304                // Figure out where the thread ran last time and make sure that information is valid
     305                /* paranoid */ verify(thrd->preferred >= 0);
     306                /* paranoid */ verify(thrd->preferred < cpu_info.hthrd_count);
     307                /* paranoid */ verify(thrd->preferred * READYQ_SHARD_FACTOR < lanes.count);
     308                const int prf = thrd->preferred * READYQ_SHARD_FACTOR;
     309
     310                const cpu_map_entry_t & map;
     311                choose(hint) {
     312                        case UNPARK_LOCAL : &map = &cpu_info.llc_map[cpu];
     313                        case UNPARK_REMOTE: &map = &cpu_info.llc_map[prf];
     314                }
    288315                /* paranoid */ verify(map.start * READYQ_SHARD_FACTOR < lanes.count);
    289316                /* paranoid */ verify(map.self * READYQ_SHARD_FACTOR < lanes.count);
     
    296323                        if(unlikely(external)) { r = __tls_rand(); }
    297324                        else { r = proc->rdq.its++; }
    298                         i = start + (r % READYQ_SHARD_FACTOR);
     325                        choose(hint) {
     326                                case UNPARK_LOCAL : i = start + (r % READYQ_SHARD_FACTOR);
     327                                case UNPARK_REMOTE: i = prf   + (r % READYQ_SHARD_FACTOR);
     328                        }
    299329                        // If we can't lock it retry
    300330                } while( !__atomic_try_acquire( &lanes.data[i].lock ) );
     
    332362                processor * const proc = kernelTLS().this_processor;
    333363                const int start = map.self * READYQ_SHARD_FACTOR;
     364                const unsigned long long ctsc = rdtscl();
    334365
    335366                // Did we already have a help target
    336367                if(proc->rdq.target == -1u) {
    337                         // if We don't have a
    338                         unsigned long long min = ts(lanes.data[start]);
     368                        unsigned long long max = 0;
    339369                        for(i; READYQ_SHARD_FACTOR) {
    340                                 unsigned long long tsc = ts(lanes.data[start + i]);
    341                                 if(tsc < min) min = tsc;
    342                         }
    343                         proc->rdq.cutoff = min;
    344 
     370                                unsigned long long tsc = moving_average(ctsc - ts(lanes.data[start + i]), lanes.tscs[start + i].ma);
     371                                if(tsc > max) max = tsc;
     372                        }
      373                        proc->rdq.cutoff = (max + 2 * max) / 2;
    345374                        /* paranoid */ verify(lanes.count < 65536); // The following code assumes max 65536 cores.
    346375                        /* paranoid */ verify(map.count < 65536); // The following code assumes max 65536 cores.
    347376
    348                         if(0 == (__tls_rand() % 10_000)) {
     377                        if(0 == (__tls_rand() % 100)) {
    349378                                proc->rdq.target = __tls_rand() % lanes.count;
    350379                        } else {
     
    358387                }
    359388                else {
    360                         const unsigned long long bias = 0; //2_500_000_000;
    361                         const unsigned long long cutoff = proc->rdq.cutoff > bias ? proc->rdq.cutoff - bias : proc->rdq.cutoff;
     389                        unsigned long long max = 0;
     390                        for(i; READYQ_SHARD_FACTOR) {
     391                                unsigned long long tsc = moving_average(ctsc - ts(lanes.data[start + i]), lanes.tscs[start + i].ma);
     392                                if(tsc > max) max = tsc;
     393                        }
     394                        const unsigned long long cutoff = (max + 2 * max) / 2;
    362395                        {
    363396                                unsigned target = proc->rdq.target;
    364397                                proc->rdq.target = -1u;
    365                                 if(lanes.tscs[target].tv < cutoff && ts(lanes.data[target]) < cutoff) {
     398                                lanes.help[target / READYQ_SHARD_FACTOR].tri++;
     399                                if(moving_average(ctsc - lanes.tscs[target].tv, lanes.tscs[target].ma) > cutoff) {
    366400                                        thread$ * t = try_pop(cltr, target __STATS(, __tls_stats()->ready.pop.help));
    367401                                        proc->rdq.last = target;
    368402                                        if(t) return t;
     403                                        else proc->rdq.target = -1u;
    369404                                }
     405                                else proc->rdq.target = -1u;
    370406                        }
    371407
     
    428464        }
    429465
    430         __attribute__((hot)) void push(struct cluster * cltr, struct thread$ * thrd, bool push_local) with (cltr->ready_queue) {
     466        __attribute__((hot)) void push(struct cluster * cltr, struct thread$ * thrd, unpark_hint hint) with (cltr->ready_queue) {
    431467                __cfadbg_print_safe(ready_queue, "Kernel : Pushing %p on cluster %p\n", thrd, cltr);
    432468
    433                 const bool external = !push_local || (!kernelTLS().this_processor) || (cltr != kernelTLS().this_processor->cltr);
     469                const bool external = (hint != UNPARK_LOCAL) || (!kernelTLS().this_processor) || (cltr != kernelTLS().this_processor->cltr);
    434470                /* paranoid */ verify(external || kernelTLS().this_processor->rdq.id < lanes.count );
    435471
     
    515551#endif
    516552#if defined(USE_WORK_STEALING)
    517         __attribute__((hot)) void push(struct cluster * cltr, struct thread$ * thrd, bool push_local) with (cltr->ready_queue) {
     553        __attribute__((hot)) void push(struct cluster * cltr, struct thread$ * thrd, unpark_hint hint) with (cltr->ready_queue) {
    518554                __cfadbg_print_safe(ready_queue, "Kernel : Pushing %p on cluster %p\n", thrd, cltr);
    519555
    520556                // #define USE_PREFERRED
    521557                #if !defined(USE_PREFERRED)
    522                 const bool external = !push_local || (!kernelTLS().this_processor) || (cltr != kernelTLS().this_processor->cltr);
     558                const bool external = (hint != UNPARK_LOCAL) || (!kernelTLS().this_processor) || (cltr != kernelTLS().this_processor->cltr);
    523559                /* paranoid */ verify(external || kernelTLS().this_processor->rdq.id < lanes.count );
    524560                #else
    525561                        unsigned preferred = thrd->preferred;
    526                         const bool external = push_local || (!kernelTLS().this_processor) || preferred == -1u || thrd->curr_cluster != cltr;
     562                        const bool external = (hint != UNPARK_LOCAL) || (!kernelTLS().this_processor) || preferred == -1u || thrd->curr_cluster != cltr;
    527563                        /* paranoid */ verifyf(external || preferred < lanes.count, "Invalid preferred queue %u for %u lanes", preferred, lanes.count );
    528564
     
    645681        // Actually pop the list
    646682        struct thread$ * thrd;
     683        unsigned long long tsc_before = ts(lane);
    647684        unsigned long long tsv;
    648685        [thrd, tsv] = pop(lane);
     
    658695        __STATS( stats.success++; )
    659696
    660         #if defined(USE_WORK_STEALING)
     697        #if defined(USE_WORK_STEALING) || defined(USE_CPU_WORK_STEALING)
     698                unsigned long long now = rdtscl();
    661699                lanes.tscs[w].tv = tsv;
     700                lanes.tscs[w].ma = moving_average(now > tsc_before ? now - tsc_before : 0, lanes.tscs[w].ma);
    662701        #endif
    663702
    664         thrd->preferred = w;
     703        #if defined(USE_CPU_WORK_STEALING)
     704                thrd->preferred = w / READYQ_SHARD_FACTOR;
     705        #else
     706                thrd->preferred = w;
     707        #endif
    665708
    666709        // return the popped thread
     
    688731
    689732//-----------------------------------------------------------------------
      733// get preferred ready-queue for a new thread
     734unsigned ready_queue_new_preferred() {
     735        unsigned pref = 0;
     736        if(struct thread$ * thrd = publicTLS_get( this_thread )) {
     737                pref = thrd->preferred;
     738        }
     739        else {
     740                #if defined(USE_CPU_WORK_STEALING)
     741                        pref = __kernel_getcpu();
     742                #endif
     743        }
     744
     745        #if defined(USE_CPU_WORK_STEALING)
     746                /* paranoid */ verify(pref >= 0);
     747                /* paranoid */ verify(pref < cpu_info.hthrd_count);
     748        #endif
     749
     750        return pref;
     751}
     752
     753//-----------------------------------------------------------------------
    690754// Check that all the intrusive queues in the data structure are still consistent
    691755static void check( __ready_queue_t & q ) with (q) {
     
    915979        extern void __enable_interrupts_hard();
    916980
    917         void __kernel_raw_rseq_register  (void) {
     981        static void __kernel_raw_rseq_register  (void) {
    918982                /* paranoid */ verify( __cfaabi_rseq.cpu_id == RSEQ_CPU_ID_UNINITIALIZED );
    919983
     
    933997        }
    934998
    935         void __kernel_raw_rseq_unregister(void) {
     999        static void __kernel_raw_rseq_unregister(void) {
    9361000                /* paranoid */ verify( __cfaabi_rseq.cpu_id >= 0 );
    9371001
  • libcfa/src/concurrency/ready_subqueue.hfa

    r7e7a076 rf93c50a  
    9898
    9999        // Get the relevant nodes locally
    100         unsigned long long ts = this.anchor.ts;
    101100        thread$ * node = this.anchor.next;
    102101        this.anchor.next = node->link.next;
     
    116115        /* paranoid */ verify( node->link.ts   != 0  );
    117116        /* paranoid */ verify( this.anchor.ts  != 0  );
    118         return [node, ts];
     117        return [node, this.anchor.ts];
    119118}
    120119
  • libcfa/src/concurrency/stats.cfa

    r7e7a076 rf93c50a  
    4848                        stats->io.calls.completed   = 0;
    4949                        stats->io.calls.errors.busy = 0;
     50                        stats->io.ops.sockread      = 0;
     51                        stats->io.ops.epllread      = 0;
     52                        stats->io.ops.sockwrite     = 0;
     53                        stats->io.ops.epllwrite     = 0;
    5054                #endif
    5155
     
    104108                        tally_one( &cltr->io.calls.completed  , &proc->io.calls.completed   );
    105109                        tally_one( &cltr->io.calls.errors.busy, &proc->io.calls.errors.busy );
     110                        tally_one( &cltr->io.ops.sockread     , &proc->io.ops.sockread      );
     111                        tally_one( &cltr->io.ops.epllread     , &proc->io.ops.epllread      );
     112                        tally_one( &cltr->io.ops.sockwrite    , &proc->io.ops.sockwrite     );
     113                        tally_one( &cltr->io.ops.epllwrite    , &proc->io.ops.epllwrite     );
    106114                #endif
    107115        }
     
    179187                                     | " - cmp " | eng3(io.calls.drain) | "/" | eng3(io.calls.completed) | "(" | ws(3, 3, avgcomp) | "/drain)"
    180188                                     | " - " | eng3(io.calls.errors.busy) | " EBUSY";
     189                                sstr | "- ops blk: "
     190                                     |   " sk rd: " | eng3(io.ops.sockread)  | "epll: " | eng3(io.ops.epllread)
     191                                     |   " sk wr: " | eng3(io.ops.sockwrite) | "epll: " | eng3(io.ops.epllwrite);
    181192                                sstr | nl;
    182193                        }
  • libcfa/src/concurrency/stats.hfa

    r7e7a076 rf93c50a  
    102102                                volatile uint64_t sleeps;
    103103                        } poller;
     104                        struct {
     105                                volatile uint64_t sockread;
     106                                volatile uint64_t epllread;
     107                                volatile uint64_t sockwrite;
     108                                volatile uint64_t epllwrite;
     109                        } ops;
    104110                };
    105111        #endif
  • libcfa/src/concurrency/thread.cfa

    r7e7a076 rf93c50a  
    2525#include "invoke.h"
    2626
     27uint64_t thread_rand();
     28
    2729//-----------------------------------------------------------------------------
    2830// Thread ctors and dtors
     
    3436        preempted = __NO_PREEMPTION;
    3537        corctx_flag = false;
    36         disable_interrupts();
    37         last_cpu = __kernel_getcpu();
    38         enable_interrupts();
    3938        curr_cor = &self_cor;
    4039        self_mon.owner = &this;
     
    4443        link.next = 0p;
    4544        link.ts   = -1llu;
    46         preferred = -1u;
     45        preferred = ready_queue_new_preferred();
    4746        last_proc = 0p;
    4847        #if defined( __CFA_WITH_VERIFY__ )
     
    141140        /* paranoid */ verify( this_thrd->context.SP );
    142141
    143         schedule_thread$( this_thrd );
     142        schedule_thread$( this_thrd, UNPARK_LOCAL );
    144143        enable_interrupts();
    145144}
  • libcfa/src/containers/string_res.cfa

    r7e7a076 rf93c50a  
    2121
    2222
    23 #ifdef VbyteDebug
    24 extern HandleNode *HeaderPtr;
     23
     24
     25
     26
     27
     28
     29// DON'T COMMIT:
     30// #define VbyteDebug
     31
     32
     33
     34
     35
     36#ifdef VbyteDebug
     37HandleNode *HeaderPtr;
    2538#endif // VbyteDebug
    2639
     
    140153
    141154VbyteHeap HeapArea;
     155
     156VbyteHeap * DEBUG_string_heap = & HeapArea;
     157
     158size_t DEBUG_string_bytes_avail_until_gc( VbyteHeap * heap ) {
     159    return ((char*)heap->ExtVbyte) - heap->EndVbyte;
     160}
     161
     162const char * DEBUG_string_heap_start( VbyteHeap * heap ) {
     163    return heap->StartVbyte;
     164}
     165
    142166
    143167// Returns the size of the string in bytes
     
    225249void assign(string_res &this, const char* buffer, size_t bsize) {
    226250
     251    // traverse the incumbent share-edit set (SES) to recover the range of a base string to which `this` belongs
     252    string_res * shareEditSetStartPeer = & this;
     253    string_res * shareEditSetEndPeer = & this;
     254    for (string_res * editPeer = this.shareEditSet_next; editPeer != &this; editPeer = editPeer->shareEditSet_next) {
     255        if ( editPeer->Handle.s < shareEditSetStartPeer->Handle.s ) {
     256            shareEditSetStartPeer = editPeer;
     257        }
     258        if ( shareEditSetEndPeer->Handle.s + shareEditSetEndPeer->Handle.lnth < editPeer->Handle.s + editPeer->Handle.lnth) {
     259            shareEditSetEndPeer = editPeer;
     260        }
     261    }
     262
     263    // full string is from start of shareEditSetStartPeer thru end of shareEditSetEndPeer
     264    // `this` occurs in the middle of it, to be replaced
     265    // build up the new text in `pasting`
     266
     267    string_res pasting = {
     268        shareEditSetStartPeer->Handle.s,                   // start of SES
     269        this.Handle.s - shareEditSetStartPeer->Handle.s }; // length of SES, before this
     270    append( pasting,
     271        buffer,                                            // start of replacement for this
     272        bsize );                                           // length of replacement for this
     273    append( pasting,
     274        this.Handle.s + this.Handle.lnth,                  // start of SES after this
     275        shareEditSetEndPeer->Handle.s + shareEditSetEndPeer->Handle.lnth -
     276        (this.Handle.s + this.Handle.lnth) );              // length of SES, after this
     277
     278    // The above string building can trigger compaction.
     279    // The reference points (that are arguments of the string building) may move during that building.
     280    // From this point on, they are stable.
     281    // So now, capture their values for use in the overlap cases, below.
     282    // Do not factor these definitions with the arguments used above.
     283
     284    char * beforeBegin = shareEditSetStartPeer->Handle.s;
     285    size_t beforeLen = this.Handle.s - beforeBegin;
     286
    227287    char * afterBegin = this.Handle.s + this.Handle.lnth;
    228 
    229     char * shareEditSetStart = this.Handle.s;
    230     char * shareEditSetEnd = afterBegin;
    231     for (string_res * editPeer = this.shareEditSet_next; editPeer != &this; editPeer = editPeer->shareEditSet_next) {
    232         shareEditSetStart = min( shareEditSetStart, editPeer->Handle.s );
    233         shareEditSetEnd = max( shareEditSetStart, editPeer->Handle.s + editPeer->Handle.lnth);
    234     }
    235 
    236     char * beforeBegin = shareEditSetStart;
    237     size_t beforeLen = this.Handle.s - shareEditSetStart;
    238     size_t afterLen = shareEditSetEnd - afterBegin;
    239 
    240     string_res pasting = { beforeBegin, beforeLen };
    241     append(pasting, buffer, bsize);
    242     string_res after = { afterBegin, afterLen }; // juxtaposed with in-progress pasting
    243     pasting += after;                        // optimized case
     288    size_t afterLen = shareEditSetEndPeer->Handle.s + shareEditSetEndPeer->Handle.lnth - afterBegin;
    244289
    245290    size_t oldLnth = this.Handle.lnth;
     
    253298    for (string_res * p = this.shareEditSet_next; p != &this; p = p->shareEditSet_next) {
    254299        assert (p->Handle.s >= beforeBegin);
    255         if ( p->Handle.s < beforeBegin + beforeLen ) {
    256             // p starts before the edit
    257             if ( p->Handle.s + p->Handle.lnth < beforeBegin + beforeLen ) {
     300        if ( p->Handle.s >= afterBegin ) {
     301            assert ( p->Handle.s <= afterBegin + afterLen );
     302            assert ( p->Handle.s + p->Handle.lnth <= afterBegin + afterLen );
     303            // p starts after the edit
     304            // take start and end as end-anchored
     305            size_t startOffsetFromEnd = afterBegin + afterLen - p->Handle.s;
     306            p->Handle.s = limit - startOffsetFromEnd;
     307            // p->Handle.lnth unaffected
     308        } else if ( p->Handle.s <= beforeBegin + beforeLen ) {
     309            // p starts before, or at the start of, the edit
     310            if ( p->Handle.s + p->Handle.lnth <= beforeBegin + beforeLen ) {
    258311                // p ends before the edit
    259312                // take end as start-anchored too
    260313                // p->Handle.lnth unaffected
    261314            } else if ( p->Handle.s + p->Handle.lnth < afterBegin ) {
    262                 // p ends during the edit
     315                // p ends during the edit; p does not include the last character replaced
    263316                // clip end of p to end at start of edit
    264317                p->Handle.lnth = beforeLen - ( p->Handle.s - beforeBegin );
     
    274327            size_t startOffsetFromStart = p->Handle.s - beforeBegin;
    275328            p->Handle.s = pasting.Handle.s + startOffsetFromStart;
    276         } else if ( p->Handle.s < afterBegin ) {
     329        } else {
     330            assert ( p->Handle.s < afterBegin );
    277331            // p starts during the edit
    278332            assert( p->Handle.s + p->Handle.lnth >= beforeBegin + beforeLen );
    279333            if ( p->Handle.s + p->Handle.lnth < afterBegin ) {
    280                 // p ends during the edit
     334                // p ends during the edit; p does not include the last character replaced
    281335                // set p to empty string at start of edit
    282336                p->Handle.s = this.Handle.s;
    283337                p->Handle.lnth = 0;
    284338            } else {
    285                 // p ends after the edit
     339                // p includes the end of the edit
    286340                // clip start of p to start at end of edit
     341                int charsToClip = afterBegin - p->Handle.s;
    287342                p->Handle.s = this.Handle.s + this.Handle.lnth;
    288                 p->Handle.lnth += this.Handle.lnth;
    289                 p->Handle.lnth -= oldLnth;
     343                p->Handle.lnth -= charsToClip;
    290344            }
    291         } else {
    292             assert ( p->Handle.s <= afterBegin + afterLen );
    293             assert ( p->Handle.s + p->Handle.lnth <= afterBegin + afterLen );
    294             // p starts after the edit
    295             // take start and end as end-anchored
    296             size_t startOffsetFromEnd = afterBegin + afterLen - p->Handle.s;
    297             p->Handle.s = limit - startOffsetFromEnd;
    298             // p->Handle.lnth unaffected
    299345        }
    300346        MoveThisAfter( p->Handle, pasting.Handle );     // move substring handle to maintain sorted order by string position
     
    641687    } // if
    642688#ifdef VbyteDebug
    643     serr | "exit:MoveThisAfter";
    644689    {
    645690        serr | "HandleList:";
     
    650695                serr | n->s[i];
    651696            } // for
    652             serr | "\" flink:" | n->flink | " blink:" | n->blink;
     697            serr | "\" flink:" | n->flink | " blink:" | n->blink | nl;
    653698        } // for
    654699        serr | nlOn;
    655700    }
     701    serr | "exit:MoveThisAfter";
    656702#endif // VbyteDebug
    657703} // MoveThisAfter
     
    662708
    663709//######################### VbyteHeap #########################
    664 
    665 #ifdef VbyteDebug
    666 HandleNode *HeaderPtr = 0p;
    667 #endif // VbyteDebug
    668710
    669711// Move characters from one location in the byte-string area to another. The routine handles the following situations:
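The `assign` rewrite above begins by walking the share-edit set to find the full extent of the shared base string before rebuilding it. A minimal sketch of that first step, in Python for illustration only (the names and the `(start, length)` view representation are assumptions, not the CFA `string_res` API):

```python
# Hypothetical sketch: each view is a (start, length) pair into one shared
# base string. The span of the whole share-edit set runs from the smallest
# start to the largest end, as the new assign() computes via
# shareEditSetStartPeer / shareEditSetEndPeer before building `pasting`.
def share_edit_set_span(views):
    start = min(s for s, _ in views)          # earliest peer start
    end = max(s + l for s, l in views)        # latest peer end
    return start, end - start                 # (span start, span length)
```

The new text is then assembled from the prefix before the edited view, the replacement buffer, and the suffix after it, and every peer handle is remapped into the rebuilt span (start-anchored if it ends before the edit, end-anchored if it starts after it).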
  • libcfa/src/containers/string_res.hfa

    r7e7a076 rf93c50a  
    3636void ?{}( HandleNode &, VbyteHeap & );          // constructor for nodes in the handle list
    3737void ^?{}( HandleNode & );                      // destructor for handle nodes
     38
     39extern VbyteHeap * DEBUG_string_heap;
     40size_t DEBUG_string_bytes_avail_until_gc( VbyteHeap * heap );
     41const char * DEBUG_string_heap_start( VbyteHeap * heap );
    3842
    3943
  • libcfa/src/fstream.cfa

    r7e7a076 rf93c50a  
    1010// Created On       : Wed May 27 17:56:53 2015
    1111// Last Modified By : Peter A. Buhr
    12 // Last Modified On : Thu Jul 29 22:34:10 2021
    13 // Update Count     : 454
     12// Last Modified On : Tue Sep 21 21:51:38 2021
     13// Update Count     : 460
    1414//
    1515
     
    2828#define IO_MSG "I/O error: "
    2929
    30 void ?{}( ofstream & os, void * file ) {
    31         os.file$ = file;
    32         os.sepDefault$ = true;
    33         os.sepOnOff$ = false;
    34         os.nlOnOff$ = true;
    35         os.prt$ = false;
    36         os.sawNL$ = false;
    37         os.acquired$ = false;
     30void ?{}( ofstream & os, void * file ) with(os) {
     31        file$ = file;
     32        sepDefault$ = true;
     33        sepOnOff$ = false;
     34        nlOnOff$ = true;
     35        prt$ = false;
     36        sawNL$ = false;
     37        acquired$ = false;
    3838        sepSetCur$( os, sepGet( os ) );
    3939        sepSet( os, " " );
     
    124124void open( ofstream & os, const char name[], const char mode[] ) {
    125125        FILE * file = fopen( name, mode );
    126         // #ifdef __CFA_DEBUG__
    127126        if ( file == 0p ) {
    128127                throw (Open_Failure){ os };
    129128                // abort | IO_MSG "open output file \"" | name | "\"" | nl | strerror( errno );
    130129        } // if
    131         // #endif // __CFA_DEBUG__
    132         (os){ file };
     130        (os){ file };                                                                           // initialize
    133131} // open
    134132
     
    137135} // open
    138136
    139 void close( ofstream & os ) {
    140   if ( (FILE *)(os.file$) == 0p ) return;
    141   if ( (FILE *)(os.file$) == (FILE *)stdout || (FILE *)(os.file$) == (FILE *)stderr ) return;
    142 
    143         if ( fclose( (FILE *)(os.file$) ) == EOF ) {
     137void close( ofstream & os ) with(os) {
     138  if ( (FILE *)(file$) == 0p ) return;
     139  if ( (FILE *)(file$) == (FILE *)stdout || (FILE *)(file$) == (FILE *)stderr ) return;
     140
     141        if ( fclose( (FILE *)(file$) ) == EOF ) {
    144142                throw (Close_Failure){ os };
    145143                // abort | IO_MSG "close output" | nl | strerror( errno );
    146144        } // if
    147         os.file$ = 0p;
     145        file$ = 0p;
    148146} // close
    149147
     
    177175} // fmt
    178176
    179 inline void acquire( ofstream & os ) {
    180         lock( os.lock$ );
    181         if ( ! os.acquired$ ) os.acquired$ = true;
    182         else unlock( os.lock$ );
     177inline void acquire( ofstream & os ) with(os) {
     178        lock( lock$ );                                                                          // may increase recursive lock
     179        if ( ! acquired$ ) acquired$ = true;                            // not locked ?
     180        else unlock( lock$ );                                                           // unwind recursive lock at start
    183181} // acquire
    184182
     
    187185} // release
    188186
    189 inline void lock( ofstream & os ) { acquire( os ); }
    190 inline void unlock( ofstream & os ) { release( os ); }
    191 
    192 void ?{}( osacquire & acq, ofstream & os ) { &acq.os = &os; lock( os.lock$ ); }
     187void ?{}( osacquire & acq, ofstream & os ) { lock( os.lock$ ); &acq.os = &os; }
    193188void ^?{}( osacquire & acq ) { release( acq.os ); }
    194189
     
    222217
    223218// private
    224 void ?{}( ifstream & is, void * file ) {
    225         is.file$ = file;
    226         is.nlOnOff$ = false;
    227         is.acquired$ = false;
     219void ?{}( ifstream & is, void * file ) with(is) {
     220        file$ = file;
     221        nlOnOff$ = false;
     222        acquired$ = false;
    228223} // ?{}
    229224
     
    265260void open( ifstream & is, const char name[], const char mode[] ) {
    266261        FILE * file = fopen( name, mode );
    267         // #ifdef __CFA_DEBUG__
    268262        if ( file == 0p ) {
    269263                throw (Open_Failure){ is };
    270264                // abort | IO_MSG "open input file \"" | name | "\"" | nl | strerror( errno );
    271265        } // if
    272         // #endif // __CFA_DEBUG__
    273         is.file$ = file;
     266        (is){ file };                                                                           // initialize
    274267} // open
    275268
     
    278271} // open
    279272
    280 void close( ifstream & is ) {
    281   if ( (FILE *)(is.file$) == 0p ) return;
    282   if ( (FILE *)(is.file$) == (FILE *)stdin ) return;
    283 
    284         if ( fclose( (FILE *)(is.file$) ) == EOF ) {
     273void close( ifstream & is ) with(is) {
     274  if ( (FILE *)(file$) == 0p ) return;
     275  if ( (FILE *)(file$) == (FILE *)stdin ) return;
     276
     277        if ( fclose( (FILE *)(file$) ) == EOF ) {
    285278                throw (Close_Failure){ is };
    286279                // abort | IO_MSG "close input" | nl | strerror( errno );
    287280        } // if
    288         is.file$ = 0p;
     281        file$ = 0p;
    289282} // close
    290283
     
    327320} // fmt
    328321
    329 inline void acquire( ifstream & is ) {
    330         lock( is.lock$ );
    331         if ( ! is.acquired$ ) is.acquired$ = true;
    332         else unlock( is.lock$ );
     322inline void acquire( ifstream & is ) with(is) {
     323        lock( lock$ );                                                                          // may increase recursive lock
     324        if ( ! acquired$ ) acquired$ = true;                            // not locked ?
     325        else unlock( lock$ );                                                           // unwind recursive lock at start
    333326} // acquire
    334327
     
    337330} // release
    338331
    339 void ?{}( isacquire & acq, ifstream & is ) { &acq.is = &is; lock( is.lock$ ); }
     332void ?{}( isacquire & acq, ifstream & is ) { lock( is.lock$ ); &acq.is = &is; }
    340333void ^?{}( isacquire & acq ) { release( acq.is ); }
    341334
     
    350343
    351344// exception I/O constructors
    352 void ?{}( Open_Failure & this, ofstream & ostream ) {
    353         this.virtual_table = &Open_Failure_vt;
    354         this.ostream = &ostream;
    355         this.tag = 1;
    356 } // ?{}
    357 
    358 void ?{}( Open_Failure & this, ifstream & istream ) {
    359         this.virtual_table = &Open_Failure_vt;
    360         this.istream = &istream;
    361         this.tag = 0;
     345void ?{}( Open_Failure & ex, ofstream & ostream ) with(ex) {
     346        virtual_table = &Open_Failure_vt;
     347        ostream = &ostream;
     348        tag = 1;
     349} // ?{}
     350
     351void ?{}( Open_Failure & ex, ifstream & istream ) with(ex) {
     352        virtual_table = &Open_Failure_vt;
     353        istream = &istream;
     354        tag = 0;
    362355} // ?{}
    363356
     
    366359
    367360// exception I/O constructors
    368 void ?{}( Close_Failure & this, ofstream & ostream ) {
    369         this.virtual_table = &Close_Failure_vt;
    370         this.ostream = &ostream;
    371         this.tag = 1;
    372 } // ?{}
    373 
    374 void ?{}( Close_Failure & this, ifstream & istream ) {
    375         this.virtual_table = &Close_Failure_vt;
    376         this.istream = &istream;
    377         this.tag = 0;
     361void ?{}( Close_Failure & ex, ofstream & ostream ) with(ex) {
     362        virtual_table = &Close_Failure_vt;
     363        ostream = &ostream;
     364        tag = 1;
     365} // ?{}
     366
     367void ?{}( Close_Failure & ex, ifstream & istream ) with(ex) {
     368        virtual_table = &Close_Failure_vt;
     369        istream = &istream;
     370        tag = 0;
    378371} // ?{}
    379372
     
    382375
    383376// exception I/O constructors
    384 void ?{}( Write_Failure & this, ofstream & ostream ) {
    385         this.virtual_table = &Write_Failure_vt;
    386         this.ostream = &ostream;
    387         this.tag = 1;
    388 } // ?{}
    389 
    390 void ?{}( Write_Failure & this, ifstream & istream ) {
    391         this.virtual_table = &Write_Failure_vt;
    392         this.istream = &istream;
    393         this.tag = 0;
     377void ?{}( Write_Failure & ex, ofstream & ostream ) with(ex) {
     378        virtual_table = &Write_Failure_vt;
     379        ostream = &ostream;
     380        tag = 1;
     381} // ?{}
     382
     383void ?{}( Write_Failure & ex, ifstream & istream ) with(ex) {
     384        virtual_table = &Write_Failure_vt;
     385        istream = &istream;
     386        tag = 0;
    394387} // ?{}
    395388
     
    398391
    399392// exception I/O constructors
    400 void ?{}( Read_Failure & this, ofstream & ostream ) {
    401         this.virtual_table = &Read_Failure_vt;
    402         this.ostream = &ostream;
    403         this.tag = 1;
    404 } // ?{}
    405 
    406 void ?{}( Read_Failure & this, ifstream & istream ) {
    407         this.virtual_table = &Read_Failure_vt;
    408         this.istream = &istream;
    409         this.tag = 0;
     393void ?{}( Read_Failure & ex, ofstream & ostream ) with(ex) {
     394        virtual_table = &Read_Failure_vt;
     395        ostream = &ostream;
     396        tag = 1;
     397} // ?{}
     398
     399void ?{}( Read_Failure & ex, ifstream & istream ) with(ex) {
     400        virtual_table = &Read_Failure_vt;
     401        istream = &istream;
     402        tag = 0;
    410403} // ?{}
    411404
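The reworked `acquire`/`release` pair above implements a recursive-acquire idiom: the first acquire on a stream takes the lock and records ownership in `acquired$`; a nested acquire immediately unwinds the lock again so the original holder keeps it until `release`. A rough Python analogue, assuming a reentrant lock and illustrative names (this is a sketch of the idiom, not the CFA fstream API):

```python
import threading

class Stream:
    def __init__(self):
        self.lock = threading.RLock()   # reentrant, like the stream's lock$
        self.acquired = False           # like acquired$

def acquire(st):
    st.lock.acquire()                   # may be a nested (recursive) entry
    if not st.acquired:
        st.acquired = True              # first holder keeps the lock
    else:
        st.lock.release()               # unwind the recursive entry at once

def release(st):
    if st.acquired:
        st.acquired = False
        st.lock.release()               # drop the lock taken by first acquire
```

This is also why the `osacquire`/`isacquire` constructors were reordered to lock before storing the stream reference: the lock must be held before the guard object is considered constructed.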
  • tools/perf/process_stat_array.py

    r7e7a076 rf93c50a  
    11#!/usr/bin/python3
    22
    3 import argparse, os, sys, re
     3import argparse, json, math, os, sys, re
     4from PIL import Image
     5import numpy as np
    46
    57def dir_path(string):
     
    1113parser = argparse.ArgumentParser()
    1214parser.add_argument('--path', type=dir_path, default=".cfadata", help= 'paste path to biog.txt file')
     15parser.add_argument('--out', type=argparse.FileType('w'), default=sys.stdout)
    1316
    1417try :
     
    2326counters = {}
    2427
     28max_cpu = 0
     29min_cpu = 1000000
     30max_tsc = 0
     31min_tsc = 18446744073709551615
     32
    2533#open the files
    2634for filename in filenames:
     
    3139                with open(os.path.join(root, filename), 'r') as file:
    3240                        for line in file:
    33                                 # data = [int(x.strip()) for x in line.split(',')]
    34                                 data = [int(line.strip())]
    35                                 data = [me, *data]
     41                                raw = [int(x.strip()) for x in line.split(',')]
     42
     43                                ## from/to
     44                                high = (raw[1] >> 32)
     45                                low  = (raw[1] & 0xffffffff)
     46                                data = [me, raw[0], high, low]
     47                                max_cpu = max(max_cpu, high, low)
     48                                min_cpu = min(min_cpu, high, low)
     49
     50                                ## number
     51                                # high = (raw[1] >> 8)
     52                                # low  = (raw[1] & 0xff)
     53                                # data = [me, raw[0], high, low]
     54                                # max_cpu = max(max_cpu, low)
     55                                # min_cpu = min(min_cpu, low)
     56
     57
     58                                max_tsc = max(max_tsc, raw[0])
     59                                min_tsc = min(min_tsc, raw[0])
    3660                                merged.append(data)
    3761
    38         except:
     62        except Exception as e:
     63                print(e)
    3964                pass
    4065
     66
     67print({"max-cpu": max_cpu, "min-cpu": min_cpu, "max-tsc": max_tsc, "min-tsc": min_tsc})
    4168
    4269# Sort by timestamp (the second element)
     
    4774merged.sort(key=takeSecond)
    4875
    49 # for m in merged:
    50 #       print(m)
     76json.dump({"values":merged, "max-cpu": max_cpu, "min-cpu": min_cpu, "max-tsc": max_tsc, "min-tsc": min_tsc}, args.out)
    5177
    52 single = []
    53 curr = 0
     78# vmin = merged[ 0][1]
     79# vmax = float(merged[-1][1] - vmin) / 2500000000.0
     80# # print(vmax)
    5481
    55 # merge the data
    56 # for (me, time, value) in merged:
    57 for (me, value) in merged:
    58         # check now much this changes
    59         old = counters[me]
    60         change = value - old
    61         counters[me] = value
     82# bins = []
     83# for _ in range(0, int(math.ceil(vmax * 10))):
     84#       bins.append([0] * (32 * 32))
    6285
    63         # add change to the current
    64         curr = curr + change
    65         single.append( value )
     86# # print(len(bins))
     87# bins = np.array(bins)
    6688
    67         pass
     89# rejected = 0
     90# highest  = 0
    6891
    69 print(single)
     92# for x in merged:
     93#       b = int(float(x[1] - vmin) / 250000000.0)
     94#       from_ = x[2]
     95#       if from_ < 0 or from_ > 32:
     96#               rejected += 1
     97#               continue;
     98#       to_   = x[3]
     99#       if to_ < 0 or to_ > 32:
     100#               rejected += 1
     101#               continue;
     102#       idx = (to_ * 32) + from_
     103#       bins[b][idx] = bins[b][idx] + 1
     104#       highest = max(highest, bins[b][idx])
     105
     106# bins = np.array(map(lambda x: np.int8(x * 255.0 / float(highest)), bins))
     107
     108# print([highest, rejected])
     109# print(bins.shape)
     110
     111# im = Image.fromarray(bins)
     112# im.save('test.png')
     113
     114# vmax = merged[-1][1]
     115
     116# diff = float(vmax - vmin) / 2500000000.0
     117# print([vmin, vmax])
     118# print([vmax - vmin, diff])
     119
     120# print(len(merged))
     121
     122# for b in bins:
     123#       print(b)
     124
     125# single = []
     126# curr = 0
     127
     128# # merge the data
     129# # for (me, time, value) in merged:
     130# for (me, value) in merged:
     131#       # check now much this changes
     132#       old = counters[me]
     133#       change = value - old
     134#       counters[me] = value
     135
     136#       # add change to the current
     137#       curr = curr + change
     138#       single.append( value )
     139
     140#       pass
     141
     142# print(single)
    70143
    71144# single = sorted(single)[:len(single)-100]
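The decode step added to `process_stat_array.py` treats the second CSV field as a packed 64-bit value: the source CPU in the high 32 bits and the destination CPU in the low 32 bits. A small standalone helper showing the same bit manipulation (the function name is illustrative, not part of the script):

```python
# Unpack a 64-bit from/to pair as the "## from/to" branch of the script does:
# high 32 bits = source CPU, low 32 bits = destination CPU.
def unpack_from_to(packed):
    high = packed >> 32            # from-CPU
    low = packed & 0xffffffff      # to-CPU
    return high, low
```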