# Changeset 31a6f38

Timestamp:
Sep 21, 2020, 11:37:07 PM
Branches:
arm-eh, enum, forall-pointer-decay, jacob/cs343-translation, master, new-ast-unique-expr, pthread-emulation, qualifiedEnum
Parents:
9a05b81
Message:

complete proofread of Thierry's PhD Comp II document

File:
1 edited

\section{Introduction}

\subsection{\CFA and the \CFA concurrency package}
\CFA~\cite{Moss18} is a modern, polymorphic, non-object-oriented, concurrent, backwards-compatible extension of the C programming language.
It aims to add high-productivity features while maintaining the predictable performance of C.
As such, concurrency in \CFA~\cite{Delisle19} aims to offer simple and safe high-level tools while still allowing performant code.
\CFA concurrent code is written in the synchronous programming paradigm but uses \glspl{uthrd} to achieve the simplicity and maintainability of synchronous programming without sacrificing the efficiency of asynchronous programming.
As such, the \CFA \newterm{scheduler} is a preemptive user-level scheduler that maps \glspl{uthrd} onto \glspl{kthrd}.

\subsection{Scheduling}
\newterm{Scheduling} occurs when execution switches from one thread to another, where the second thread is implicitly chosen by the scheduler.
This scheduling is an indirect handoff, as opposed to generators and coroutines that explicitly switch to the next generator and coroutine respectively.
The cost of switching between two threads for an indirect handoff has two components:
\begin{enumerate}
\item
the cost of the context switch itself,
\item
and the cost of scheduling, \ie deciding which thread to run next among all the threads ready to run.
\end{enumerate}
The first cost is generally constant\footnote{Affecting the constant context-switch cost is whether it is done in one step, where the first thread schedules the second, or in two steps, where the first thread context switches to a third scheduler thread.}, while the scheduling cost can vary based on the system state.
Adding multiple \glspl{kthrd} does not fundamentally change the scheduler semantics or requirements, it simply adds new correctness requirements, \ie \newterm{linearizability}\footnote{Meaning however fast the CPU threads run, there is an equivalent sequential order that gives the same result.}, and a new dimension to performance: scalability, where scheduling cost also depends on contention.
The more threads switch, the more the administration cost of scheduling becomes noticeable.
It is therefore important to build a scheduler with the lowest possible cost and latency.
Another important consideration is \newterm{fairness}.
In principle, scheduling should give the illusion of perfect fairness, where all threads ready to run are running \emph{simultaneously}.
In practice, threads must wait in turn but there can be advantages to unfair scheduling, similar to the express cash register at a grocery store.
While the illusion of simultaneity is easier to reason about, it can break down if the scheduler allows too much unfairness.
Therefore, the scheduler should offer as much fairness as needed to guarantee eventual progress, but use unfairness to help performance.

\subsection{Research Goal}
The goal of this research is to produce a scheduler that is simple for programmers to understand and offers good general performance.
Here understandability does not refer to the API but to how much scheduling concerns programmers need to take into account when writing a \CFA concurrent package.
Therefore, the main consequence of this goal is:
\begin{quote}
The \CFA scheduler should be \emph{viable} for \emph{any} workload.
\end{quote}
For a general-purpose scheduler, it is impossible to produce an optimal algorithm as that requires knowledge of the future behaviour of threads.
As such, scheduling performance is generally either defined by a best-case scenario, \ie a workload to which the scheduler is tailored, or a worst-case scenario, \ie the scheduler behaves no worse than \emph{X}.
For this proposal, the performance is evaluated using the second approach to allow \CFA programmers to rely on scheduling performance.
Because there is no optimal scheduler, ultimately \CFA may allow programmers to write their own scheduler; but that is not the subject of this proposal, which considers only the default scheduler.
Beyond the scheduler itself, this proposal also involves:
\begin{enumerate}
\item
creating an abstraction layer over the operating system to handle kernel-threads spinning unnecessarily,
\item
scheduling blocking I/O operations,
\item
and writing sufficient library tools to allow developers to indirectly use the scheduler, either through tuning knobs in the default scheduler or replacing the default scheduler.
\end{enumerate}

\paragraph{Performance}
The performance of a scheduler can generally be measured in terms of scheduling cost, scalability and latency.
\newterm{Scheduling cost} is the cost to switch from one thread to another, as mentioned above.
For compute-bound concurrent applications with little context switching, the scheduling cost is negligible.
For applications with high context-switch rates, scheduling cost can begin to dominate the cost.
\newterm{Scalability} is the cost of adding multiple kernel threads.
It can increase the time for scheduling because of contention from the multiple threads accessing shared resources, \eg a single ready queue.
Finally, \newterm{tail latency} is service delay and relates to thread fairness.
Specifically, latency measures how long a thread waits to run once scheduled and is evaluated by the worst case.
The \CFA scheduler should offer good performance for all three metrics.

\newterm{Eventual progress} guarantees every scheduled thread is eventually run, \ie prevents starvation.
As a hard requirement, the \CFA scheduler must guarantee eventual progress, otherwise the above-mentioned illusion of simultaneous execution is broken and the scheduler becomes much more complex to reason about.
\newterm{Predictability} and \newterm{reliability} mean similar workloads achieve similar performance so programmer execution intuition is respected.
For example, a thread that yields aggressively should not run more often than other tasks.
While this is intuitive, it does not hold true for many work-stealing or feedback-based schedulers.
The \CFA scheduler must guarantee eventual progress, should be predictable, and offer reliable performance.

\paragraph{Efficiency}
Finally, efficient usage of CPU resources is also an important requirement and is discussed in depth towards the end of the proposal.
\newterm{Efficiency} means avoiding using CPU cycles when there are no threads to run (conserve energy/heat), and conversely, using as many available CPU cycles as possible when the workload can benefit from it.
Balancing these two states is where the complexity lies.
The \CFA scheduler should be efficient with respect to the underlying (shared) computer.

In a general operating-system context, schedulers face the following conditions:
\begin{enumerate}
\item
Threads live long enough for useful feedback information to be gathered.
\item
Threads belong to multiple users so fairness across users is largely invisible.
\end{enumerate}
Security concerns mean more precise and robust fairness metrics must be used to guarantee fairness across processes created by users as well as threads created within a process.
In the case of the \CFA scheduler, every thread runs in the same user space and is controlled by the same user.
Fairness across threads is therefore a given and it is then possible to safely ignore the possibility that threads are malevolent.
This approach allows for a much simpler fairness metric, and in this proposal, \emph{fairness} is defined as:
\begin{quote}
When multiple threads are cycling through the system, the total ordering of threads being scheduled, \ie pushed onto the ready queue, should not differ much from the total ordering of threads being executed, \ie popped from the ready queue.
\end{quote}
Since feedback is not necessarily feasible within the lifetime of all threads and a simple fairness metric can be used, the scheduling strategy proposed for the \CFA runtime does not use per-thread feedback.

Threads with equal priority are scheduled using a secondary strategy, often something simple like round robin or FIFO.
A consequence of priority is that, as long as there is a thread with a higher priority that desires to run, a thread with a lower priority does not run.
The potential for thread starvation dramatically increases programming complexity since starving threads and priority inversion (prioritizing a lower priority thread) can both lead to serious problems.
An important observation is that threads do not need to have explicit priorities for problems to occur.
Indeed, any system with multiple ready queues that attempts to exhaust one queue before accessing the other queues, essentially provides implicit priority, which can encounter starvation problems.
For example, a popular scheduling strategy that suffers from implicit priorities is work stealing.
\newterm{Work stealing} is generally presented as follows:
\begin{enumerate}
\item
Each processor has its own ready queue and runs threads from it first.
\item
If a processor's ready queue is empty, attempt to run threads from some other processor's ready queue.
\end{enumerate}
In a loaded system\footnote{A \newterm{loaded system} is a system where threads are being run at the same rate they are scheduled.}, if a thread does not yield, block, or preempt for an extended period of time, threads on the same processor's list starve if no other processors exhaust their list.
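To make the implicit priority concrete, the following minimal C sketch (mine, not code from any existing runtime; all names are hypothetical and synchronization is omitted) shows a work-stealing pop where the local queue is always drained first:
\begin{verbatim}
// Hypothetical sketch of a work-stealing pop; synchronization omitted.
#include <stddef.h>

#define NPROCS 8
typedef struct thread thread;
typedef struct {
    thread * items[256];
    unsigned head, tail;                  // FIFO indices
} queue;

queue queues[NPROCS];                     // one ready queue per processor

static thread * try_pop( queue * q ) {
    if ( q->head == q->tail ) return NULL;        // queue empty
    return q->items[ q->head++ % 256 ];
}

thread * next_thread( int me ) {
    thread * t;
    // 1. always drain the local queue first, ...
    if ( ( t = try_pop( &queues[me] ) ) ) return t;
    // 2. ... and only steal once it is empty, so a never-empty local
    //    queue implicitly starves every other processor's threads.
    for ( int p = 0; p < NPROCS; p += 1 )
        if ( p != me && ( t = try_pop( &queues[p] ) ) ) return t;
    return NULL;
}
\end{verbatim}
Because step~1 always wins while the local queue is non-empty, threads on other queues are implicitly lower priority, which is exactly the starvation scenario described above.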
Since priorities can be complex for programmers to incorporate into their execution intuition, the \CFA scheduling strategy does not provide explicit priorities and attempts to eliminate implicit priorities.

\subsection{Schedulers without feedback or priorities}
This proposal conjectures that it is possible to construct a default scheduler for the \CFA runtime that offers good scalability and a simple fairness guarantee that is easy for programmers to reason about.
The simplest fairness guarantee is FIFO ordering, \ie threads scheduled first run first.
However, enforcing FIFO ordering generally conflicts with scalability across multiple processors because of the additional synchronization.
Thankfully, strict FIFO is not needed for sufficient fairness.
Since concurrency is inherently non-deterministic, fairness concerns in scheduling are only a problem if a thread repeatedly runs before another thread can run.
Some relaxation is possible because non-determinism means programmers already handle ordering problems to produce correct code and hence rely on weak guarantees, \eg that a thread \emph{eventually} runs.
Since some reordering does not break correctness, the FIFO fairness guarantee can be significantly relaxed without causing problems.
For this proposal, the target guarantee is that the \CFA scheduler provides \emph{probable} FIFO ordering, which allows reordering but makes it improbable that threads are reordered far from their position in total ordering.
The \CFA scheduler fairness is defined as follows:
\begin{quote}
Given two threads $X$ and $Y$, the odds that thread $X$ runs $N$ times \emph{after} thread $Y$ is scheduled but \emph{before} it is run, decreases exponentially with regard to $N$.
\end{quote}
While this is not a bounded guarantee, the probability that unfairness persists for long periods of time decreases exponentially, making persistent unfairness virtually impossible.

The described queue uses an array of underlying strictly FIFO queues as shown in Figure~\ref{fig:base}\footnote{For this section, the number of underlying queues is assumed to be constant. Section~\ref{sec:resize} discusses resizing the array.}.
Pushing new data is done by selecting one of the underlying queues at random, recording a timestamp for the operation, and pushing to the selected queue.
Popping is done by selecting two queues at random and popping from the queue with the oldest timestamp.
A higher number of underlying queues leads to less contention on each queue and therefore better performance.
In a loaded system, it is highly likely the queues are non-empty, \ie several tasks are on each of the underlying queues.
For this case, selecting a queue at random to pop from is highly likely to yield a queue with available items.
In Figure~\ref{fig:base}, ignoring the ellipsis, the chances of getting an empty queue is $2/7$ per pick, meaning two random picks yield an item approximately 9 times out of 10, \ie $1 - (2/7)^2 = 45/49 \approx 0.92$.
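The push and pop operations just described can be sketched in a few lines of C.
This is an illustrative sketch under stated assumptions, not the prototype's code: \texttt{fifo} is assumed to be a strictly FIFO intrusive list with its own lock, and \texttt{rdtscl} a cycle-counter read.
\begin{verbatim}
// Hypothetical sketch of the relaxed FIFO push/pop operations.
#include <stdlib.h>

#define NQUEUES 32
typedef struct thread thread;
typedef struct fifo fifo;                 // strictly FIFO intrusive list
extern fifo queues[NQUEUES];              // the underlying queues
extern unsigned long long rdtscl( void ); // cycle-counter timestamp
extern void fifo_push( fifo *, thread *, unsigned long long ts );
extern thread * fifo_pop( fifo * );       // NULL when empty
extern unsigned long long fifo_head_ts( fifo * ); // ts of head, max if empty

void push( thread * t ) {
    fifo * q = &queues[ rand() % NQUEUES ];       // random underlying queue
    fifo_push( q, t, rdtscl() );                  // node carries its timestamp
}

thread * pop( void ) {
    for ( ;; ) {                          // ready queue known non-empty: retry
        fifo * a = &queues[ rand() % NQUEUES ];
        fifo * b = &queues[ rand() % NQUEUES ];
        // pop from the queue whose head was pushed the longest ago
        fifo * oldest = fifo_head_ts( a ) <= fifo_head_ts( b ) ? a : b;
        thread * t = fifo_pop( oldest );
        if ( t ) return t;                // an empty pick costs one more try
    }
}
\end{verbatim}
The unconditional retry loop is why, as discussed next, the pop operation degrades when most of the underlying queues are empty.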
\begin{figure}
\begin{center}
\input{base.pstex_t}
\end{center}
\caption{Loaded relaxed FIFO list based on an array of strictly FIFO lists. A timestamp appears in each node and array cell.}
\label{fig:base}
\end{figure}

\begin{figure}
\begin{center}
\input{empty.pstex_t}
\end{center}
\caption{Unloaded relaxed FIFO list where the array contains many empty cells.}
\label{fig:empty}
\end{figure}

In an unloaded system, several of the queues are empty, so selecting a random queue for popping is less likely to yield a successful selection and more attempts are needed, resulting in a performance degradation.
Figure~\ref{fig:empty} shows an example with fewer elements, where the chances of getting an empty queue is $5/7$ per pick, meaning two random picks yield an item only half the time, \ie $1 - (5/7)^2 = 24/49 \approx 0.49$.
Since the ready queue is not empty, the pop operation \emph{must} find an element before returning and therefore must retry.

Performance can be improved in Table~\ref{tab:perfcases} case~D by adding information to help processors find which inner queues are used.
This addition aims to avoid the cost of retrying the pop operation but does not affect contention on the underlying queues and can incur some management cost for both push and pop operations.
The approach used to encode this information can vary in density and be either global or local.
With a multi-word bitmask, this maximum limit can be increased arbitrarily, but it is not possible to check if the queue is empty by reading the bitmask atomically.
Finally, a dense bitmap, either single or multi-word, causes additional problems in Table~\ref{tab:perfcases} case~C, because many processors are continuously scanning the bitmask to find the few available threads.
This increased contention on the bitmask(s) reduces performance because of cache misses after updates and the bitmask is updated more frequently by the scanning processors racing to read and/or update that information.
This increased update frequency means the information in the bitmask is more often stale before a processor can use it to find an item, \ie the mask read says there are available user threads but none are on the queue.
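As a concrete illustration of the dense global scheme, the following sketch (again mine, with hypothetical names) maintains a single-word atomic bitmask, whose width is what caps the number of underlying queues:
\begin{verbatim}
// Hypothetical sketch: single-word dense bitmask over the queue array.
#include <stdatomic.h>

_Atomic unsigned long mask;               // bit i set: queue i has items

void mark_nonempty( int i ) {             // called after pushing to queue i
    atomic_fetch_or( &mask, 1UL << i );
}

void mark_empty( int i ) {                // called after queue i pops dry
    atomic_fetch_and( &mask, ~(1UL << i) );
}

int find_nonempty( void ) {               // pick a candidate queue to pop
    unsigned long m = atomic_load( &mask );     // may be stale immediately
    return m != 0 ? __builtin_ctzl( m ) : -1;   // lowest set bit (GCC builtin)
}
\end{verbatim}
The load in \texttt{find\_nonempty} is a single atomic read, but in Table~\ref{tab:perfcases} case~C every processor hammers this one cacheline and the returned index may already be stale.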
\begin{figure}
\begin{center}
{\resizebox{0.73\textwidth}{!}{\input{emptybit}}}
\end{center}
\vspace*{-5pt}
\caption{Unloaded queue with added bitmask to indicate which array cells have items.}
\label{fig:emptybit}
\begin{center}
{\resizebox{0.73\textwidth}{!}{\input{emptytree}}}
\end{center}
\vspace*{-5pt}
\caption{Unloaded queue with added binary search tree to indicate which array cells have items.}
\label{fig:emptytree}
\begin{center}
{\resizebox{0.9\textwidth}{!}{\input{emptytls}}}
\end{center}
\vspace*{-5pt}
\caption{Unloaded queue with added per-processor bitmask to indicate which array cells have items.}
\label{fig:emptytls}
\end{figure}

Figure~\ref{fig:emptytree} shows an approach using a hierarchical tree data-structure to reduce contention and has been shown to work in similar cases~\cite{ellen2007snzi}\footnote{This particular paper seems to be patented in the US. How does that affect \CFA? Can I use it in my work?}.
However, this approach may lead to poorer performance in Table~\ref{tab:perfcases} case~B due to the inherent pointer chasing cost and already low contention cost in that case.

Figure~\ref{fig:emptytls} shows an approach using dense information, similar to the bitmap, but where each thread keeps its own independent copy.
While this approach can offer good scalability \emph{and} low latency, the liveliness of the information can become a problem.
In the simple cases, local copies of which underlying queues are empty can become stale and end up not being useful for the pop operation.
A more serious problem is that reliable information is necessary for some parts of this algorithm to be correct.
As mentioned in this section, processors must know \emph{reliably} whether the list is empty or not to decide if they can return \texttt{NULL} or if they must keep looking during a pop operation.
Section~\ref{sec:sleep} discusses another case where reliable information is required for the algorithm to be correct.
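The dense local variant can be sketched similarly; here each processor keeps a thread-local copy of the mask as a hint (my sketch, hypothetical names):
\begin{verbatim}
// Hypothetical sketch: per-processor (thread-local) copy of the mask,
// read without contention but only ever a hint, never authoritative.
#include <stdatomic.h>

extern _Atomic unsigned long mask;        // shared, authoritative bitmask
static _Thread_local unsigned long local_mask;  // cheap, possibly stale

int hint_nonempty( void ) {
    if ( local_mask == 0 )
        local_mask = atomic_load( &mask );      // refresh hint from shared word
    if ( local_mask == 0 ) return -1;     // *appears* empty: must still verify
    int i = __builtin_ctzl( local_mask ); // lowest set bit (GCC builtin)
    local_mask &= local_mask - 1;         // consume the hint bit locally
    return i;                             // queue i may already be empty
}
\end{verbatim}
The fast path never touches the shared cacheline, which is the scalability win, but the final comment is exactly the staleness problem: the hint cannot be trusted to decide that the whole ready queue is empty.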
There is a fundamental tradeoff among these approaches.
Dense global information about empty underlying queues helps zero-contention cases at the cost of the high-contention case.
Sparse global information helps high-contention cases but increases latency in zero-contention cases to read and ``aggregate'' the information\footnote{Hierarchical structures, \eg a binary search tree, effectively aggregate information but follow pointer chains, learning information at each node. Similarly, other sparse schemes need to read multiple cachelines to acquire all the information needed.}.
Finally, dense local information has both the advantages of low latency in zero-contention cases and scalability in high-contention cases.
However, the information can become stale making it difficult to use to ensure correctness.
The fact that these solutions have these fundamental limits suggests to me a better solution that attempts to combine these properties in an interesting way.
Also, the lock discussed in Section~\ref{sec:resize} allows for solutions that adapt to the number of processors, which could also prove useful.

How much scalability is actually needed is highly debatable.
\emph{libfibre}~\cite{libfibre} has compared favourably to other schedulers in webserver tests~\cite{Karsten20} and uses a single atomic counter in its scheduling algorithm similarly to the proposed bitmask.
As such, the single atomic instruction on a shared cacheline may be sufficiently performant.

I have built a prototype of this ready queue in the shape of a data queue, \ie nodes on the queue are structures with a single $int$ representing a thread and intrusive data fields.
Using this prototype, preliminary performance experiments confirm the expected performance in Table~\ref{tab:perfcases}.
However, these experiments only offer a hint at the actual performance of the scheduler since threads are involved in more complex operations, \eg threads are not independent of each other: when a thread blocks some other thread must intervene to wake it.
I have also integrated this prototype into the \CFA runtime, but have not yet created performance experiments to compare results, as creating one-to-one comparisons between the prototype and the \CFA runtime will be complex.
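For illustration, a prototype node might look like the following; this is an assumed layout, not the actual prototype structure:
\begin{verbatim}
// Hypothetical layout of a prototype queue node: the link and timestamp
// live inside the node itself (intrusive), so push/pop allocate nothing,
// and the payload is a single int standing in for a thread.
typedef struct node {
    struct node * volatile next;   // intrusive FIFO link
    unsigned long long ts;         // push timestamp
    int thread_id;                 // stand-in for a real thread
} node;
\end{verbatim}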
Threads on a cluster are always scheduled on one of the processors of the cluster.
Currently, the runtime handles dynamically adding and removing processors from clusters at any time.
Since this feature is part of the existing design, the proposed scheduler must also support this behaviour.
However, dynamically resizing a cluster is considered a rare event associated with setup, tear down and major configuration changes.
This assumption is made both in the design of the proposed scheduler as well as in the original design of the \CFA runtime system.
As such, the proposed scheduler must honour the correctness of this behaviour but does not have any performance objectives with regard to resizing a cluster.
That is, the time to add or remove processors and how much this disrupts the performance of other threads is considered a secondary concern since it should be amortized over long periods of time.
However, as mentioned in Section~\ref{sec:queue}, contention on the underlying queues can have a direct impact on performance.
The number of underlying queues must therefore be adjusted as the number of processors grows or shrinks.

There are possible alternatives to the reader-writer lock solution.
This problem is effectively a memory reclamation problem and as such there is a large body of research on the subject~\cite{brown2015reclaiming, michael2004hazard}.
However, the reader-writer lock solution is simple and can be leveraged to solve other problems (\eg processor ordering and memory reclamation of threads), which makes it an attractive solution.
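As an illustration of how a reader-writer lock can guard resizing (my sketch, not the runtime's actual code), scheduling operations acquire the lock for reading and the rare resize acquires it for writing:
\begin{verbatim}
// Hypothetical sketch: reader-writer lock protecting the queue array.
#include <pthread.h>
#include <stdlib.h>

typedef struct thread thread;
typedef struct fifo fifo;                 // an underlying FIFO queue

pthread_rwlock_t qlock = PTHREAD_RWLOCK_INITIALIZER;
fifo ** queue_array;                      // pointers to underlying queues
int nqueues;

extern void push( thread * t );           // operates on queue_array/nqueues

void schedule( thread * t ) {
    pthread_rwlock_rdlock( &qlock );      // many processors in parallel
    push( t );
    pthread_rwlock_unlock( &qlock );
}

void resize( int new_n ) {                // rare: processors added/removed
    pthread_rwlock_wrlock( &qlock );      // excludes all push/pop operations
    queue_array = realloc( queue_array, new_n * sizeof(fifo *) );
    nqueues = new_n;
    pthread_rwlock_unlock( &qlock );
}
\end{verbatim}
Since resizing holds the write lock, no processor can hold a stale pointer into the old array, which is the memory-reclamation property mentioned above.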
Individual processors always finish scheduling user threads before looking for new work, which means that the last processor to go to sleep cannot miss threads scheduled from inside the cluster (if they do, that demonstrates the ready queue is not linearizable).
However, this guarantee does not hold if threads are scheduled from outside the cluster, either due to an external event like timers and I/O, or due to a user (or kernel) thread migrating from a different cluster.
In this case, missed signals can lead to the cluster deadlocking\footnote{Clusters should only deadlock in cases where a \CFA programmer \emph{actually} writes \CFA code that leads to a deadlock.}.
Therefore, it is important that the scheduling of threads includes a mechanism where signals \emph{cannot} be missed.
For performance reasons, it can be advantageous to have a secondary mechanism that allows signals to be missed in cases where it cannot lead to a deadlock.
To be safe, this process must include a ``handshake'' where it is guaranteed that either:
\begin{enumerate}
\item
the sleeping processor notices that a user thread is scheduled after the sleeping processor signalled its intent to block, or
\item
code scheduling threads sees the intent to sleep before scheduling and is able to wake up the processor.
\end{enumerate}
This matter is complicated by the fact that pthreads and Linux offer few tools to implement this solution and no guarantee of ordering of threads waking up for most of these tools.
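The two arms of this handshake map directly onto code; the following C11-atomics sketch (hypothetical names, one flag instead of one per processor) shows the idea:
\begin{verbatim}
// Hypothetical sketch of the sleep "handshake": publish intent, re-check
// the ready queue, and only then block; schedulers check after pushing.
#include <stdatomic.h>
#include <stdbool.h>

extern _Atomic bool intent_to_sleep;     // per-processor in practice
extern bool ready_queue_nonempty( void );
extern void block_processor( void );     // e.g., futex wait / sem_wait
extern void wake_processor( void );      // e.g., futex wake / sem_post

void go_to_sleep( void ) {
    atomic_store( &intent_to_sleep, true );   // 1. publish intent
    if ( ready_queue_nonempty() ) {           // 2. re-check after publishing
        atomic_store( &intent_to_sleep, false );
        return;                               // work arrived: do not sleep
    }
    block_processor();                        // 3. safe to block
}

void after_push( void ) {                     // run by the scheduling side
    if ( atomic_load( &intent_to_sleep ) )    // saw the intent to sleep,
        wake_processor();                     // so the wake-up is not missed
}
\end{verbatim}
Sequentially consistent atomics order the store of the intent before the re-check of the queue, so one of the two enumerated cases always holds.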
Another important issue is avoiding kernel threads sleeping and waking frequently because there is a significant operating-system cost.
This scenario happens when a program oscillates between high and low activity, needing most and then few processors.
A possible partial solution is to order the processors so that the one which most recently went to sleep is woken up.
This allows other sleeping processors to reach deeper sleep states (when these are available) while keeping ``hot'' processors warmer.
Processors that are unnecessarily unblocked lead to unnecessary contention, CPU usage, and power consumption, while too many sleeping processors can lead to suboptimal throughput.
Furthermore, transitions from sleeping to awake and vice versa also add unnecessary latency.
There is already a wealth of research on the subject~\cite{schillings1996engineering, wiki:thunderherd} and I may use an existing approach for the idle-sleep heuristic in this project, \eg~\cite{Karsten20}.

\subsection{Asynchronous I/O}
Asynchronous I/O requires three components:
\begin{enumerate}
\item
an OS asynchronous abstraction,
\item
an event-engine to (de)multiplex the operations,
\item
and a synchronous interface for users.
\end{enumerate}
None of these components currently exist in \CFA and I will need to build all three for this project.

\paragraph{OS Asynchronous Abstraction}
One fundamental part for converting blocking I/O operations into non-blocking ones is having an underlying asynchronous I/O interface to direct the I/O operations.
While there exist many different APIs for asynchronous I/O, it is not part of this proposal to create a novel API.
It is sufficient to make one work in the complex context of the \CFA runtime.
\uC uses $select$~\cite{select} as its interface, which handles ttys, pipes and sockets, but not disk.
$select$ entails significant complexity and is being replaced in UNIX operating systems, which makes it a less interesting alternative.
Another popular interface is $epoll$~\cite{epoll}, which is supposed to be cheaper than $select$.
However, $epoll$ also does not handle the file system and anecdotal evidence suggests it has problems with Linux pipes and ttys.
A popular cross-platform alternative is $libuv$~\cite{libuv}, which offers asynchronous sockets and asynchronous file system operations (among other features).
However, as a full-featured library it includes much more than I need and could conflict with other features of \CFA unless significant effort is made to merge them together.
A very recent alternative that I am investigating is $io_uring$~\cite{io_uring}.
It claims to address some of the issues with $epoll$ and my early investigation suggests that the claim is accurate.
$io_uring$ uses a much more general approach where system calls are registered to a queue and later executed by the kernel, rather than relying on system calls to subsequently wait for changes on file descriptors or return an error.
I believe this approach allows for fewer problems, \eg the manpage for $open$~\cite{open} states:
\begin{quote}
Note that [the $O_NONBLOCK$ flag] has no effect for regular files and block devices;
Since $O_NONBLOCK$ semantics might eventually be implemented, applications should not depend upon blocking behaviour when specifying this flag for regular files and block devices.
\end{quote}
This makes approaches based on $select$/$epoll$ less reliable since they may not work for every file descriptor.
For this reason, I plan to use $io_uring$ as the OS abstraction for the \CFA runtime unless further work encounters a fatal problem.
However, only a small subset of the features are available in Ubuntu as of April 2020~\cite{wiki:ubuntu-linux}, which will limit performance comparisons.
I do not believe this will affect the comparison result.
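For illustration, a minimal read using the $liburing$ helper library looks as follows; this is a generic example of the queue-based model, not the \CFA engine:
\begin{verbatim}
// Minimal liburing example (assumes liburing is installed): submit one
// read on the submission queue, then reap its completion.  An event
// engine would batch many such submissions for blocked user threads.
#include <liburing.h>
#include <stdio.h>
#include <fcntl.h>

int main( void ) {
    struct io_uring ring;
    io_uring_queue_init( 8, &ring, 0 );           // queues of depth 8

    int fd = open( "/etc/hostname", O_RDONLY );
    char buf[256];

    struct io_uring_sqe * sqe = io_uring_get_sqe( &ring );
    io_uring_prep_read( sqe, fd, buf, sizeof(buf), 0 ); // register the syscall
    io_uring_submit( &ring );                     // hand the batch to the kernel

    struct io_uring_cqe * cqe;
    io_uring_wait_cqe( &ring, &cqe );             // block until a completion
    printf( "read %d bytes\n", cqe->res );        // res is the syscall result
    io_uring_cqe_seen( &ring, cqe );              // mark completion consumed

    io_uring_queue_exit( &ring );
    return 0;
}
\end{verbatim}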
\paragraph{Event Engine}
Above the OS asynchronous abstraction is the event engine.
This engine is responsible for multiplexing (batching) the synchronous I/O requests into asynchronous I/O requests and demultiplexing the results to appropriate blocked user threads.
This step can be straightforward for simple cases, but becomes quite complex when there are thousands of user threads performing both reads and writes, possibly on overlapping file descriptors.

The interface can be novel but it is preferable to match the existing POSIX interface when possible to be compatible with existing code.
Matching allows C programs written using this interface to be transparently converted to \CFA with minimal effort.
Where new functionality is needed, I will add novel interface extensions to fill gaps and provide advanced features.

\section{Discussion}
I believe that runtime systems and scheduling are still open topics.
Many ``state of the art'' production frameworks still use single-threaded event loops because of performance considerations, \eg~\cite{nginx-design}, and, to my knowledge, no widely available system language offers modern threading facilities.
I believe the proposed work offers a novel runtime and scheduling package, where existing work only offers fragments that users must assemble themselves when possible.