Changeset 0fec6c1 for doc/theses/thierry_delisle_PhD
- Timestamp: Sep 5, 2022, 9:41:11 AM
- Branches: ADT, ast-experimental, master, pthread-emulation
- Children: 1fcbce7, 83cb754
- Parents: 4dba1da
- Location: doc/theses/thierry_delisle_PhD/thesis/text
- Files: 2 edited
Legend:
- Unmodified lines are shown as "old → new:" with their line numbers in r4dba1da and r0fec6c1
- Removed lines are shown as "- old:" with their line number in r4dba1da
- Added lines are shown as "+ new:" with their line number in r0fec6c1
- "… …" marks a run of skipped unchanged lines
doc/theses/thierry_delisle_PhD/thesis/text/conclusion.tex
r4dba1da → r0fec6c1

5 → 5:  Because I am the main developer for both components of this project, there is strong continuity across the design and implementation.
6 → 6:  This continuity provides a consistent approach to advanced control-flow and concurrency, with easier development, management and maintenance of the runtime in the future.
+ 7:  I believed my Masters work would provide the background to make the Ph.D work reasonably straightforward.
+ 8:  However, I discovered two significant challenges.
7 → 9:
- 8:  I believed my Masters work would provide the background to make the Ph.D work reasonably straightforward.
- 9:  However, in doing so I discovered two expected challenges.
- 10:  First, while modern symmetric multiprocessing CPU have significant performance penalties for communicating across cores.
- 11:  This makes implementing fair schedulers notably more difficult, since fairness generally requires \procs to be aware of each other's progress.
- 12:  This challenge is made even harder when comparing against MQMS schedulers (see Section\ref{sched}) which have very little inter-\proc communication.
- 13:  This is particularly true of state-of-the-art work-stealing schedulers, which can have virtually no inter-\proc communication in some common workloads.
- 14:  This means that when adding fairness to work-stealing schedulers, extreme care must be taken to hide the communication costs so performance does not suffer.
- 15:  Second, the kernel locking, threading, and I/O in the Linux operating system offers very little flexibility of use.
+ 10:  First, modern symmetric multiprocessing CPU have significant performance penalties for communication (often cache related).
+ 11:  A SQMS scheduler (see Section~\ref{sched}), with its \proc-shared ready-queue, has perfect load-balancing but poor affinity resulting in high communication across \procs.
+ 12:  A MQMS scheduler, with its \proc-specific ready-queues, has poor load-balancing but perfect affinity often resulting in significantly reduced communication.
+ 13:  However, implementing fairness for an MQMS scheduler is difficult, since fairness requires \procs to be aware of each other's ready-queue progress, \ie communicated knowledge.
+ 14:  % This challenge is made harder when comparing against MQMS schedulers (see Section\ref{sched}) which have very little inter-\proc communication.
+ 15:  For balanced workloads with little or no data sharing (embarrassingly parallel), an MQMS scheduler is near optimal, \eg state-of-the-art work-stealing schedulers.
+ 16:  For these kinds of fair workloads, adding fairness must be low-cost to hide the communication costs needed for global ready-queue progress or performance suffers.
+ 17:
+ 18:  Second, the kernel locking, threading, and I/O in the Linux operating system offers very little flexibility, and are not designed to facilitate user-level threading.
16 → 19:  There are multiple concurrency aspects in Linux that require carefully following a strict procedure in order to achieve acceptable performance.
17 → 20:  To be fair, many of these concurrency aspects were designed 30-40 years ago, when there were few multi-processor computers and concurrency knowledge was just developing.
… …
21 → 24:  The positive is that @io_uring@ supports the panoply of I/O mechanisms in Linux;
22 → 25:  hence, the \CFA runtime uses one I/O mechanism to provide non-blocking I/O, rather than using @select@ to handle TTY I/O, @epoll@ to handle network I/O, and managing a thread pool to handle disk I/O.
- 23:  Merging all these different I/O mechanisms into a coherent scheduling implementation would require a much more work than what is present in this thesis, as well as detailed knowledge of the I/O mechanisms in Linux.
+ 26:  Merging all these different I/O mechanisms into a coherent scheduling implementation would require much more work than what is present in this thesis, as well as a detailed knowledge of multiple I/O mechanisms.
24 → 27:  The negative is that @io_uring@ is new and developing.
25 → 28:  As a result, there is limited documentation, few places to find usage examples, and multiple errors that required workarounds.
+ 29:
26 → 30:  Given what I now know about @io_uring@, I would say it is insufficiently coupled with the Linux kernel to properly handle non-blocking I/O.
- 27:  It does not seem to reach deep into the Kernel's handling of \io, and as such it must contend with the same realities that users of epoll must contend with.
+ 31:  It does not seem to reach deep into the kernel's handling of \io, and as such it must contend with the same realities that users of @epoll@ must contend with.
28 → 32:  Specifically, in cases where @O_NONBLOCK@ behaves as desired, operations must still be retried.
- 29:  To preserve the illusion of asynchronicity, this requires delegating operations to kernel threads.
- 30:  This is also true of cases where @O_NONBLOCK@ does not prevent blocking.
+ 33:  To preserve the illusion of asynchronicity requires delegating these operations to kernel threads.
+ 34:  This requirement is also true of cases where @O_NONBLOCK@ does not prevent blocking.
31 → 35:  Spinning up internal kernel threads to handle blocking scenarios is what developers already do outside of the kernel, and managing these threads adds significant burden to the system.
32 → 36:  Nonblocking I/O should not be handled in this way.
… …
43 → 47:  The OS and library presentation of disk and network I/O, and many secondary library routines that directly and indirectly use these mechanisms.
44 → 48:  \end{itemize}
- 45:  The key aspect of all of these mechanisms is that control flow can block, which immidiately hinders any level above from making scheduling decision as a result.
+ 49:  The key aspect of all of these mechanisms is that control flow can block, which immediately hinders any level above from making scheduling decision as a result.
46 → 50:  Fundamentally, scheduling needs to understand all the mechanisms used by threads that affect their state changes.
… …
49 → 53:  However, direct hardware scheduling is only possible in the OS.
50 → 54:  Instead, this thesis is performing arms-length application scheduling of the hardware components through a set of OS interfaces that indirectly manipulate the hardware components.
- 51:  This can quickly lead to tensions if the OS interface was built with different use cases in mind.
+ 55:  This can quickly lead to tensions when the OS interface has different use cases in mind.
52 → 56:
53 → 57:  As \CFA aims to increase productivity and safety of C, while maintaining its performance, this places a huge burden on the \CFA runtime to achieve these goals.
… …
66 → 70:  These core algorithms are further extended with a low-latency idle-sleep mechanism, which allows the \CFA runtime to stay viable for workloads that do not consistently saturate the system.
67 → 71:  \end{enumerate}
- 68:  Finally, the complete scheduler is fairly simple with low-cost execution, meaning the total cost of scheduling during thread state changes is low.
+ 72:  Finally, the complete scheduler is fairly simple with low-cost execution, meaning the total cost of scheduling during thread state-changes is low.
69 → 73:
70 → 74:  \section{Future Work}
… …
84 → 88:  The mechanism uses a hand-shake between notification and sleep to ensure that no \at is missed.
85 → 89:  \item
- 86:  The correctness of that hand-shake is critical when the last \proc goes to sleep but could be relaxed when several \procs are awake.
+ 90:  The hand-shake correctness is critical when the last \proc goes to sleep but could be relaxed when several \procs are awake.
87 → 91:  \item
88 → 92:  Furthermore, organizing the sleeping \procs as a LIFO stack makes sense to keep cold \procs as cold as possible, but it might be more appropriate to attempt to keep cold CPU sockets instead.
… …
91 → 95:  For example, keeping a CPU socket cold might be appropriate for power consumption reasons but can affect overall memory bandwidth.
92 → 96:  The balance between these approaches is not obvious.
+ 97:  I am aware there is a host of low-power research that could be tapped here.
93 → 98:
94 → 99:  \subsection{Hardware}
… …
102 → 107:  If the latency is due to a recent cache invalidation, it is unlikely the timestamp is old and that helping is needed.
103 → 108:  As such, simply moving on without the result is likely to be acceptable.
- 104:  Another option would be to read multiple memory addresses and only wait for \emph{one of} these reads to retire.
- 105:  This approach has a similar effect, where cache-lines with more traffic would be waited on less often.
+ 109:  Another option is to read multiple memory addresses and only wait for \emph{one of} these reads to retire.
+ 110:  This approach has a similar effect, where cache lines with more traffic are waited on less often.
106 → 111:  In both of these examples, some care is needed to ensure that reads to an address \emph{sometime} retire.
107 → 112:
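The conclusion's discussion of @io_uring@ and @O_NONBLOCK@ above (around new lines 30-36 of conclusion.tex) notes that even when @O_NONBLOCK@ works as intended, operations must still be retried, and that some operations block regardless. The following is a minimal illustrative C sketch of that retry pattern, not code from the \CFA runtime; the @wait_until_readable@ hook is hypothetical and stands in for whatever readiness mechanism (epoll, io_uring, etc.) a user-level scheduler would use.

// Minimal sketch (not from the CFA runtime): retrying a non-blocking read.
// Assumes fd was opened (or fcntl'd) with O_NONBLOCK. On a socket with no
// data, read() returns -1/EAGAIN and the operation must be retried later;
// for regular files, O_NONBLOCK has no effect and read() can still block.
#include <errno.h>
#include <sys/types.h>
#include <unistd.h>

extern void wait_until_readable(int fd);     // assumed runtime hook (hypothetical)

ssize_t nonblocking_read(int fd, void *buf, size_t len) {
    for (;;) {
        ssize_t n = read(fd, buf, len);
        if (n >= 0) return n;                            // data or EOF
        if (errno == EAGAIN || errno == EWOULDBLOCK) {
            wait_until_readable(fd);                     // park until fd is ready
            continue;                                    // then retry the read
        }
        if (errno == EINTR) continue;                    // interrupted, retry
        return -1;                                       // real error
    }
}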
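The future-work items above (around new lines 88-92 of conclusion.tex) mention a hand-shake between notification and sleep so that no \at is missed, with correctness being critical when the last \proc goes to sleep. A generic version of that kind of hand-shake, using C11 atomics and an @eventfd@, might look as follows; this is a hypothetical illustration of the general publish-then-recheck pattern, not the thesis's actual implementation.

// Hypothetical sketch of a sleep/notify hand-shake (not the CFA runtime's code).
// A proc publishes its intent to sleep *before* re-checking the ready queue,
// so a concurrent notifier either observes the sleeper or the sleeper observes
// the newly enqueued work; either way no task is missed.
#include <stdatomic.h>
#include <stdint.h>
#include <sys/eventfd.h>
#include <unistd.h>

extern int ready_queue_nonempty(void);   // assumed: checks the shared ready queue

static atomic_int sleepers;
static int efd;

void idle_sleep_init(void) { efd = eventfd(0, 0); }

void proc_try_sleep(void) {
    atomic_fetch_add(&sleepers, 1);      // 1. announce intent to sleep
    if (ready_queue_nonempty()) {        // 2. re-check for work after announcing
        atomic_fetch_sub(&sleepers, 1);
        return;                          // work arrived; do not sleep
    }
    uint64_t val;
    read(efd, &val, sizeof val);         // 3. block until a notifier writes
    atomic_fetch_sub(&sleepers, 1);
}

void notify_one(void) {                  // called after enqueuing a task
    if (atomic_load(&sleepers) > 0) {
        uint64_t one = 1;
        write(efd, &one, sizeof one);    // wake a sleeping proc
    }
}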
doc/theses/thierry_delisle_PhD/thesis/text/eval_macro.tex
r4dba1da → r0fec6c1

182 → 182:  NGINX is an high-performance, \emph{full-service}, event-driven webserver.
183 → 183:  It can handle both static and dynamic web content, as well as serve as a reverse proxy and a load balancer~\cite{reese2008nginx}.
- 184:  This wealth of features comes with a variety of potential configuration, dictating both available features and performance.
+ 184:  This wealth of capabilities comes with a variety of potential configurations, dictating available features and performance.
185 → 185:  The NGINX server runs a master process that performs operations such as reading configuration files, binding to ports, and controlling worker processes.
- 186:  When running as a static webserver, uses an event driven architecture to server incoming requests.
- 187:  Incoming connections are assigned a \emph{statckless} HTTP state-machine and worker processes can potentially handle thousands of these state machines.
- 188:  For the following experiment, NGINX was configured to use epoll to listen for events on these state machines and have each worker process independently accept new connections.
- 189:  Because of the realities of Linux, see Subsection~\ref{ononblock}, NGINX also maintains a pool of auxilary threads to handle block \io.
- 190:  The configuration can be used to set the number of worker processes desired, as well as the size of the auxilary pool.
- 191:  However, for the following experiments NGINX was configured to let the master process decided the appropriate number of threads.
+ 186:  When running as a static webserver, it uses an event-driven architecture to service incoming requests.
+ 187:  Incoming connections are assigned a \emph{stackless} HTTP state-machine and worker processes can handle thousands of these state machines.
+ 188:  For the following experiment, NGINX is configured to use @epoll@ to listen for events on these state machines and have each worker process independently accept new connections.
+ 189:  Because of the realities of Linux, see Subsection~\ref{ononblock}, NGINX also maintains a pool of auxiliary threads to handle blocking \io.
+ 190:  The configuration can set the number of worker processes desired, as well as the size of the auxiliary pool.
+ 191:  However, for the following experiments, NGINX is configured to let the master process decided the appropriate number of threads.
192 → 192:
193 → 193:
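The added lines above (new lines 186-188 of eval_macro.tex) describe NGINX workers as event-driven: each worker uses @epoll@ to multiplex thousands of stackless HTTP state machines and independently accepts new connections. A stripped-down C sketch of that kind of worker loop is shown below; it illustrates the general epoll pattern only and is not NGINX's actual code, and @handle_request@ is a hypothetical stand-in for advancing a per-connection state machine.

// Illustrative epoll worker loop (not NGINX source): one worker accepts
// connections on a shared listening socket and drives per-connection
// state machines when their sockets become readable.
#include <sys/epoll.h>
#include <sys/socket.h>

#define MAX_EVENTS 64

extern void handle_request(int fd);      // assumed: advance the HTTP state machine

void worker_loop(int listen_fd) {
    int ep = epoll_create1(0);
    struct epoll_event ev = { .events = EPOLLIN, .data.fd = listen_fd };
    epoll_ctl(ep, EPOLL_CTL_ADD, listen_fd, &ev);

    struct epoll_event events[MAX_EVENTS];
    for (;;) {
        int n = epoll_wait(ep, events, MAX_EVENTS, -1);   // block for ready fds
        if (n < 0) continue;                              // e.g., interrupted by a signal
        for (int i = 0; i < n; i++) {
            int fd = events[i].data.fd;
            if (fd == listen_fd) {                        // new connection arrived
                int conn = accept(listen_fd, NULL, NULL);
                struct epoll_event cev = { .events = EPOLLIN, .data.fd = conn };
                epoll_ctl(ep, EPOLL_CTL_ADD, conn, &cev);
            } else {
                handle_request(fd);                       // run the state machine
            }
        }
    }
}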