Timestamp: Sep 8, 2022, 5:11:19 PM (21 months ago)
Author: Thierry Delisle <tdelisle@…>
Branches: ADT, ast-experimental, master, pthread-emulation
Children: 264f6c9
Parents: e4855f6
Message: First pass at spellchecking until chapter 2
File: 1 edited

  • doc/theses/thierry_delisle_PhD/thesis/text/front.tex

    re4855f6 r918e0137

    127 127    Indeed, over-partitioning into small work-units with user threading significantly eases load bal\-ancing, while simultaneously providing advanced synchronization and mutual exclusion capabilities.
    128 128    To manage these high levels of concurrency, the underlying runtime must efficiently schedule many user threads across a few kernel threads;
    129     -  which begs of the question of how many kernel threads are needed and should the number be dynamically reevaluated.
        129 +  which begs the question of how many kernel threads are needed and should the number be dynamically reevaluated.
    130 130    Furthermore, scheduling must prevent kernel threads from blocking, otherwise user-thread parallelism drops.
    131     -  When user-threading parallelism does drop, how and when should idle kernel-threads be put to sleep to avoid wasting CPU resources.
        131 +  When user-threading parallelism does drop, how and when should idle \glspl{kthrd} be put to sleep to avoid wasting CPU resources.
    132 132    Finally, the scheduling system must provide fairness to prevent a user thread from monopolizing a kernel thread;
    133     -  otherwise other user threads can experience short/long term starvation or kernel threads can deadlock waiting for events to occur on busy kernel threads.
    134     -
    135     -  This thesis analyses multiple scheduler systems, where each system attempts to fulfill the necessary requirements for user-level threading.
    136     -  The predominant technique for managing high levels of concurrency is sharding the ready-queue with one queue per kernel-thread and using some form of work stealing/sharing to dynamically rebalance workload shifts.
        133 +  otherwise, other user threads can experience short/long term starvation or kernel threads can deadlock waiting for events to occur on busy kernel threads.
        134 +
        135 +  This thesis analyses multiple scheduler systems, where each system attempts to fulfill the requirements for user-level threading.
        136 +  The predominant technique for managing high levels of concurrency is sharding the ready-queue with one queue per \gls{kthrd} and using some form of work stealing/sharing to dynamically rebalance workload shifts.
    137 137    Preventing kernel blocking is accomplished by transforming kernel locks and I/O operations into user-level operations that do not block the kernel thread or spin up new kernel threads to manage the blocking.
    138 138    Fairness is handled through preemption and/or ad-hoc solutions, which leads to coarse-grained fairness with some pathological cases.
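
For readers unfamiliar with the technique the changed text refers to (sharding the ready queue with one queue per kernel thread plus work stealing/sharing), the following is a minimal sketch in plain C with pthreads. It is not the Cforall runtime's scheduler; the shard layout, worker count, task representation, and the busy-wait idle loop are all invented for illustration and stand in for assumptions the thesis text does not spell out here.

    /* Minimal sketch of a sharded ready queue with work stealing.
     * Illustrative only: not the Cforall runtime; names, sizes, and the
     * busy-wait idle loop are assumptions made for this example. */
    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdint.h>
    #include <stdio.h>

    #define WORKERS 4
    #define TASKS   64

    typedef struct {                /* one ready-queue shard per kernel thread */
        pthread_mutex_t lock;
        int tasks[TASKS];           /* task ids waiting to run on this shard */
        int count;
    } shard_t;

    static shard_t queues[WORKERS];
    static atomic_int remaining = TASKS;   /* tasks not yet executed */

    /* Pop from our own shard first; steal from another shard if ours is empty. */
    static int take_task(int self) {
        for (int i = 0; i < WORKERS; i++) {
            int victim = (self + i) % WORKERS;     /* i == 0 is our own queue */
            shard_t *q = &queues[victim];
            pthread_mutex_lock(&q->lock);
            if (q->count > 0) {
                int task = q->tasks[--q->count];
                pthread_mutex_unlock(&q->lock);
                return task;
            }
            pthread_mutex_unlock(&q->lock);
        }
        return -1;                                 /* nothing runnable anywhere */
    }

    static void *worker(void *arg) {
        int self = (int)(intptr_t)arg;
        while (atomic_load(&remaining) > 0) {
            int task = take_task(self);
            if (task < 0) continue;    /* busy-wait; a real runtime would sleep */
            printf("worker %d ran task %d\n", self, task);
            atomic_fetch_sub(&remaining, 1);
        }
        return NULL;
    }

    int main(void) {
        pthread_t threads[WORKERS];

        /* Shard the ready queue: distribute tasks round-robin across shards. */
        for (int i = 0; i < WORKERS; i++) {
            pthread_mutex_init(&queues[i].lock, NULL);
            queues[i].count = 0;
        }
        for (int t = 0; t < TASKS; t++) {
            shard_t *q = &queues[t % WORKERS];
            q->tasks[q->count++] = t;
        }

        for (int i = 0; i < WORKERS; i++)
            pthread_create(&threads[i], NULL, worker, (void *)(intptr_t)i);
        for (int i = 0; i < WORKERS; i++)
            pthread_join(threads[i], NULL);
        return 0;
    }

Each worker pops from its own shard and only locks other shards when its own runs dry, which is the dynamic rebalancing the quoted passage calls work stealing/sharing.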