Changeset 847bb6f for doc/theses/thierry_delisle_PhD/thesis
- Timestamp:
- Jul 18, 2022, 8:06:18 AM (2 years ago)
- Branches:
- ADT, ast-experimental, master, pthread-emulation, qualifiedEnum
- Children:
- 6a896b0, d677355
- Parents:
- 4f3807d
- Location:
- doc/theses/thierry_delisle_PhD/thesis
- Files:
- 9 edited
doc/theses/thierry_delisle_PhD/thesis/fig/io_uring.fig
r4f3807d r847bb6f [xfig source diff omitted: raw coordinate and text-metric changes only. The edit translates the io_uring diagram and regenerates its label geometry; no labels appear to be added or removed. Recoverable figure content: an ``Application'' region and a ``Kernel'' region separated by a dashed ``Kernel Line''; a pre-allocated SQE array S with entries S0-S3; a ``Submission Ring'' holding references &S0, &S2, &S3, with Push/Tail on the application side and Pop/Head on the kernel side; and a ``Completion Ring'' holding entries C0-C2, with Push/Tail on the kernel side and Pop/Head on the application side.] -
doc/theses/thierry_delisle_PhD/thesis/local.bib
r4f3807d r847bb6f 2 2 % Cforall 3 3 @misc{cfa:frontpage, 4 url = {https://cforall.uwaterloo.ca/}4 howpublished = {\href{https://cforall.uwaterloo.ca}{https://\-cforall.uwaterloo.ca}} 5 5 } 6 6 @article{cfa:typesystem, … … 481 481 @misc{MAN:linux/cfs, 482 482 title = {{CFS} Scheduler - The Linux Kernel documentation}, 483 url = {https://www.kernel.org/doc/html/latest/scheduler/sched-design-CFS.html}483 howpublished = {\href{https://www.kernel.org/doc/html/latest/scheduler/sched-design-CFS.html}{https://\-www.kernel.org/\-doc/\-html/\-latest/\-scheduler/\-sched-design-CFS.html}} 484 484 } 485 485 … … 489 489 year = {2019}, 490 490 month = {February}, 491 url = {https://opensource.com/article/19/2/fair-scheduling-linux}491 howpublished = {\href{https://opensource.com/article/19/2/fair-scheduling-linux}{https://\-opensource.com/\-article/\-19/2\-/\-fair-scheduling-linux}} 492 492 } 493 493 … … 523 523 title = {Mach Scheduling and Thread Interfaces - Kernel Programming Guide}, 524 524 organization = {Apple Inc.}, 525 url = {https://developer.apple.com/library/archive/documentation/Darwin/Conceptual/KernelProgramming/scheduler/scheduler.html}525 howPublish = {\href{https://developer.apple.com/library/archive/documentation/Darwin/Conceptual/KernelProgramming/scheduler/scheduler.html}{https://developer.apple.com/library/archive/documentation/Darwin/Conceptual/KernelProgramming/scheduler/scheduler.html}} 526 526 } 527 527 … … 536 536 month = {June}, 537 537 series = {Developer Reference}, 538 url = {https://www.microsoftpressstore.com/articles/article.aspx?p=2233328&seqNum=7#:~:text=Overview\%20of\%20Windows\%20Scheduling,a\%20phenomenon\%20called\%20processor\%20affinity}538 howpublished = {\href{https://www.microsoftpressstore.com/articles/article.aspx?p=2233328&seqNum=7#:~:text=Overview\%20of\%20Windows\%20Scheduling,a\%20phenomenon\%20called\%20processor\%20affinity}{https://\-www.microsoftpressstore.com/\-articles/\-article.aspx?p=2233328&seqNum=7#:~:text=Overview\%20of\%20Windows\%20Scheduling,a\%20phenomenon\%20called\%20processor\%20affinity}} 539 539 } 540 540 … … 542 542 title = {GitHub - The Go Programming Language}, 543 543 author = {The Go Programming Language}, 544 url = {https://github.com/golang/go},544 howpublished = {\href{https://github.com/golang/go}{https://\-github.com/\-golang/\-go}}, 545 545 version = {Change-Id: If07f40b1d73b8f276ee28ffb8b7214175e56c24d} 546 546 } … … 551 551 year = {2019}, 552 552 booktitle = {Hydra}, 553 url = {https://www.youtube.com/watch?v=-K11rY57K7k&ab_channel=Hydra}553 howpublished = {\href{https://www.youtube.com/watch?v=-K11rY57K7k&ab_channel=Hydra}{https://\-www.youtube.com/\-watch?v=-K11rY57K7k&ab_channel=Hydra}} 554 554 } 555 555 … … 559 559 year = {2008}, 560 560 booktitle = {Erlang User Conference}, 561 url = {http://www.erlang.se/euc/08/euc_smp.pdf}561 howpublished = {\href{http://www.erlang.se/euc/08/euc_smp.pdf}{http://\-www.erlang.se/\-euc/\-08/\-euc_smp.pdf}} 562 562 } 563 563 … … 567 567 title = {Scheduling Algorithm - Intel{\textregistered} Threading Building Blocks Developer Reference}, 568 568 organization = {Intel{\textregistered}}, 569 url = {https://www.threadingbuildingblocks.org/docs/help/reference/task_scheduler/scheduling_algorithm.html}569 howpublished = {\href{https://www.threadingbuildingblocks.org/docs/help/reference/task_scheduler/scheduling_algorithm.html}{https://\-www.threadingbuildingblocks.org/\-docs/\-help/\-reference/\-task\_scheduler/\-scheduling\_algorithm.html}} 570 570 } 571 571 … … 573 573 title = {Quasar Core 
- Quasar User Manual}, 574 574 organization = {Parallel Universe}, 575 url = {https://docs.paralleluniverse.co/quasar/}575 howpublished = {\href{https://docs.paralleluniverse.co/quasar}{https://\-docs.paralleluniverse.co/\-quasar}} 576 576 } 577 577 @misc{MAN:project-loom, 578 url = {https://www.baeldung.com/openjdk-project-loom}578 howpublished = {\href{https://www.baeldung.com/openjdk-project-loom}{https://\-www.baeldung.com/\-openjdk-project-loom}} 579 579 } 580 580 581 581 @misc{MAN:java/fork-join, 582 url = {https://www.baeldung.com/java-fork-join}582 howpublished = {\href{https://www.baeldung.com/java-fork-join}{https://\-www.baeldung.com/\-java-fork-join}} 583 583 } 584 584 … … 633 633 month = "March", 634 634 version = {0,4}, 635 howpublished = {\ url{https://kernel.dk/io_uring.pdf}}635 howpublished = {\href{https://kernel.dk/io_uring.pdf}{https://\-kernel.dk/\-io\_uring.pdf}} 636 636 } 637 637 … … 642 642 title = "Control theory --- {W}ikipedia{,} The Free Encyclopedia", 643 643 year = "2020", 644 url = "https://en.wikipedia.org/wiki/Task_parallelism",644 howpublished = {\href{https://en.wikipedia.org/wiki/Task_parallelism}{https://\-en.wikipedia.org/\-wiki/\-Task\_parallelism}}, 645 645 note = "[Online; accessed 22-October-2020]" 646 646 } … … 650 650 title = "Task parallelism --- {W}ikipedia{,} The Free Encyclopedia", 651 651 year = "2020", 652 url = "https://en.wikipedia.org/wiki/Control_theory",652 howpublished = "\href{https://en.wikipedia.org/wiki/Control_theory}{https://\-en.wikipedia.org/\-wiki/\-Control\_theory}", 653 653 note = "[Online; accessed 22-October-2020]" 654 654 } … … 658 658 title = "Implicit parallelism --- {W}ikipedia{,} The Free Encyclopedia", 659 659 year = "2020", 660 url = "https://en.wikipedia.org/wiki/Implicit_parallelism",660 howpublished = "\href{https://en.wikipedia.org/wiki/Implicit_parallelism}{https://\-en.wikipedia.org/\-wiki/\-Implicit\_parallelism}", 661 661 note = "[Online; accessed 23-October-2020]" 662 662 } … … 666 666 title = "Explicit parallelism --- {W}ikipedia{,} The Free Encyclopedia", 667 667 year = "2017", 668 url = "https://en.wikipedia.org/wiki/Explicit_parallelism",668 howpublished = "\href{https://en.wikipedia.org/wiki/Explicit_parallelism}{https://\-en.wikipedia.org/\-wiki/\-Explicit\_parallelism}", 669 669 note = "[Online; accessed 23-October-2020]" 670 670 } … … 674 674 title = "Linear congruential generator --- {W}ikipedia{,} The Free Encyclopedia", 675 675 year = "2020", 676 url = "https://en.wikipedia.org/wiki/Linear_congruential_generator",676 howpublished = "\href{https://en.wikipedia.org/wiki/Linear_congruential_generator}{https://en.wikipedia.org/wiki/Linear\_congruential\_generator}", 677 677 note = "[Online; accessed 2-January-2021]" 678 678 } … … 682 682 title = "Futures and promises --- {W}ikipedia{,} The Free Encyclopedia", 683 683 year = "2020", 684 url = "https://en.wikipedia.org/wiki/Futures_and_promises",684 howpublished = "\href{https://en.wikipedia.org/wiki/Futures_and_promises}{https://\-en.wikipedia.org/\-wiki/Futures\_and\_promises}", 685 685 note = "[Online; accessed 9-February-2021]" 686 686 } … … 690 690 title = "Read-copy-update --- {W}ikipedia{,} The Free Encyclopedia", 691 691 year = "2022", 692 url = "https://en.wikipedia.org/wiki/Linear_congruential_generator",692 howpublished = "\href{https://en.wikipedia.org/wiki/Linear_congruential_generator}{https://\-en.wikipedia.org/\-wiki/\-Linear\_congruential\_generator}", 693 693 note = "[Online; accessed 12-April-2022]" 694 694 } … … 698 698 title = 
"Readers-writer lock --- {W}ikipedia{,} The Free Encyclopedia", 699 699 year = "2021", 700 url = "https://en.wikipedia.org/wiki/Readers%E2%80%93writer_lock",700 howpublished = "\href{https://en.wikipedia.org/wiki/Readers-writer_lock}{https://\-en.wikipedia.org/\-wiki/\-Readers-writer\_lock}", 701 701 note = "[Online; accessed 12-April-2022]" 702 702 } … … 705 705 title = "Bin packing problem --- {W}ikipedia{,} The Free Encyclopedia", 706 706 year = "2022", 707 url = "https://en.wikipedia.org/wiki/Bin_packing_problem",707 howpublished = "\href{https://en.wikipedia.org/wiki/Bin_packing_problem}{https://\-en.wikipedia.org/\-wiki/\-Bin\_packing\_problem}", 708 708 note = "[Online; accessed 29-June-2022]" 709 709 } … … 712 712 % [05/04, 12:36] Trevor Brown 713 713 % i don't know where rmr complexity was first introduced, but there are many many many papers that use the term and define it 714 % [05/04, 12:37] Trevor Brown714 % [05/04, 12:37] Trevor Brown 715 715 % here's one paper that uses the term a lot and links to many others that use it... might trace it to something useful there https://drops.dagstuhl.de/opus/volltexte/2021/14832/pdf/LIPIcs-DISC-2021-30.pdf 716 % [05/04, 12:37] Trevor Brown716 % [05/04, 12:37] Trevor Brown 717 717 % another option might be to cite a textbook 718 % [05/04, 12:42] Trevor Brown718 % [05/04, 12:42] Trevor Brown 719 719 % but i checked two textbooks in the area i'm aware of and i don't see a definition of rmr complexity in either 720 % [05/04, 12:42] Trevor Brown720 % [05/04, 12:42] Trevor Brown 721 721 % this one has a nice statement about the prevelance of rmr complexity, as well as some rough definition 722 % [05/04, 12:42] Trevor Brown722 % [05/04, 12:42] Trevor Brown 723 723 % https://dl.acm.org/doi/pdf/10.1145/3465084.3467938 724 724 … … 728 728 % 729 729 % https://doi.org/10.1137/1.9781611973099.100 730 731 732 @misc{AIORant, 733 author = "Linus Torvalds", 734 title = "Re: [PATCH 09/13] aio: add support for async openat()", 735 year = "2016", 736 month = jan, 737 howpublished = "\href{https://lwn.net/Articles/671657}{https://\-lwn.net/\-Articles/671657}", 738 note = "[Online; accessed 6-June-2022]" 739 } 740 741 @misc{apache, 742 key = {Apache Software Foundation}, 743 title = {{T}he {A}pache Web Server}, 744 howpublished = {\href{http://httpd.apache.org}{http://\-httpd.apache.org}}, 745 note = "[Online; accessed 6-June-2022]" 746 } 747 748 @misc{SeriallyReusable, 749 author = {IBM}, 750 title = {Serially reusable programs}, 751 month = mar, 752 howpublished= {\href{https://www.ibm.com/docs/en/ztpf/1.1.0.15?topic=structures-serially-reusable-programs}{https://www.ibm.com/\-docs/\-en/\-ztpf/\-1.1.0.15?\-topic=structures\--serially\--reusable-programs}}, 753 year = 2021, 754 } 755 -
doc/theses/thierry_delisle_PhD/thesis/text/core.tex
r4f3807d r847bb6f 322 322 Building a scheduler that is cache aware poses two main challenges: discovering the cache topology and matching \procs to this cache structure. 323 323 Unfortunately, there is no portable way to discover cache topology, and it is outside the scope of this thesis to solve this problem. 324 This work uses the cache topology information from Linux's \texttt{/sys/devices/system/cpu}directory.324 This work uses the cache topology information from Linux's @/sys/devices/system/cpu@ directory. 325 325 This leaves the challenge of matching \procs to cache structure, or more precisely identifying which subqueues of the ready queue are local to which subcomponents of the cache structure. 326 326 Once a matching is generated, the helping algorithm is changed to add bias so that \procs more often help subqueues local to the same cache substructure.\footnote{ … … 330 330 Instead of having each subqueue local to a specific \proc, the system is initialized with subqueues for each hardware hyperthread/core up front. 331 331 Then \procs dequeue and enqueue by first asking which CPU id they are executing on, in order to identify which subqueues are the local ones. 332 \Glspl{proc} can get the CPU id from \texttt{sched\_getcpu} or \texttt{librseq}.332 \Glspl{proc} can get the CPU id from @sched_getcpu@ or @librseq@. 333 333 334 334 This approach solves the performance problems on systems with topologies with narrow L3 caches, similar to Figure \ref{fig:cache-noshare}. -
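As an illustration of the CPU-id-to-subqueue mapping described in this hunk, a minimal C sketch follows; the subqueue count and indexing scheme are hypothetical stand-ins for the actual \CFA ready-queue structures, not the runtime's real code.

#define _GNU_SOURCE
#include <sched.h>                     // sched_getcpu

// Hypothetical runtime state: one subqueue per hardware thread, created up front.
extern unsigned nsubqueues;            // == number of hardware threads at startup

// A proc asks which CPU it is running on and treats that subqueue as "local"
// for enqueue/dequeue and for biasing the helping algorithm.
static inline unsigned local_subqueue( void ) {
	int cpu = sched_getcpu();          // may already be stale by the time it is used
	if ( cpu < 0 ) cpu = 0;            // fall back if the call fails
	return (unsigned)cpu % nsubqueues;
}

The same index can instead be obtained through @librseq@, which avoids the system call on the fast path.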
doc/theses/thierry_delisle_PhD/thesis/text/eval_micro.tex
r4f3807d r847bb6f 7 7 All of these benchmarks are run on two distinct hardware environments, an AMD and an INTEL machine. 8 8 9 For all benchmarks, \texttt{taskset} is used to limit the experiment to 1 NUMA Node with no hyper threading.9 For all benchmarks, @taskset@ is used to limit the experiment to 1 NUMA Node with no hyper threading. 10 10 If more \glspl{hthrd} are needed, then 1 NUMA Node with hyperthreading is used. 11 11 If still more \glspl{hthrd} are needed, then the experiment is limited to as few NUMA Nodes as needed. … 35 35 \end{figure} 36 36 The most basic evaluation of any ready queue is to evaluate the latency needed to push and pop one element from the ready-queue. 37 Since these two operations also describe a \texttt{yield} operation, many systems use this as the most basic benchmark.37 Since these two operations also describe a @yield@ operation, many systems use this as the most basic benchmark. 38 38 However, yielding can be treated as a special case, since it also carries the information that the number of ready \glspl{at} will not change. 39 39 Not all systems use this information, but those which do may appear to have better performance than they would for disconnected push/pop pairs. … 57 57 This is to avoid the case where one of the \glspl{proc} runs out of work because of the variation on the number of ready \glspl{at} mentioned above. 58 58 59 The actual benchmark is more complicated to handle termination, but that simply requires using a binary semaphore or a channel instead of raw \texttt{park}/\texttt{unpark} and carefully picking the order of the \texttt{P} and \texttt{V} with respect to the loop condition.59 The actual benchmark is more complicated to handle termination, but that simply requires using a binary semaphore or a channel instead of raw @park@/@unpark@ and carefully picking the order of the @P@ and @V@ with respect to the loop condition. 60 60 Figure~\ref{fig:cycle:code} shows pseudo code for this benchmark. 61 61 … … 116 116 \section{Yield} 117 117 For completeness, I also include the yield benchmark. 118 This benchmark is much simpler than the cycle tests, it simply creates many \glspl{at} that call \texttt{yield}.119 As mentioned in the previous section, this benchmark may be less representative of usages that only make limited use of \texttt{yield}, due to potential shortcuts in the routine.118 This benchmark is much simpler than the cycle tests, it simply creates many \glspl{at} that call @yield@. 119 As mentioned in the previous section, this benchmark may be less representative of usages that only make limited use of @yield@, due to potential shortcuts in the routine. 120 120 Its only interesting variable is the number of \glspl{at} per \glspl{proc}, where ratios close to 1 mean the ready queue(s) could be empty. 121 121 This sometimes puts more strain on the idle sleep handling, compared to scenarios where there is clearly plenty of work to be done. … 184 184 185 185 To achieve this, the benchmark uses a fixed-size array of semaphores. 186 Each \gls{at} picks a random semaphore, \texttt{V}s it to unblock a \at waiting and then \texttt{P}s on the semaphore.186 Each \gls{at} picks a random semaphore, @V@s it to unblock a \at waiting and then @P@s on the semaphore. 187 187 This creates a flow where \glspl{at} push each other out of the semaphores before being pushed out themselves. 188 188 For this benchmark to work, however, the number of \glspl{at} must be equal to or greater than the number of semaphores plus the number of \glspl{proc}. 
189 Note that the nature of these semaphores mean the counter can go beyond 1, which could lead to calls to \texttt{P}not blocking.189 Note that the nature of these semaphores mean the counter can go beyond 1, which could lead to calls to @P@ not blocking. 190 190 191 191 \todo{code, setup, results} -
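Since the churn benchmark's code is still marked \todo above, the following is a hedged C sketch of the per-\gls{at} loop it describes, using POSIX semaphores purely for illustration; the real benchmark uses the runtime's own primitives and adds termination handling.

#include <semaphore.h>
#include <stdlib.h>

#define NSEM 64                 // fixed-size array of semaphores, sem_init'd at startup
sem_t sems[NSEM];               // counts may exceed 1, so a P does not always block

// Body of one churn thread: V a random semaphore to unblock a waiter,
// then P on the same semaphore, possibly blocking until pushed out in turn.
void churn( size_t ops ) {
	unsigned seed = 42;
	for ( size_t i = 0; i < ops; i += 1 ) {
		int s = rand_r( &seed ) % NSEM;
		sem_post( &sems[s] );   // V: wake a waiter (or bank a count)
		sem_wait( &sems[s] );   // P: block until some other thread Vs
	}
}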
doc/theses/thierry_delisle_PhD/thesis/text/existing.tex
r4f3807d r847bb6f 178 178 \begin{displayquote} 179 179 \begin{enumerate} 180 \item The task returned by \textit{t} \texttt{.execute()}180 \item The task returned by \textit{t}@.execute()@ 181 181 \item The successor of t if \textit{t} was its last completed predecessor. 182 182 \item A task popped from the end of the thread's own deque. … … 193 193 \paragraph{Quasar/Project Loom} 194 194 Java has two projects, Quasar~\cite{MAN:quasar} and Project Loom~\cite{MAN:project-loom}\footnote{It is unclear if these are distinct projects.}, that are attempting to introduce lightweight thread\-ing in the form of Fibers. 195 Both projects seem to be based on the \texttt{ForkJoinPool}in Java, which appears to be a simple incarnation of randomized work-stealing~\cite{MAN:java/fork-join}.195 Both projects seem to be based on the @ForkJoinPool@ in Java, which appears to be a simple incarnation of randomized work-stealing~\cite{MAN:java/fork-join}. 196 196 197 197 \paragraph{Grand Central Dispatch} … … 204 204 % http://web.archive.org/web/20090920043909/http://images.apple.com/macosx/technology/docs/GrandCentral_TB_brief_20090903.pdf 205 205 206 In terms of semantics, the Dispatch Queues seem to be very similar to Intel\textregistered ~TBB \texttt{execute()}and predecessor semantics.206 In terms of semantics, the Dispatch Queues seem to be very similar to Intel\textregistered ~TBB @execute()@ and predecessor semantics. 207 207 208 208 \paragraph{LibFibre} -
doc/theses/thierry_delisle_PhD/thesis/text/intro.tex
r4f3807d r847bb6f 103 103 An algorithm for load-balancing and idle sleep of processors, including NUMA awareness. 104 104 \item 105 Support for user-level \glsxtrshort{io} capabilities based on Linux's \texttt{io\_uring}.105 Support for user-level \glsxtrshort{io} capabilities based on Linux's @io_uring@. 106 106 \end{enumerate} -
doc/theses/thierry_delisle_PhD/thesis/text/io.tex
r4f3807d r847bb6f 1 1 \chapter{User Level \io} 2 As mentioned in Section~\ref{prev:io}, User-Level \io requires multiplexing the \io operations of many \glspl{thrd} onto fewer \glspl{proc} using asynchronous \io operations.2 As mentioned in Section~\ref{prev:io}, user-Level \io requires multiplexing the \io operations of many \glspl{thrd} onto fewer \glspl{proc} using asynchronous \io operations. 3 3 Different operating systems offer various forms of asynchronous operations and, as mentioned in Chapter~\ref{intro}, this work is exclusively focused on the Linux operating-system. 4 4 5 5 \section{Kernel Interface} 6 Since this work fundamentally depends on operating-system support, the first step of any design is to discuss the available interfaces and pick one (or more) as the foundations of the non-blocking \io subsystem.6 Since this work fundamentally depends on operating-system support, the first step of this design is to discuss the available interfaces and pick one (or more) as the foundation for the non-blocking \io subsystem in this work. 7 7 8 8 \subsection{\lstinline{O_NONBLOCK}} … … 10 10 In this mode, ``Neither the @open()@ nor any subsequent \io operations on the [opened file descriptor] will cause the calling process to wait''~\cite{MAN:open}. 11 11 This feature can be used as the foundation for the non-blocking \io subsystem. 12 However, for the subsystem to know when an \io operation completes, @O_NONBLOCK@ must be use in conjunction with a system call that monitors when a file descriptor becomes ready, \ie, the next \io operation on it does not cause the process to wait13 \footnote{In this context, ready means \emph{some} operation can be performed without blocking.12 However, for the subsystem to know when an \io operation completes, @O_NONBLOCK@ must be used in conjunction with a system call that monitors when a file descriptor becomes ready, \ie, the next \io operation on it does not cause the process to wait.\footnote{ 13 In this context, ready means \emph{some} operation can be performed without blocking. 14 14 It does not mean an operation returning \lstinline{EAGAIN} succeeds on the next try. 15 For example, a ready read may only return a subset of bytes and the read must be issues again for the remaining bytes, at which point it may return \lstinline{EAGAIN}.}.15 For example, a ready read may only return a subset of requested bytes and the read must be issues again for the remaining bytes, at which point it may return \lstinline{EAGAIN}.} 16 16 This mechanism is also crucial in determining when all \glspl{thrd} are blocked and the application \glspl{kthrd} can now block. 17 17 18 There are three options to monitor file descriptors in Linux 19 \footnote{For simplicity, this section omits \lstinline{pselect} and \lstinline{ppoll}.18 There are three options to monitor file descriptors in Linux:\footnote{ 19 For simplicity, this section omits \lstinline{pselect} and \lstinline{ppoll}. 20 20 The difference between these system calls and \lstinline{select} and \lstinline{poll}, respectively, is not relevant for this discussion.}, 21 21 @select@~\cite{MAN:select}, @poll@~\cite{MAN:poll} and @epoll@~\cite{MAN:epoll}. 22 22 All three of these options offer a system call that blocks a \gls{kthrd} until at least one of many file descriptors becomes ready. 23 The group of file descriptors being waited is called the \newterm{interest set}. 
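For reference, a minimal C sketch of the @O_NONBLOCK@ behaviour described in this hunk: the read either completes immediately (possibly short) or returns @EAGAIN@, at which point the file descriptor must be handed to one of the readiness mechanisms discussed next.

#include <fcntl.h>
#include <unistd.h>
#include <errno.h>

// Try a non-blocking read; returns bytes read, 0 on EOF, or -1 if the
// caller must wait for readiness (select/poll/epoll) and retry.
ssize_t try_read( int fd, void * buf, size_t len ) {
	int flags = fcntl( fd, F_GETFL );
	fcntl( fd, F_SETFL, flags | O_NONBLOCK );   // normally done once when the fd is opened
	ssize_t r = read( fd, buf, len );
	if ( r < 0 && ( errno == EAGAIN || errno == EWOULDBLOCK ) ) return -1;
	return r;   // may be a short read: the remaining bytes need another attempt
}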
24 25 \paragraph{\lstinline{select}} is the oldest of these options, it takes as an input a contiguous array of bits, where each bits represent a file descriptor of interest. 26 On return, it modifies the set in place to identify which of the file descriptors changed status. 27 This destructive change means that calling select in a loop requires re-initializing the array each time and the number of file descriptors supported has a hard limit. 28 Another limit of @select@ is that once the call is started, the interest set can no longer be modified. 29 Monitoring a new file descriptor generally requires aborting any in progress call to @select@ 30 \footnote{Starting a new call to \lstinline{select} is possible but requires a distinct kernel thread, and as a result is not an acceptable multiplexing solution when the interest set is large and highly dynamic unless the number of parallel calls to \lstinline{select} can be strictly bounded.}. 31 32 \paragraph{\lstinline{poll}} is an improvement over select, which removes the hard limit on the number of file descriptors and the need to re-initialize the input on every call. 33 It works using an array of structures as an input rather than an array of bits, thus allowing a more compact input for small interest sets. 34 Like @select@, @poll@ suffers from the limitation that the interest set cannot be changed while the call is blocked. 35 36 \paragraph{\lstinline{epoll}} further improves these two functions by allowing the interest set to be dynamically added to and removed from while a \gls{kthrd} is blocked on an @epoll@ call. 23 The group of file descriptors being waited on is called the \newterm{interest set}. 24 25 \paragraph{\lstinline{select}} is the oldest of these options, and takes as input a contiguous array of bits, where each bit represents a file descriptor of interest. 26 Hence, the array length must be as long as the largest FD currently of interest. 27 On return, it outputs the set in place to identify which of the file descriptors changed state. 28 This destructive change means selecting in a loop requires re-initializing the array for each iteration. 29 Another limit of @select@ is that calls from different \glspl{kthrd} sharing FDs are independent. 30 Hence, if one \gls{kthrd} is managing the select calls, other threads can only add/remove to/from the manager's interest set through synchronized calls to update the interest set. 31 However, these changes are only reflected when the manager makes its next call to @select@. 32 Note, it is possible for the manager thread to never unblock if its current interest set never changes, \eg the sockets/pipes/ttys it is waiting on never get data again. 33 Often the I/O manager has a timeout, polls, or is sent a signal on changes to mitigate this problem. 
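The destructive interface of @select@ discussed above shows up directly in code: the bit set must be rebuilt on every iteration, and interest-set changes only take effect on the next call. A small sketch, where the timeout is one way to bound how long such changes remain unseen:

#include <sys/select.h>

// Wait for readiness on the fds in interest[0..n), rebuilding the bit set
// each iteration because select overwrites it with the ready subset.
void manager_loop( int * interest, int n ) {
	for ( ;; ) {
		fd_set rfds;
		FD_ZERO( &rfds );
		int maxfd = -1;
		for ( int i = 0; i < n; i += 1 ) {
			FD_SET( interest[i], &rfds );
			if ( interest[i] > maxfd ) maxfd = interest[i];
		}
		struct timeval tv = { .tv_sec = 1, .tv_usec = 0 };  // bound the wait so updates
		int ready = select( maxfd + 1, &rfds, 0, 0, &tv );  // to the interest set are seen
		if ( ready <= 0 ) continue;                          // timeout or error
		for ( int i = 0; i < n; i += 1 )
			if ( FD_ISSET( interest[i], &rfds ) )
				/* mark interest[i] ready, e.g. unpark its thread */;
	}
}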
34 35 \begin{comment} 36 From: Tim Brecht <brecht@uwaterloo.ca> 37 Subject: Re: FD sets 38 Date: Wed, 6 Jul 2022 00:29:41 +0000 39 40 Large number of open files 41 -------------------------- 42 43 In order to be able to use more than the default number of open file 44 descriptors you may need to: 45 46 o increase the limit on the total number of open files /proc/sys/fs/file-max 47 (on Linux systems) 48 49 o increase the size of FD_SETSIZE 50 - the way I often do this is to figure out which include file __FD_SETSIZE 51 is defined in, copy that file into an appropriate directory in ./include, 52 and then modify it so that if you use -DBIGGER_FD_SETSIZE the larger size 53 gets used 54 55 For example on a RH 9.0 distribution I've copied 56 /usr/include/bits/typesizes.h into ./include/i386-linux/bits/typesizes.h 57 58 Then I modify typesizes.h to look something like: 59 60 #ifdef BIGGER_FD_SETSIZE 61 #define __FD_SETSIZE 32767 62 #else 63 #define __FD_SETSIZE 1024 64 #endif 65 66 Note that the since I'm moving and testing the userver on may different 67 machines the Makefiles are set up to use -I ./include/$(HOSTTYPE) 68 69 This way if you redefine the FD_SETSIZE it will get used instead of the 70 default original file. 71 \end{comment} 72 73 \paragraph{\lstinline{poll}} is the next oldest option, and takes as input an array of structures containing the FD numbers rather than their position in an array of bits, allowing a more compact input for interest sets that contain widely spaced FDs. 74 (For small interest sets with densely packed FDs, the @select@ bit mask can take less storage, and hence, copy less information into the kernel.) 75 Furthermore, @poll@ is non-destructive, so the array of structures does not have to be re-initialize on every call. 76 Like @select@, @poll@ suffers from the limitation that the interest set cannot be changed by other \gls{kthrd}, while a manager thread is blocked in @poll@. 77 78 \paragraph{\lstinline{epoll}} follows after @poll@, and places the interest set in the kernel rather than the application, where it is managed by an internal \gls{kthrd}. 79 There are two separate functions: one to add to the interest set and another to check for FDs with state changes. 37 80 This dynamic capability is accomplished by creating an \emph{epoll instance} with a persistent interest set, which is used across multiple calls. 38 This capability significantly reduces synchronization overhead on the part of the caller (in this case the \io subsystem), since the interest set can be modified when adding or removing file descriptors without having to synchronize with other \glspl{kthrd} potentially calling @epoll@. 39 40 However, all three of these system calls have limitations. 81 As the interest set is augmented, the changes become implicitly part of the interest set for a blocked manager \gls{kthrd}. 82 This capability significantly reduces synchronization between \glspl{kthrd} and the manager calling @epoll@. 83 84 However, all three of these I/O systems have limitations. 41 85 The @man@ page for @O_NONBLOCK@ mentions that ``[@O_NONBLOCK@] has no effect for regular files and block devices'', which means none of these three system calls are viable multiplexing strategies for these types of \io operations. 42 86 Furthermore, @epoll@ has been shown to have problems with pipes and ttys~\cit{Peter's examples in some fashion}. … … 53 97 It also supports batching multiple operations in a single system call. 
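As a concrete illustration of the POSIX AIO interface just introduced (its two polling options are discussed next), a minimal sketch of one asynchronous read completed via @aio_suspend@; on glibc this links against librt, and error handling is mostly elided:

#include <aio.h>
#include <errno.h>
#include <sys/types.h>

// Issue one asynchronous read and wait for it with aio_suspend.
ssize_t aio_read_blocking( int fd, void * buf, size_t len ) {
	struct aiocb cb = { 0 };
	cb.aio_fildes = fd;
	cb.aio_buf    = buf;
	cb.aio_nbytes = len;
	cb.aio_offset = 0;
	if ( aio_read( &cb ) < 0 ) return -1;        // submission failed

	const struct aiocb * list[1] = { &cb };
	while ( aio_error( &cb ) == EINPROGRESS )    // spinning form of polling
		aio_suspend( list, 1, 0 );               // or block until this request completes
	return aio_return( &cb );                    // final byte count or -1
}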
54 98 55 AIO offers two different approach to polling: @aio_error@ can be used as a spinning form of polling, returning @EINPROGRESS@ until the operation is completed, and @aio_suspend@ can be used similarly to @select@, @poll@ or @epoll@, to wait until one or more requests have completed.99 AIO offers two different approaches to polling: @aio_error@ can be used as a spinning form of polling, returning @EINPROGRESS@ until the operation is completed, and @aio_suspend@ can be used similarly to @select@, @poll@ or @epoll@, to wait until one or more requests have completed. 56 100 For the purpose of \io multiplexing, @aio_suspend@ is the best interface. 57 101 However, even if AIO requests can be submitted concurrently, @aio_suspend@ suffers from the same limitation as @select@ and @poll@, \ie, the interest set cannot be dynamically changed while a call to @aio_suspend@ is in progress. … … 70 114 71 115 \begin{flushright} 72 -- Linus Torvalds \cit{https://lwn.net/Articles/671657/}116 -- Linus Torvalds~\cite{AIORant} 73 117 \end{flushright} 74 118 \end{displayquote} … … 85 129 A very recent addition to Linux, @io_uring@~\cite{MAN:io_uring}, is a framework that aims to solve many of the problems listed in the above interfaces. 86 130 Like AIO, it represents \io operations as entries added to a queue. 87 But like @epoll@, new requests can be submitted while a blocking call waiting for requests to completeis already in progress.131 But like @epoll@, new requests can be submitted, while a blocking call waiting for requests to complete, is already in progress. 88 132 The @io_uring@ interface uses two ring buffers (referred to simply as rings) at its core: a submit ring to which programmers push \io requests and a completion ring from which programmers poll for completion. 89 133 … … 97 141 In the worst case, where all \glspl{thrd} are consistently blocking on \io, it devolves into 1-to-1 threading. 98 142 However, regardless of the frequency of \io operations, it achieves the fundamental goal of not blocking \glspl{proc} when \glspl{thrd} are ready to run. 99 This approach is used by languages like Go\cit{Go} and frameworks like libuv\cit{libuv}, since it has the advantage that it can easily be used across multiple operating systems.143 This approach is used by languages like Go\cit{Go}, frameworks like libuv\cit{libuv}, and web servers like Apache~\cite{apache} and Nginx~\cite{nginx}, since it has the advantage that it can easily be used across multiple operating systems. 100 144 This advantage is especially relevant for languages like Go, which offer a homogeneous \glsxtrshort{api} across all platforms. 101 145 As opposed to C, which has a very limited standard api for \io, \eg, the C standard library has no networking. … … 111 155 \section{Event-Engine} 112 156 An event engine's responsibility is to use the kernel interface to multiplex many \io operations onto few \glspl{kthrd}. 113 In concrete terms, this means \glspl{thrd} enter the engine through an interface, the event engine s then starts theoperation and parks the calling \glspl{thrd}, returning control to the \gls{proc}.157 In concrete terms, this means \glspl{thrd} enter the engine through an interface, the event engine then starts an operation and parks the calling \glspl{thrd}, returning control to the \gls{proc}. 114 158 The parked \glspl{thrd} are then rescheduled by the event engine once the desired operation has completed. 
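To make the engine's contract concrete, the following sketch shows what an \io call looks like from the calling \gls{thrd}'s side; @submit_read_sqe@, @park_until_fulfilled@ and the @future@ type are illustrative placeholders, not the actual \CFA runtime interface.

#include <sys/types.h>

// Illustrative placeholders, NOT the real runtime API:
// submit_read_sqe() performs the submission steps detailed below, tagging the
// request with &f, and park_until_fulfilled() blocks the user thread until the
// poller copies the completion result into the future and unparks it.
struct future { ssize_t result; };
extern void submit_read_sqe( int fd, void * buf, size_t len, struct future * f );
extern void park_until_fulfilled( struct future * f );

// What an I/O call looks like from the calling thread's perspective.
ssize_t async_read( int fd, void * buf, size_t len ) {
	struct future f = { 0 };
	submit_read_sqe( fd, buf, len, &f );   // &f travels in the SQE's user_data
	park_until_fulfilled( &f );            // the proc runs other threads meanwhile
	return f.result;                       // negative values are -errno, as in the CQE
}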
115 159 … … 134 178 \begin{enumerate} 135 179 \item 136 An SQE is allocated from the pre-allocated array (denoted \emph{S} in Figure~\ref{fig:iouring}).180 An SQE is allocated from the pre-allocated array \emph{S}. 137 181 This array is created at the same time as the @io_uring@ instance, is in kernel-locked memory visible by both the kernel and the application, and has a fixed size determined at creation. 138 How these entries are allocated is not important for the functioning of @io_uring@, the only requirement is that no entry is reused before the kernel has consumed it. 182 How these entries are allocated is not important for the functioning of @io_uring@; 183 the only requirement is that no entry is reused before the kernel has consumed it. 139 184 \item 140 185 The SQE is filled according to the desired operation. 141 This step is straight forward, the only detail worth mentioning is that SQEs have a @user_data@ field that must be filled in order to match submission and completion entries. 186 This step is straight forward. 187 The only detail worth mentioning is that SQEs have a @user_data@ field that must be filled in order to match submission and completion entries. 142 188 \item 143 189 The SQE is submitted to the submission ring by appending the index of the SQE to the ring following regular ring buffer steps: \lstinline{buffer[head] = item; head++}. 144 190 Since the head is visible to the kernel, some memory barriers may be required to prevent the compiler from reordering these operations. 145 191 Since the submission ring is a regular ring buffer, more than one SQE can be added at once and the head is updated only after all entries are updated. 192 Note, SQE can be filled and submitted in any order, \eg in Figure~\ref{fig:iouring} the submission order is S0, S3, S2 and S1 has not been submitted. 146 193 \item 147 194 The kernel is notified of the change to the ring using the system call @io_uring_enter@. … … 161 208 The @io_uring_enter@ system call is protected by a lock inside the kernel. 162 209 This protection means that concurrent call to @io_uring_enter@ using the same instance are possible, but there is no performance gained from parallel calls to @io_uring_enter@. 163 It is possible to do the first three submission steps in parallel, however, doing so requires careful synchronization. 210 It is possible to do the first three submission steps in parallel; 211 however, doing so requires careful synchronization. 164 212 165 213 @io_uring@ also introduces constraints on the number of simultaneous operations that can be ``in flight''. 166 Obviously, SQEs are allocated from a fixed-size array, meaning that there is a hard limit to how many SQEs can be submitted at once.167 In addition, the @io_uring_enter@ system call can fail because ``The kernel [...] ran out of resources to handle [a request]'' or ``The application is attempting to overcommit the number of requests it can havepending.''.214 First, SQEs are allocated from a fixed-size array, meaning that there is a hard limit to how many SQEs can be submitted at once. 215 Second, the @io_uring_enter@ system call can fail because ``The kernel [...] ran out of resources to handle [a request]'' or ``The application is attempting to overcommit the number of requests it can have pending.''. 168 216 This restriction means \io request bursts may have to be subdivided and submitted in chunks at a later time. 
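The four submission steps above are what the @liburing@ helper library wraps; a minimal sketch using it (error handling mostly elided), shown only to ground the steps and not as how the \CFA runtime issues requests:

#include <liburing.h>
#include <sys/uio.h>

// One read submitted through the four steps: allocate an SQE, fill it,
// append it to the submission ring, and notify the kernel.
int one_read( struct io_uring * ring, int fd, struct iovec * iov ) {
	struct io_uring_sqe * sqe = io_uring_get_sqe( ring );   // step 1: allocate from the array S
	if ( ! sqe ) return -1;                                 // ring full: must retry later
	io_uring_prep_readv( sqe, fd, iov, 1, 0 );              // step 2: fill in the operation
	io_uring_sqe_set_data( sqe, iov );                      // user_data matches the CQE later
	return io_uring_submit( ring );                         // steps 3-4: advance tail + io_uring_enter
}

// The ring itself is created once, e.g. with: io_uring_queue_init( 256, &ring, 0 );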
169 217 170 218 \subsection{Multiplexing \io: Submission} 219 171 220 The submission side is the most complicated aspect of @io_uring@ and the completion side effectively follows from the design decisions made in the submission side. 172 While it is possible to do the first steps of submission in parallel, the duration of the system call scales with number of entries submitted. 221 While there is freedom in designing the submission side, there are some realities of @io_uring@ that must be taken into account. 222 It is possible to do the first steps of submission in parallel; 223 however, the duration of the system call scales with the number of entries submitted. 173 224 The consequence is that the amount of parallelism used to prepare submissions for the next system call is limited. 174 225 Beyond this limit, the length of the system call is the throughput limiting factor. 175 I concluded from early experiments that preparing submissions seems to take at most as long as the system call itself, which means that with a single @io_uring@ instance, there is no benefit in terms of \io throughput to having more than two \glspl{hthrd}. 176 Therefore the design of the submission engine must manage multiple instances of @io_uring@ running in parallel, effectively sharding @io_uring@ instances. 177 Similarly to scheduling, this sharding can be done privately, \ie, one instance per \glspl{proc}, in decoupled pools, \ie, a pool of \glspl{proc} use a pool of @io_uring@ instances without one-to-one coupling between any given instance and any given \gls{proc}, or some mix of the two. 178 Since completions are sent to the instance where requests were submitted, all instances with pending operations must be polled continously 179 \footnote{As will be described in Chapter~\ref{practice}, this does not translate into constant cpu usage.}. 226 I concluded from early experiments that preparing submissions seems to take almost as long as the system call itself, which means that with a single @io_uring@ instance, there is no benefit in terms of \io throughput to having more than two \glspl{hthrd}. 227 Therefore, the design of the submission engine must manage multiple instances of @io_uring@ running in parallel, effectively sharding @io_uring@ instances. 228 Since completions are sent to the instance where requests were submitted, all instances with pending operations must be polled continuously\footnote{ 229 As described in Chapter~\ref{practice}, this does not translate into constant CPU usage.}. 180 230 Note that once an operation completes, there is nothing that ties it to the @io_uring@ instance that handled it. 181 There is nothing preventing a new operation with, for example,the same file descriptors to a different @io_uring@ instance.231 There is nothing preventing a new operation with, \eg the same file descriptors to a different @io_uring@ instance. 182 232 183 233 A complicating aspect of submission is @io_uring@'s support for chains of operations, where the completion of an operation triggers the submission of the next operation on the link. 184 234 SQEs forming a chain must be allocated from the same instance and must be contiguous in the Submission Ring (see Figure~\ref{fig:iouring}). 185 The consequence of this feature is that filling SQEs can be arbitrarly complex and therefore users may need to run arbitrary code between allocation and submission. 186 Supporting chains is a requirement of the \io subsystem, but it is still valuable. 
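For reference, a chain as described above is expressed by flagging every SQE except the last with @IOSQE_IO_LINK@; a @liburing@-based sketch (error handling elided) of a write that the kernel only starts after the preceding read completes:

#include <liburing.h>
#include <sys/uio.h>

// Chain: the write is only started by the kernel once the read completes.
// Both SQEs come from the same instance and are contiguous in the ring.
void chained_copy( struct io_uring * ring, int src, int dst, struct iovec * iov ) {
	struct io_uring_sqe * r = io_uring_get_sqe( ring );
	io_uring_prep_readv( r, src, iov, 1, 0 );
	r->flags |= IOSQE_IO_LINK;                  // link this SQE to the next one

	struct io_uring_sqe * w = io_uring_get_sqe( ring );
	io_uring_prep_writev( w, dst, iov, 1, 0 );  // runs only if the read succeeds

	io_uring_submit( ring );
}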
187 Support for this feature can be fulfilled simply to supporting arbitrary user code between allocation and submission. 188 189 \subsubsection{Public Instances} 190 One approach is to have multiple shared instances. 191 \Glspl{thrd} attempting \io operations pick one of the available instances and submit operations to that instance. 192 Since there is no coupling between \glspl{proc} and @io_uring@ instances in this approach, \glspl{thrd} running on more than one \gls{proc} can attempt to submit to the same instance concurrently. 193 Since @io_uring@ effectively sets the amount of sharding needed to avoid contention on its internal locks, performance in this approach is based on two aspects: the synchronization needed to submit does not induce more contention than @io_uring@ already does and the scheme to route \io requests to specific @io_uring@ instances does not introduce contention. 194 This second aspect has an oversized importance because it comes into play before the sharding of instances, and as such, all \glspl{hthrd} can contend on the routing algorithm. 195 196 Allocation in this scheme can be handled fairly easily. 197 Free SQEs, \ie, SQEs that aren't currently being used to represent a request, can be written to safely and have a field called @user_data@ which the kernel only reads to copy to @cqe@s. 198 Allocation also requires no ordering guarantee as all free SQEs are interchangeable. 199 This requires a simple concurrent bag. 200 The only added complexity is that the number of SQEs is fixed, which means allocation can fail. 201 202 Allocation failures need to be pushed up to a routing algorithm: \glspl{thrd} attempting \io operations must not be directed to @io_uring@ instances without sufficient SQEs available. 203 Furthermore, the routing algorithm should block operations up-front if none of the instances have available SQEs. 204 205 Once an SQE is allocated, \glspl{thrd} can fill them normally, they simply need to keep track of the SQE index and which instance it belongs to. 206 207 Once an SQE is filled in, what needs to happen is that the SQE must be added to the submission ring buffer, an operation that is not thread-safe on itself, and the kernel must be notified using the @io_uring_enter@ system call. 208 The submission ring buffer is the same size as the pre-allocated SQE buffer, therefore pushing to the ring buffer cannot fail 209 \footnote{This is because it is invalid to have the same \lstinline{sqe} multiple times in the ring buffer.}. 210 However, as mentioned, the system call itself can fail with the expectation that it will be retried once some of the already submitted operations complete. 211 Since multiple SQEs can be submitted to the kernel at once, it is important to strike a balance between batching and latency. 212 Operations that are ready to be submitted should be batched together in few system calls, but at the same time, operations should not be left pending for long period of times before being submitted. 213 This can be handled by either designating one of the submitting \glspl{thrd} as the being responsible for the system call for the current batch of SQEs or by having some other party regularly submitting all ready SQEs, \eg, the poller \gls{thrd} mentioned later in this section. 
214 215 In the case of designating a \gls{thrd}, ideally, when multiple \glspl{thrd} attempt to submit operations to the same @io_uring@ instance, all requests would be batched together and one of the \glspl{thrd} would do the system call on behalf of the others, referred to as the \newterm{submitter}. 216 In practice however, it is important that the \io requests are not left pending indefinitely and as such, it may be required to have a ``next submitter'' that guarentees everything that is missed by the current submitter is seen by the next one. 217 Indeed, as long as there is a ``next'' submitter, \glspl{thrd} submitting new \io requests can move on, knowing that some future system call will include their request. 218 Once the system call is done, the submitter must also free SQEs so that the allocator can reused them. 219 220 Finally, the completion side is much simpler since the @io_uring@ system call enforces a natural synchronization point. 221 Polling simply needs to regularly do the system call, go through the produced CQEs and communicate the result back to the originating \glspl{thrd}. 222 Since CQEs only own a signed 32 bit result, in addition to the copy of the @user_data@ field, all that is needed to communicate the result is a simple future~\cite{wiki:future}. 223 If the submission side does not designate submitters, polling can also submit all SQEs as it is polling events. 224 A simple approach to polling is to allocate a \gls{thrd} per @io_uring@ instance and simply let the poller \glspl{thrd} poll their respective instances when scheduled. 225 226 With this pool of instances approach, the big advantage is that it is fairly flexible. 227 It does not impose restrictions on what \glspl{thrd} submitting \io operations can and cannot do between allocations and submissions. 228 It also can gracefully handle running out of ressources, SQEs or the kernel returning @EBUSY@. 229 The down side to this is that many of the steps used for submitting need complex synchronization to work properly. 230 The routing and allocation algorithm needs to keep track of which ring instances have available SQEs, block incoming requests if no instance is available, prevent barging if \glspl{thrd} are already queued up waiting for SQEs and handle SQEs being freed. 231 The submission side needs to safely append SQEs to the ring buffer, correctly handle chains, make sure no SQE is dropped or left pending forever, notify the allocation side when SQEs can be reused and handle the kernel returning @EBUSY@. 232 All this synchronization may have a significant cost and, compared to the next approach presented, this synchronization is entirely overhead. 235 The consequence of this feature is that filling SQEs can be arbitrarily complex, and therefore, users may need to run arbitrary code between allocation and submission. 236 Supporting chains is not a requirement of the \io subsystem, but it is still valuable. 237 Support for this feature can be fulfilled simply by supporting arbitrary user code between allocation and submission. 238 239 Similar to scheduling, sharding @io_uring@ instances can be done privately, \ie, one instance per \glspl{proc}, in decoupled pools, \ie, a pool of \glspl{proc} use a pool of @io_uring@ instances without one-to-one coupling between any given instance and any given \gls{proc}, or some mix of the two. 240 These three sharding approaches are analyzed. 233 241 234 242 \subsubsection{Private Instances} 235 Another approach is to simply create one ring instance per \gls{proc}. 
236 This alleviates the need for synchronization on the submissions, requiring only that \glspl{thrd} are not interrupted in between two submission steps. 237 This is effectively the same requirement as using @thread_local@ variables. 238 Since SQEs that are allocated must be submitted to the same ring, on the same \gls{proc}, this effectively forces the application to submit SQEs in allocation order 239 \footnote{The actual requirement is that \glspl{thrd} cannot context switch between allocation and submission. 240 This requirement means that from the subsystem's point of view, the allocation and submission are sequential. 241 To remove this requirement, a \gls{thrd} would need the ability to ``yield to a specific \gls{proc}'', \ie, park with the promise that it will be run next on a specific \gls{proc}, the \gls{proc} attached to the correct ring.} 242 , greatly simplifying both allocation and submission. 243 In this design, allocation and submission form a partitionned ring buffer as shown in Figure~\ref{fig:pring}. 244 Once added to the ring buffer, the attached \gls{proc} has a significant amount of flexibility with regards to when to do the system call. 243 The private approach creates one ring instance per \gls{proc}, \ie one-to-one coupling. 244 This alleviates the need for synchronization on the submissions, requiring only that \glspl{thrd} are not time-sliced during submission steps. 245 This requirement is the same as accessing @thread_local@ variables, where a \gls{thrd} is accessing kernel-thread data, is time-sliced, and continues execution on another kernel thread but is now accessing the wrong data. 246 This failure is the serially reusable problem~\cite{SeriallyReusable}. 247 Hence, allocated SQEs must be submitted to the same ring on the same \gls{proc}, which effectively forces the application to submit SQEs in allocation order.\footnote{ 248 To remove this requirement, a \gls{thrd} needs the ability to ``yield to a specific \gls{proc}'', \ie, park with the guarantee it unparks on a specific \gls{proc}, \ie the \gls{proc} attached to the correct ring.} 249 From the subsystem's point of view, the allocation and submission are sequential, greatly simplifying both. 250 In this design, allocation and submission form a partitioned ring buffer as shown in Figure~\ref{fig:pring}. 251 Once added to the ring buffer, the attached \gls{proc} has a significant amount of flexibility with regards to when to perform the system call. 245 252 Possible options are: when the \gls{proc} runs out of \glspl{thrd} to run, after running a given number of \glspl{thrd}, etc. 246 253 … … 254 261 \end{figure} 255 262 256 This approach has the advantage that it does not require much of the synchronization needed in the shared approach. 257 This comes at the cost that \glspl{thrd} submitting \io operations have less flexibility, they cannot park or yield, and several exceptional cases are handled poorly. 258 Instances running out of SQEs cannot run \glspl{thrd} wanting to do \io operations, in such a case the \gls{thrd} needs to be moved to a different \gls{proc}, the only current way of achieving this would be to @yield()@ hoping to be scheduled on a different \gls{proc}, which is not guaranteed. 259 260 A more involved version of this approach can seem to solve most of these problems, using a pattern called \newterm{helping}. 
261 \Glspl{thrd} that wish to submit \io operations but cannot do so 262 \footnote{either because of an allocation failure or because they were migrate to a different \gls{proc} between allocation and submission} 263 create an object representing what they wish to achieve and add it to a list somewhere. 264 For this particular problem, one solution would be to have a list of pending submissions per \gls{proc} and a list of pending allocations, probably per cluster. 265 The problem with these ``solutions'' is that they are still bound by the strong coupling between \glspl{proc} and @io_uring@ instances. 266 These data structures would allow moving \glspl{thrd} to a specific \gls{proc} when the current \gls{proc} cannot fulfill the \io request. 267 268 Imagine a simple case with two \glspl{thrd} on two \glspl{proc}, one \gls{thrd} submits an \io operation and then sets a flag, the other \gls{thrd} spins until the flag is set. 269 If the first \gls{thrd} is preempted between allocation and submission and moves to the other \gls{proc}, the original \gls{proc} could start running the spinning \gls{thrd}. 270 If this happens, the helping ``solution'' is for the \io \gls{thrd}to added append an item to the submission list of the \gls{proc} where the allocation was made. 263 This approach has the advantage that it does not require much of the synchronization needed in a shared approach. 264 However, this benefit means \glspl{thrd} submitting \io operations have less flexibility: they cannot park or yield, and several exceptional cases are handled poorly. 265 Instances running out of SQEs cannot run \glspl{thrd} wanting to do \io operations. 266 In this case, the \io \gls{thrd} needs to be moved to a different \gls{proc}, and the only current way of achieving this is to @yield()@ hoping to be scheduled on a different \gls{proc} with free SQEs, which is not guaranteed. 267 268 A more involved version of this approach tries to solve these problems using a pattern called \newterm{helping}. 269 \Glspl{thrd} that cannot submit \io operations, either because of an allocation failure or migration to a different \gls{proc} between allocation and submission, create an \io object and add it to a list of pending submissions per \gls{proc} and a list of pending allocations, probably per cluster. 270 While there is still the strong coupling between \glspl{proc} and @io_uring@ instances, these data structures allow moving \glspl{thrd} to a specific \gls{proc}, when the current \gls{proc} cannot fulfill the \io request. 271 272 Imagine a simple scenario with two \glspl{thrd} on two \glspl{proc}, where one \gls{thrd} submits an \io operation and then sets a flag, while the other \gls{thrd} spins until the flag is set. 273 Assume both \glspl{thrd} are running on the same \gls{proc}, and the \io \gls{thrd} is preempted between allocation and submission, moved to the second \gls{proc}, and the original \gls{proc} starts running the spinning \gls{thrd}. 274 In this case, the helping solution has the \io \gls{thrd} append an \io object to the submission list of the first \gls{proc}, where the allocation was made. 271 275 No other \gls{proc} can help the \gls{thrd} since @io_uring@ instances are strongly coupled to \glspl{proc}. 
272 However, in this case, the \gls{proc} is unable to help because it is executing the spinning \gls{thrd} mentioned when first expression this case 273 \footnote{This particular example is completely artificial, but in the presence of many more \glspl{thrd}, it is not impossible that this problem would arise ``in the wild''. 274 Furthermore, this pattern is difficult to reliably detect and avoid.} 275 resulting in a deadlock. 276 Once in this situation, the only escape is to interrupted the execution of the \gls{thrd}, either directly or due to regular preemption, only then can the \gls{proc} take the time to handle the pending request to help. 277 Interrupting \glspl{thrd} for this purpose is far from desireable, the cost is significant and the situation may be hard to detect. 278 However, a more subtle reason why interrupting the \gls{thrd} is not a satisfying solution is that the \gls{proc} is not actually using the instance it is tied to. 279 If it were to use it, then helping could be done as part of the usage. 276 However, the \io \gls{proc} is unable to help because it is executing the spinning \gls{thrd} resulting in a deadlock. 277 While this example is artificial, in the presence of many \glspl{thrd}, it is possible for this problem to arise ``in the wild''. 278 Furthermore, this pattern is difficult to reliably detect and avoid. 279 Once in this situation, the only escape is to interrupted the spinning \gls{thrd}, either directly or via some regular preemption (\eg time slicing). 280 Having to interrupt \glspl{thrd} for this purpose is costly, the latency can be large between interrupts, and the situation may be hard to detect. 281 % However, a more important reason why interrupting the \gls{thrd} is not a satisfying solution is that the \gls{proc} is using the instance it is tied to. 282 % If it were to use it, then helping could be done as part of the usage. 280 283 Interrupts are needed here entirely because the \gls{proc} is tied to an instance it is not using. 281 Therefore a more satisfying solution would be for the \gls{thrd} submitting the operation to simply notice that the instance is unused and simply go ahead and use it. 282 This is the approach presented next. 284 Therefore, a more satisfying solution is for the \gls{thrd} submitting the operation to notice that the instance is unused and simply go ahead and use it. 285 This approach is presented shortly. 286 287 \subsubsection{Public Instances} 288 The public approach creates decoupled pools of @io_uring@ instances and processors, \ie without one-to-one coupling. 289 \Glspl{thrd} attempting an \io operation pick one of the available instances and submit the operation to that instance. 290 Since there is no coupling between @io_uring@ instances and \glspl{proc} in this approach, \glspl{thrd} running on more than one \gls{proc} can attempt to submit to the same instance concurrently. 291 Because @io_uring@ effectively sets the amount of sharding needed to avoid contention on its internal locks, performance in this approach is based on two aspects: 292 \begin{itemize} 293 \item 294 The synchronization needed to submit does not induce more contention than @io_uring@ already does. 295 \item 296 The scheme to route \io requests to specific @io_uring@ instances does not introduce contention. 297 This aspect has an oversized importance because it comes into play before the sharding of instances, and as such, all \glspl{hthrd} can contend on the routing algorithm. 
298 \end{itemize} 299 300 Allocation in this scheme is fairly easy. 301 Free SQEs, \ie, SQEs that are not currently being used to represent a request, can be written to safely and have a field called @user_data@ that the kernel only reads to copy to @cqe@s. 302 Allocation also requires no ordering guarantee as all free SQEs are interchangeable. 303 % This requires a simple concurrent bag. 304 The only added complexity is that the number of SQEs is fixed, which means allocation can fail. 305 306 Allocation failures need to be pushed to a routing algorithm: \glspl{thrd} attempting \io operations must not be directed to @io_uring@ instances without sufficient SQEs available. 307 Furthermore, the routing algorithm should block operations up-front, if none of the instances have available SQEs. 308 309 Once an SQE is allocated, \glspl{thrd} insert the \io request information, and keep track of the SQE index and the instance it belongs to. 310 311 Once an SQE is filled in, it is added to the submission ring buffer, an operation that is not thread-safe, and then the kernel must be notified using the @io_uring_enter@ system call. 312 The submission ring buffer is the same size as the pre-allocated SQE buffer, therefore pushing to the ring buffer cannot fail because it is invalid to have the same \lstinline{sqe} multiple times in a ring buffer. 313 However, as mentioned, the system call itself can fail with the expectation that it can be retried once some submitted operations complete. 314 315 Since multiple SQEs can be submitted to the kernel at once, it is important to strike a balance between batching and latency. 316 Operations that are ready to be submitted should be batched together in few system calls, but at the same time, operations should not be left pending for long period of times before being submitted. 317 Balancing submission can be handled by either designating one of the submitting \glspl{thrd} as the being responsible for the system call for the current batch of SQEs or by having some other party regularly submitting all ready SQEs, \eg, the poller \gls{thrd} mentioned later in this section. 318 319 Ideally, when multiple \glspl{thrd} attempt to submit operations to the same @io_uring@ instance, all requests should be batched together and one of the \glspl{thrd} is designated to do the system call on behalf of the others, called the \newterm{submitter}. 320 However, in practice, \io requests must be handed promptly so there is a need to guarantee everything missed by the current submitter is seen by the next one. 321 Indeed, as long as there is a ``next'' submitter, \glspl{thrd} submitting new \io requests can move on, knowing that some future system call includes their request. 322 Once the system call is done, the submitter must also free SQEs so that the allocator can reused them. 323 324 Finally, the completion side is much simpler since the @io_uring@ system-call enforces a natural synchronization point. 325 Polling simply needs to regularly do the system call, go through the produced CQEs and communicate the result back to the originating \glspl{thrd}. 326 Since CQEs only own a signed 32 bit result, in addition to the copy of the @user_data@ field, all that is needed to communicate the result is a simple future~\cite{wiki:future}. 327 If the submission side does not designate submitters, polling can also submit all SQEs as it is polling events. 
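To make the completion path concrete, the following is a small sketch of how a poller could fulfill a per-request future from a CQE; the @future_t@ type and helper names are assumptions for illustration, not the \CFA runtime's future or @io_uring@ wrappers.
\begin{lstlisting}
// Sketch: the poller drains CQEs and fulfills the future whose address was
// stored in user_data at submission time.
#include <stdint.h>
#include <stdatomic.h>

typedef struct {
	atomic_int fulfilled;                 // 0 until the result is published
	int32_t result;                       // signed 32-bit result copied from the CQE
} future_t;

struct cqe { uint64_t user_data; int32_t res; };   // simplified stand-in for struct io_uring_cqe

// Called by the poller for each completion found in the completion ring.
static void handle_cqe( const struct cqe * c ) {
	future_t * f = (future_t *)(uintptr_t)c->user_data;   // submission stored the future's address here
	f->result = c->res;
	atomic_store_explicit( &f->fulfilled, 1, memory_order_release );  // publish, then unpark the waiter
}
\end{lstlisting}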
328 A simple approach to polling is to allocate a \gls{thrd} per @io_uring@ instance and simply let the poller \glspl{thrd} poll their respective instances when scheduled. 329 330 With the pool of SEQ instances approach, the big advantage is that it is fairly flexible. 331 It does not impose restrictions on what \glspl{thrd} submitting \io operations can and cannot do between allocations and submissions. 332 It also can gracefully handle running out of resources, SQEs or the kernel returning @EBUSY@. 333 The down side to this approach is that many of the steps used for submitting need complex synchronization to work properly. 334 The routing and allocation algorithm needs to keep track of which ring instances have available SQEs, block incoming requests if no instance is available, prevent barging if \glspl{thrd} are already queued up waiting for SQEs and handle SQEs being freed. 335 The submission side needs to safely append SQEs to the ring buffer, correctly handle chains, make sure no SQE is dropped or left pending forever, notify the allocation side when SQEs can be reused, and handle the kernel returning @EBUSY@. 336 All this synchronization has a significant cost, and compared to the private-instance approach, this synchronization is entirely overhead. 283 337 284 338 \subsubsection{Instance borrowing} 285 Both of the approaches presented above have undesirable aspects that stem from too loose or too tight coupling between @io_uring@ and \glspl{proc}. 286 In the first approach, loose coupling meant that all operations have synchronization overhead that a tighter coupling can avoid. 287 The second approach on the other hand suffers from tight coupling causing problems when the \gls{proc} do not benefit from the coupling. 288 While \glspl{proc} are continously issuing \io operations tight coupling is valuable since it avoids synchronization costs. 289 However, in unlikely failure cases or when \glspl{proc} are not making use of their instance, tight coupling is no longer advantageous. 290 A compromise between these approaches would be to allow tight coupling but have the option to revoke this coupling dynamically when failure cases arise. 291 I call this approach ``instance borrowing''\footnote{While it looks similar to work-sharing and work-stealing, I think it is different enough from either to warrant a different verb to avoid confusion.}. 292 293 In this approach, each cluster owns a pool of @io_uring@ instances managed by an arbiter. 339 Both of the prior approaches have undesirable aspects that stem from tight or loose coupling between @io_uring@ and \glspl{proc}. 340 The first approach suffers from tight coupling causing problems when a \gls{proc} does not benefit from the coupling. 341 The second approach suffers from loose coupling causing operations to have synchronization overhead, which tighter coupling avoids. 342 When \glspl{proc} are continuously issuing \io operations, tight coupling is valuable since it avoids synchronization costs. 343 However, in unlikely failure cases or when \glspl{proc} are not using their instances, tight coupling is no longer advantageous. 344 A compromise between these approaches is to allow tight coupling but have the option to revoke the coupling dynamically when failure cases arise. 
345 I call this approach \newterm{instance borrowing}.\footnote{ 346 While instance borrowing looks similar to work sharing and stealing, I think it is different enough to warrant a different verb to avoid confusion.} 347 348 In this approach, each cluster (see Figure~\ref{fig:system}) owns a pool of @io_uring@ instances managed by an \newterm{arbiter}. 294 349 When a \gls{thrd} attempts to issue an \io operation, it ask for an instance from the arbiter and issues requests to that instance. 295 However, in doing so it ties to the instance to the \gls{proc} it is currentlyrunning on.296 This coupling is kept until the arbiter decides to revoke it, taking back the instance and reverting the \gls{proc} to its initial state with respect to \io.297 This tight coupling means that synchronization can be minimal since only one \gls{proc} can use the instance at a ny giventime, akin to the private instances approach.298 However, where it differs is that revocation from the arbitermeans this approach does not suffer from the deadlock scenario described above.350 This instance is now bound to the \gls{proc} the \gls{thrd} is running on. 351 This binding is kept until the arbiter decides to revoke it, taking back the instance and reverting the \gls{proc} to its initial state with respect to \io. 352 This tight coupling means that synchronization can be minimal since only one \gls{proc} can use the instance at a time, akin to the private instances approach. 353 However, it differs in that revocation by the arbiter (an interrupt) means this approach does not suffer from the deadlock scenario described above. 299 354 300 355 Arbitration is needed in the following cases: 301 356 \begin{enumerate} 302 \item The current \gls{proc} does not currentlyhold an instance.357 \item The current \gls{proc} does not hold an instance. 303 358 \item The current instance does not have sufficient SQEs to satisfy the request. 304 \item The current \gls{proc} has the wrong instance, this happens if the submitting \gls{thrd} context-switched between allocation and submission. 305 I will refer to these as \newterm{External Submissions}. 359 \item The current \gls{proc} has a wrong instance, this happens if the submitting \gls{thrd} context-switched between allocation and submission, called \newterm{external submissions}. 306 360 \end{enumerate} 307 However, even when the arbiter is not directly needed, \glspl{proc} need to make sure that their ownership of the instance is not being revoked.308 This can be accomplished by a lock-less handshake\footnote{Note that the handshake is not Lock-\emph{Free} since it lacks the proper progress guarantee.}. 361 However, even when the arbiter is not directly needed, \glspl{proc} need to make sure that their instance ownership is not being revoked, which is accomplished by a lock-\emph{less} handshake.\footnote{ 362 Note the handshake is not lock \emph{free} since it lacks the proper progress guarantee.} 309 363 A \gls{proc} raises a local flag before using its borrowed instance and checks if the instance is marked as revoked or if the arbiter has raised its flag. 310 If not it proceeds, otherwise it delegates the operation to the arbiter.364 If not, it proceeds, otherwise it delegates the operation to the arbiter. 311 365 Once the operation is completed, the \gls{proc} lowers its local flag. 
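A minimal sketch of this handshake, including the arbiter's side described next, can be written with two atomic flags; the names are illustrative and the fragment omits everything except the ownership protocol.
\begin{lstlisting}
// Sketch of the lock-less (but not lock-free) ownership handshake.
#include <stdatomic.h>
#include <stdbool.h>

struct borrowed_instance {
	atomic_bool in_use;    // the local flag raised by the owning processor
	atomic_bool revoked;   // raised by the arbiter when reclaiming the instance
};

// Processor side: returns true if the instance can be used directly,
// false if the operation must instead be delegated to the arbiter.
static bool try_use( struct borrowed_instance * inst ) {
	atomic_store( &inst->in_use, true );          // sequentially consistent store
	if ( atomic_load( &inst->revoked ) ) {        // arbiter is (or was) reclaiming it
		atomic_store( &inst->in_use, false );
		return false;
	}
	return true;
}
static void done_use( struct borrowed_instance * inst ) {
	atomic_store( &inst->in_use, false );
}

// Arbiter side: mark the instance revoked, then wait for the owner to leave.
static void revoke( struct borrowed_instance * inst ) {
	atomic_store( &inst->revoked, true );
	while ( atomic_load( &inst->in_use ) ) { /* spin: hence not lock-free */ }
}
\end{lstlisting}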
312 366 313 Correspondingly, before revoking an instance the arbiter marks the instance and then waits for the \gls{proc} using it to lower its local flag. 367 Correspondingly, before revoking an instance, the arbiter marks the instance and then waits for the \gls{proc} using it to lower its local flag. 314 368 Only then does it reclaim the instance and potentially assign it to another \gls{proc}. 315 369 … … 323 377 324 378 \paragraph{External Submissions} are handled by the arbiter by revoking the appropriate instance and adding the submission to the submission ring. 325 There is no need to immediately revoke the instance however. 379 However, there is no need to immediately revoke the instance. 326 380 External submissions must simply be added to the ring before the next system call, \ie, when the submission ring is flushed. 327 This means that whoever is responsible for the system call first checks if the instance has any external submissions. 328 If it is the case, it asks the arbiter to revoke the instance and add the external submissions to the ring. 329 330 \paragraph{Pending Allocations} can be more complicated to handle. 331 If the arbiter has available instances, the arbiter can attempt to directly hand over the instance and satisfy the request. 332 Otherwise it must hold onto the list of threads until SQEs are made available again. 333 This handling becomes that much more complex if pending allocation require more than one SQE, since the arbiter must make a decision between statisfying requests in FIFO ordering or satisfy requests for fewer SQEs first. 334 335 While this arbiter has the potential to solve many of the problems mentionned in above, it also introduces a significant amount of complexity. 381 This means whoever is responsible for the system call, first checks if the instance has any external submissions. 382 If so, it asks the arbiter to revoke the instance and add the external submissions to the ring. 383 384 \paragraph{Pending Allocations} are handled by the arbiter when it has available instances and can directly hand over the instance and satisfy the request. 385 Otherwise, it must hold onto the list of threads until SQEs are made available again. 386 This handling is more complex when an allocation requires multiple SQEs, since the arbiter must make a decision between satisfying requests in FIFO ordering or for fewer SQEs. 387 388 While an arbiter has the potential to solve many of the problems mentioned above, it also introduces a significant amount of complexity. 336 389 Tracking which processors are borrowing which instances and which instances have SQEs available ends-up adding a significant synchronization prelude to any I/O operation. 337 390 Any submission must start with a handshake that pins the currently borrowed instance, if available. 338 391 An attempt to allocate is then made, but the arbiter can concurrently be attempting to allocate from the same instance from a different \gls{hthrd}. 339 Once the allocation is completed, the submission must still check that the instance is still burrowed before attempt to flush. 340 These extra synchronization steps end-up having a similar cost to the multiple shared instances approach. 392 Once the allocation is completed, the submission must check that the instance is still borrowed before attempting to flush. 393 These synchronization steps turn out to have a similar cost to the multiple shared-instances approach.
341 394 Furthermore, if the number of instances does not match the number of processors actively submitting I/O, the system can fall into a state where instances are constantly being revoked and end-up cycling the processors, which leads to significant cache deterioration. 342 Because ofthese reasons, this approach, which sounds promising on paper, does not improve on the private instance approach in practice.395 For these reasons, this approach, which sounds promising on paper, does not improve on the private instance approach in practice. 343 396 344 397 \subsubsection{Private Instances V2} 345 398 346 347 348 399 % Verbs of this design 349 400 350 401 % Allocation: obtaining an sqe from which to fill in the io request, enforces the io instance to use since it must be the one which provided the sqe. Must interact with the arbiter if the instance does not have enough sqe for the allocation. (Typical allocation will ask for only one sqe, but chained sqe must be allocated from the same context so chains of sqe must be allocated in bulks) 351 402 352 % Submi tion: simply adds the sqe(s) to some data structure to communicate that they are ready to go. This operation can't fail because there are as many spots in the submit buffer than there are sqes. Must interact with the arbiter only if the thread was moved between the allocation and the submission.403 % Submission: simply adds the sqe(s) to some data structure to communicate that they are ready to go. This operation can't fail because there are as many spots in the submit buffer than there are sqes. Must interact with the arbiter only if the thread was moved between the allocation and the submission. 353 404 354 405 % Flushing: Taking all the sqes that were submitted and making them visible to the kernel, also counting them in order to figure out what to_submit should be. Must be thread-safe with submission. Has to interact with the Arbiter if there are external submissions. Can't simply use a protected queue because adding to the array is not safe if the ring is still available for submitters. Flushing must therefore: check if there are external pending requests if so, ask the arbiter to flush otherwise use the fast flush operation. … … 357 408 358 409 % Handle: process all the produced cqe. No need to interact with any of the submission operations or the arbiter. 359 360 361 410 362 411 … … 404 453 405 454 \section{Interface} 406 Finally, the last important part of the \io subsystem is it's interface. There are multiple approaches that can be offered to programmers, each with advantages and disadvantages. The new \io subsystem can replace the C runtime's API or extend it. And in the later case the interface can go from very similar to vastly different. The following sections discuss some useful options using @read@ as an example. The standard Linux interface for C is : 407 408 @ssize_t read(int fd, void *buf, size_t count);@ 455 456 The last important part of the \io subsystem is its interface. 457 There are multiple approaches that can be offered to programmers, each with advantages and disadvantages. 458 The new \io subsystem can replace the C runtime API or extend it, and in the later case, the interface can go from very similar to vastly different. 459 The following sections discuss some useful options using @read@ as an example. 
460 The standard Linux interface for C is : 461 \begin{lstlisting} 462 ssize_t read(int fd, void *buf, size_t count); 463 \end{lstlisting} 409 464 410 465 \subsection{Replacement} 411 466 Replacing the C \glsxtrshort{api} is the more intrusive and draconian approach. 412 467 The goal is to convince the compiler and linker to replace any calls to @read@ to direct them to the \CFA implementation instead of glibc's. 413 This has the advantage of potentiallyworking transparently and supporting existing binaries without needing recompilation.468 This rerouting has the advantage of working transparently and supporting existing binaries without needing recompilation. 414 469 It also offers a, presumably, well known and familiar API that C programmers can simply continue to work with. 415 However, this approach also entails a plethora of subtle technical challenges which generally boils down to making a perfect replacement.470 However, this approach also entails a plethora of subtle technical challenges, which generally boils down to making a perfect replacement. 416 471 If the \CFA interface replaces only \emph{some} of the calls to glibc, then this can easily lead to esoteric concurrency bugs. 417 Since the gcc ecosystems does not offer a scheme for suchperfect replacement, this approach was rejected as being laudable but infeasible.472 Since the gcc ecosystems does not offer a scheme for perfect replacement, this approach was rejected as being laudable but infeasible. 418 473 419 474 \subsection{Synchronous Extension} 420 An other interface option is to simply offer an interface that is different in name only. For example: 421 422 @ssize_t cfa_read(int fd, void *buf, size_t count);@ 423 424 \noindent This is much more feasible but still familiar to C programmers. 425 It comes with the caveat that any code attempting to use it must be recompiled, which can be a big problem considering the amount of existing legacy C binaries. 475 Another interface option is to offer an interface different in name only. 476 For example: 477 \begin{lstlisting} 478 ssize_t cfa_read(int fd, void *buf, size_t count); 479 \end{lstlisting} 480 This approach is feasible and still familiar to C programmers. 481 It comes with the caveat that any code attempting to use it must be recompiled, which is a problem considering the amount of existing legacy C binaries. 426 482 However, it has the advantage of implementation simplicity. 483 Finally, there is a certain irony to using a blocking synchronous interfaces for a feature often referred to as ``non-blocking'' \io. 427 484 428 485 \subsection{Asynchronous Extension} 429 It is important to mention that there is a certain irony to using only synchronous, therefore blocking, interfaces for a feature often referred to as ``non-blocking'' \io. 430 A fairly traditional way of doing this is using futures\cit{wikipedia futures}. 431 As simple way of doing so is as follows: 432 433 @future(ssize_t) read(int fd, void *buf, size_t count);@ 434 435 \noindent Note that this approach is not necessarily the most idiomatic usage of futures. 436 The definition of read above ``returns'' the read content through an output parameter which cannot be synchronized on. 437 A more classical asynchronous API could look more like: 438 439 @future([ssize_t, void *]) read(int fd, size_t count);@ 440 441 \noindent However, this interface immediately introduces memory lifetime challenges since the call must effectively allocate a buffer to be returned. 
442 Because of the performance implications of this, the first approach is considered preferable as it is more familiar to C programmers. 443 444 \subsection{Interface directly to \lstinline{io_uring}} 445 Finally, an other interface that can be relevant is to simply expose directly the underlying \texttt{io\_uring} interface. For example: 446 447 @array(SQE, want) cfa_io_allocate(int want);@ 448 449 @void cfa_io_submit( const array(SQE, have) & );@ 450 451 \noindent This offers more flexibility to users wanting to fully use all of the \texttt{io\_uring} features. 486 A fairly traditional way of providing asynchronous interactions is using a future mechanism~\cite{multilisp}, \eg: 487 \begin{lstlisting} 488 future(ssize_t) read(int fd, void *buf, size_t count); 489 \end{lstlisting} 490 where the generic @future@ is fulfilled when the read completes and it contains the number of bytes read, which may be less than the number of bytes requested. 491 The data read is placed in @buf@. 492 The problem is that both the bytes read and data form the synchronization object, not just the bytes read. 493 Hence, the buffer cannot be reused until the operation completes but the synchronization does not cover the buffer. 494 A classical asynchronous API is: 495 \begin{lstlisting} 496 future([ssize_t, void *]) read(int fd, size_t count); 497 \end{lstlisting} 498 where the future tuple covers the components that require synchronization. 499 However, this interface immediately introduces memory lifetime challenges since the call must effectively allocate a buffer to be returned. 500 Because of the performance implications of this API, the first approach is considered preferable as it is more familiar to C programmers. 501 502 \subsection{Direct \lstinline{io_uring} Interface} 503 The last interface directly exposes the underlying @io_uring@ interface, \eg: 504 \begin{lstlisting} 505 array(SQE, want) cfa_io_allocate(int want); 506 void cfa_io_submit( const array(SQE, have) & ); 507 \end{lstlisting} 508 where the generic @array@ contains an array of SQEs with a size that may be less than the request. 509 This offers more flexibility to users wanting to fully utilize all of the @io_uring@ features. 452 510 However, it is not the most user-friendly option. 453 It obviously imposes a strong dependency between user code and \texttt{io\_uring} but at the same time restricting users to usages that are compatible with how \CFA internally uses \texttt{io\_uring}. 454 455 511 It obviously imposes a strong dependency between user code and @io_uring@ but at the same time restricting users to usages that are compatible with how \CFA internally uses @io_uring@. -
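For perspective on how much machinery this last interface exposes, the fragment below performs a comparable read directly against @io_uring@ through the liburing helper library, outside of \CFA; it is included only as a point of comparison, the file name is arbitrary, and error handling is omitted.
\begin{lstlisting}
// Plain C, using liburing: allocate an SQE, fill it, submit, then reap the CQE.
#include <liburing.h>
#include <stdio.h>
#include <fcntl.h>

int main(void) {
	struct io_uring ring;
	char buf[4096];
	int fd = open( "/etc/hostname", O_RDONLY );

	io_uring_queue_init( 8, &ring, 0 );                     // create the submission/completion rings
	struct io_uring_sqe * sqe = io_uring_get_sqe( &ring );  // allocate an SQE
	io_uring_prep_read( sqe, fd, buf, sizeof(buf), 0 );     // fill in the request
	io_uring_sqe_set_data( sqe, buf );                      // user_data round-trips to the CQE
	io_uring_submit( &ring );                               // io_uring_enter

	struct io_uring_cqe * cqe;
	io_uring_wait_cqe( &ring, &cqe );                       // block for the completion
	printf( "read %d bytes\n", cqe->res );
	io_uring_cqe_seen( &ring, cqe );
	io_uring_queue_exit( &ring );
	return 0;
}
\end{lstlisting}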
doc/theses/thierry_delisle_PhD/thesis/text/practice.tex
r4f3807d r847bb6f 15 15 Programmers are mostly expected to resize clusters on startup or teardown. 16 16 Therefore dynamically changing the number of \procs is an appropriate moment to allocate or free resources to match the new state. 17 As such all internal arrays that are sized based on the number of \procs need to be \texttt{realloc}ed.17 As such all internal arrays that are sized based on the number of \procs need to be @realloc@ed. 18 18 This also means that any references into these arrays, pointers or indexes, may need to be fixed when shrinking\footnote{Indexes may still need fixing when shrinkingbecause some indexes are expected to refer to dense contiguous resources and there is no guarantee the resource being removed has the highest index.}. 19 19 … … 107 107 First some data structure needs to keep track of all \procs that are in idle sleep. 108 108 Because of idle sleep can be spurious, this data structure has strict performance requirements in addition to the strict correctness requirements. 109 Next, some tool must be used to block kernel threads \glspl{kthrd}, \eg \texttt{pthread\_cond\_wait}, pthread semaphores.109 Next, some tool must be used to block kernel threads \glspl{kthrd}, \eg @pthread_cond_wait@, pthread semaphores. 110 110 The complexity here is to support \at parking and unparking, timers, \io operations and all other \CFA features with minimal complexity. 111 111 Finally, idle sleep also includes a heuristic to determine the appropriate number of \procs to be in idle sleep an any given time. … … 117 117 In terms of blocking a \gls{kthrd} until some event occurs the linux kernel has many available options: 118 118 119 \paragraph{\ texttt{pthread\_mutex}/\texttt{pthread\_cond}}120 The most classic option is to use some combination of \texttt{pthread\_mutex} and \texttt{pthread\_cond}.121 These serve as straight forward mutual exclusion and synchronization tools and allow a \gls{kthrd} to wait on a \texttt{pthread\_cond}until signalled.122 While this approach is generally perfectly appropriate for \glspl{kthrd} waiting after eachother, \io operations do not signal \texttt{pthread\_cond}s.123 For \io results to wake a \proc waiting on a \texttt{pthread\_cond}means that a different \glspl{kthrd} must be woken up first, and then the \proc can be signalled.124 125 \subsection{\ texttt{io\_uring} and Epoll}126 An alternative is to flip the problem on its head and block waiting for \io, using \texttt{io\_uring} or even \texttt{epoll}.119 \paragraph{\lstinline{pthread_mutex}/\lstinline{pthread_cond}} 120 The most classic option is to use some combination of @pthread_mutex@ and @pthread_cond@. 121 These serve as straight forward mutual exclusion and synchronization tools and allow a \gls{kthrd} to wait on a @pthread_cond@ until signalled. 122 While this approach is generally perfectly appropriate for \glspl{kthrd} waiting after eachother, \io operations do not signal @pthread_cond@s. 123 For \io results to wake a \proc waiting on a @pthread_cond@ means that a different \glspl{kthrd} must be woken up first, and then the \proc can be signalled. 124 125 \subsection{\lstinline{io_uring} and Epoll} 126 An alternative is to flip the problem on its head and block waiting for \io, using @io_uring@ or even @epoll@. 127 127 This creates the inverse situation, where \io operations directly wake sleeping \procs but waking \proc from a running \gls{kthrd} must use an indirect scheme. 
128 128 This generally takes the form of creating a file descriptor, \eg, a dummy file, a pipe or an event fd, and using that file descriptor when \procs need to wake eachother. … … 132 132 \subsection{Event FDs} 133 133 Another interesting approach is to use an event file descriptor\cit{eventfd}. 134 This is a Linux feature that is a file descriptor that behaves like \io, \ie, uses \texttt{read} and \texttt{write}, but also behaves like a semaphore.134 This is a Linux feature that is a file descriptor that behaves like \io, \ie, uses @read@ and @write@, but also behaves like a semaphore. 135 135 Indeed, all read and writes must use 64bits large values\footnote{On 64-bit Linux, a 32-bit Linux would use 32 bits values.}. 136 Writes add their values to the buffer, that is arithmetic addition and not buffer append, and reads zero out the buffer and return the buffer values so far\footnote{This is without the \texttt{EFD\_SEMAPHORE} flag. This flags changes the behavior of \texttt{read} but is not needed for this work.}. 136 Writes add their values to the buffer, that is arithmetic addition and not buffer append, and reads zero out the buffer and return the buffer values so far\footnote{ 137 This is without the \lstinline{EFD_SEMAPHORE} flag. This flags changes the behavior of \lstinline{read} but is not needed for this work.}. 137 138 If a read is made while the buffer is already 0, the read blocks until a non-0 value is added. 138 What makes this feature particularly interesting is that \texttt{io\_uring} supports the \texttt{IORING\_REGISTER\_EVENTFD}command, to register an event fd to a particular instance.139 Once that instance is registered, any \io completion will result in \texttt{io\_uring}writing to the event FD.139 What makes this feature particularly interesting is that @io_uring@ supports the @IORING_REGISTER_EVENTFD@ command, to register an event fd to a particular instance. 140 Once that instance is registered, any \io completion will result in @io\_uring@ writing to the event FD. 140 141 This means that a \proc waiting on the event FD can be \emph{directly} woken up by either other \procs or incomming \io. 141 142 … … 172 173 This means that whichever entity removes idle \procs from the sleeper list must be able to do so in any order. 173 174 Using a simple lock over this data structure makes the removal much simpler than using a lock-free data structure. 174 The notification process then simply needs to wake-up the desired idle \proc, using \texttt{pthread\_cond\_signal}, \texttt{write}on an fd, etc., and the \proc will handle the rest.175 The notification process then simply needs to wake-up the desired idle \proc, using @pthread_cond_signal@, @write@ on an fd, etc., and the \proc will handle the rest. 175 176 176 177 \subsection{Reducing Latency} … … 190 191 The contention is mostly due to the lock on the list needing to be held to get to the head \proc. 191 192 That lock can be contended by \procs attempting to go to sleep, \procs waking or notification attempts. 192 The contentention from the \procs attempting to go to sleep can be mitigated slightly by using \texttt{try\_acquire}instead, so the \procs simply continue searching for \ats if the lock is held.193 The contentention from the \procs attempting to go to sleep can be mitigated slightly by using @try\_acquire@ instead, so the \procs simply continue searching for \ats if the lock is held. 193 194 This trick cannot be used for waking \procs since they are not in a state where they can run \ats. 
194 195 However, it is worth noting that notification does not strictly require accessing the list or the head \proc. 195 196 Therefore, contention can be reduced notably by having notifiers avoid the lock entirely and adding a pointer to the event fd of the first idle \proc, as in Figure~\ref{fig:idle2}. 196 To avoid contention between the notifiers, instead of simply reading the atomic pointer, notifiers atomically exchange it to \texttt{null} so only only notifier will contend on the system call. 197 To avoid contention between the notifiers, instead of simply reading the atomic pointer, notifiers atomically exchange it to @null@ so only one notifier will contend on the system call. 197 198 198 199 \begin{figure} … … 206 207 This can be done by adding what is effectively a benaphore\cit{benaphore} in front of the event fd. 207 208 A simple three state flag is added beside the event fd to avoid unnecessary system calls, as shown in Figure~\ref{fig:idle:state}. 208 The flag starts in state \texttt{SEARCH}, while the \proc is searching for \ats to run. 209 The \proc then confirms the sleep by atomically swaping the state to \texttt{SLEEP}. 210 If the previous state was still \texttt{SEARCH}, then the \proc does read the event fd. 211 Meanwhile, notifiers atomically exchange the state to \texttt{AWAKE} state. 212 if the previous state was \texttt{SLEEP}, then the notifier must write to the event fd. 209 The flag starts in state @SEARCH@, while the \proc is searching for \ats to run. 210 The \proc then confirms the sleep by atomically swapping the state to @SLEEP@. 211 If the previous state was still @SEARCH@, then the \proc does read the event fd. 212 Meanwhile, notifiers atomically exchange the state to @AWAKE@ state. 213 If the previous state was @SLEEP@, then the notifier must write to the event fd. 213 214 However, if the notify arrives almost immediately after the \proc marks itself idle, then both reads and writes on the event fd can be omitted, which reduces latency notably. 214 215 This leads to the final data structure shown in Figure~\ref{fig:idle}. -
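A minimal sketch of this three-state handshake in front of an event fd is shown below; the structure and routine names are illustrative, not the \CFA runtime's, and error handling is omitted.
\begin{lstlisting}
// Sketch of the SEARCH/SLEEP/AWAKE handshake in front of an eventfd.
#include <stdatomic.h>
#include <stdint.h>
#include <unistd.h>
#include <sys/eventfd.h>

enum { SEARCH, SLEEP, AWAKE };

struct idle_proc {
	atomic_int state;                    // SEARCH while looking for work, then SLEEP or AWAKE
	int efd;                             // event fd this processor blocks on
};

static void idle_init( struct idle_proc * p ) {
	atomic_init( &p->state, SEARCH );
	p->efd = eventfd( 0, 0 );            // can also be registered with IORING_REGISTER_EVENTFD
}

// Processor side: called after failing to find ready user-level threads.
static void idle_sleep( struct idle_proc * p ) {
	uint64_t v;
	if ( atomic_exchange( &p->state, SLEEP ) == SEARCH )
		read( p->efd, &v, sizeof(v) );   // no notification raced in: block on the event fd
	// otherwise a notifier already moved the state to AWAKE, so the read is skipped
	atomic_store( &p->state, SEARCH );   // woken (or never slept): resume searching
}

// Notifier side: wake this processor, paying the system call only if it already blocked.
static void notify( struct idle_proc * p ) {
	uint64_t one = 1;
	if ( atomic_exchange( &p->state, AWAKE ) == SLEEP )
		write( p->efd, &one, sizeof(one) );
	// if the processor was still in SEARCH, it notices AWAKE and skips its read entirely
}
\end{lstlisting}
When the notification arrives while the flag is still @SEARCH@, both the @read@ and the @write@ are elided, which is the latency saving described above.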
doc/theses/thierry_delisle_PhD/thesis/thesis.tex
r4f3807d r847bb6f 108 108 citecolor=OliveGreen, % color of links to bibliography 109 109 filecolor=magenta, % color of file links 110 urlcolor=cyan % color of external links 110 urlcolor=blue, % color of external links 111 breaklinks=true 111 112 } 112 113 \ifthenelse{\boolean{PrintVersion}}{ % for improved print quality, change some hyperref options