Timestamp:
Apr 28, 2026, 5:09:43 AM
Author:
Michael Brooks <mlbrooks@…>
Branches:
master
Children:
5faa3a5
Parents:
9a35b43
Message:

tidy list 1ord discussion with refreshed numbers and stronger commentary

File:
1 edited

  • doc/theses/mike_brooks_MMath/list.tex

--- doc/theses/mike_brooks_MMath/list.tex (r9a35b43)
+++ doc/theses/mike_brooks_MMath/list.tex (rbf8112b)
@@ -1379,9 +1379,9 @@
 
 \VRef[Figure]{fig:plot-list-1ord} gives the first-order effects.
-The first breakdown, architecture/size-zone (left), shows the overall performance of all 12 experiment on the two different hardware architectures for small and medium lists (624 / 4 = 156 experiments per column).
+The first breakdown, architecture/size-zone (left), shows the overall performance of all configurations, split by the two different hardware architectures and by small \vs medium lists (624 / 4 = 156 experiments per column).
 % The relative experiment duration for each experiment is shown as a bar in each column and the black bar in that column shows the average of all 12 experiments.
 By inspection of the averages, Intel runs faster than AMD.
 Within an architecture, the small zone (lists of 4--16 elements) runs faster than the medium zone (lists of 50--200 elements).
-The overall slower execution on the AMD results from its smaller L3 cache \vs the larger cache on the Intel.
+The overall slower execution on the AMD results from its smaller cache \vs the larger cache on the Intel.
 (No NUMA effects for these list sizes.)
 Specifically, a 20\% standard deviation exists here, between the means of the four physical-effect categories.
@@ -1393,5 +1393,5 @@
 The second breakdown, use case (middle), shows the overall performance for each of the 12 use cases from \VRef[Figure]{f:ExperimentOperations} (624 / 12 = 52 experiments per column).
 % A similar situation comes from \VRef[Figure]{fig:plot-list-1ord}'s second comparison, by use case.
-While specific differences do occur, like framework X doing better on stacks than on queues, the overall range of the standard deviation of the individual use cases' means is only 9\%, indicating no unusual cases.
+The standard deviation of the individual use cases' means is 10\%.
 A more detailed analysis occurs in the discussion of \VRef[Figure]{fig:plot-list-2ord}.
 % But they are so irrelevant to the issue of picking a winning framework that it is sufficient here to number the use cases opaquely.
@@ -1401,10 +1401,10 @@
 The third breakdown, framework (right), shows the overall performance of the 4 list implementations (624 / 3.25 = 192).
 Here, \CFA runs similarly to \uCpp and LQ-@list@ runs similarly to @tailq@.
-The standard deviation of the frameworks' means is 8\%.
+The standard deviation of the frameworks' means is 7\%.
 % Framework choice has, therefore, less impact on your speed than the lottery tickets you already hold.
 Now, \CFA/\uCpp run slower than LQ-@list@/@tailq@ by 15\%, a fact explored further in \VRef{s:SweetSoreSpots}.
 But so too does use case X typically beat use case II by 38\%.
 As does a small size on the Intel typically beat a medium size on the AMD by 66\%.
-Hence, architecture and usage patterns have a significant affect on the specific framework.
+Hence, architecture and usage pattern have a more significant effect on speed than the selection of a framework.
 
 
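The changed lines quote several "standard deviation of the means" figures (20\%, 10\%, 7\%). As a minimal sketch, assuming the statistic is the population standard deviation of the per-category mean durations, expressed relative to the grand mean (an assumption; the thesis may normalize differently), it could be computed like this, with hypothetical numbers:

```python
# Illustrative sketch with HYPOTHETICAL category means -- not data from
# the thesis. Shows how a relative standard deviation across the means
# of the four architecture/size-zone categories could be computed.
import statistics

# Hypothetical mean relative durations for the four categories:
# Intel/small, Intel/medium, AMD/small, AMD/medium
category_means = [0.78, 0.82, 1.18, 1.22]

grand_mean = statistics.fmean(category_means)
spread = statistics.pstdev(category_means)   # population standard deviation
relative_sd = spread / grand_mean            # as a fraction of the grand mean

print(f"relative standard deviation: {relative_sd:.0%}")  # -> 20%
```

With these made-up inputs the result happens to match the 20\% quoted for the architecture/size-zone breakdown; the point is only the shape of the calculation, not the data.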