% ======================================================================
% ======================================================================
\chapter{Future Work}
% ======================================================================
% ======================================================================

\section{Flexible Scheduling} \label{futur:sched}


\section{Non-Blocking IO} \label{futur:nbio}
While most of the parallelism tools presented in this thesis target workloads that are bound on computation, many modern workloads are instead bound on IO operations, a common case being webservers and XaaS (anything as a service). These types of workloads often require significant engineering to amortise the cost of blocking IO operations. While improving the throughput of these operations is outside of what \CFA can do as a language, it can help users make better use of the CPU time otherwise spent waiting on IO operations. The current trend is to use asynchronous programming with tools like callbacks and/or futures and promises\cit. However, while these are valid solutions, they lead to code that is harder to read and maintain because it is much less linear.
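To illustrate this problem, the following sketch contrasts a linear, blocking request handler with a callback-based equivalent. The routines \texttt{async\_read}, \texttt{async\_write}, \texttt{process} and the two handlers are hypothetical and serve only to show how the control flow fragments; they do not correspond to an existing \CFA or POSIX interface, and error handling is elided.
\begin{cfacode}[tabsize=3]
#include <stdlib.h>
#include <unistd.h>

// Application logic; declared only, the body is irrelevant here.
void process( char * buf, ssize_t n );

// Hypothetical asynchronous IO interface: start the operation, return
// immediately, and invoke the callback with the request once it completes.
struct request { int fd; char buf[1024]; };
void async_read ( int fd, char * buf, size_t len,
   void (*cb)( struct request *, ssize_t ), struct request * req );
void async_write( int fd, char * buf, size_t len,
   void (*cb)( struct request *, ssize_t ), struct request * req );

// Blocking version: the logic reads top to bottom.
void handle_blocking( int fd ) {
   char buf[1024];
   ssize_t n = read( fd, buf, sizeof(buf) );   // thread blocks here
   process( buf, n );
   write( fd, buf, n );                        // and here
}

// Callback version: the same logic is split across three routines and the
// intermediate state must be packaged and passed along by hand.
void on_write( struct request * req, ssize_t n ) {
   free( req );
}
void on_read( struct request * req, ssize_t n ) {
   process( req->buf, n );
   async_write( req->fd, req->buf, n, on_write, req );
}
void handle_async( int fd ) {
   struct request * req = malloc( sizeof(struct request) );
   req->fd = fd;
   async_read( fd, req->buf, sizeof(req->buf), on_read, req );   // returns immediately
}
\end{cfacode}
Even in this small example, the asynchronous version must package its intermediate state in an explicit request object and split its logic over two callbacks, while the blocking version reads top to bottom.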



\section{Other Concurrency Tools} \label{futur:tools}


\section{Implicit Threading} \label{futur:implcit}
Simpler applications can benefit greatly from having implicit parallelism, that is, parallelism that does not rely on the user to write concurrency. This type of parallelism can be achieved both at the language level and at the library level. The canonical example of implicit parallelism is parallel for loops, which are the simplest example of a divide-and-conquer algorithm\cit. Listing \ref{lst:parfor} shows three different code examples that accomplish pointwise sums of large arrays. Note that none of these examples explicitly declare any concurrency or parallelism objects.

\begin{figure}
\begin{center}
\begin{tabular}[t]{|c|c|c|}
Sequential & Library Parallel & Language Parallel \\
\begin{cfacode}[tabsize=3]
void big_sum(
   int* a, int* b,
   int* o,
   size_t len)
{
   for(
      size_t i = 0;
      i < len;
      ++i )
   {
      o[i]=a[i]+b[i];
   }
}





int a[10000];
int b[10000];
int c[10000];
//... fill in a & b
big_sum(a,b,c,10000);
\end{cfacode} &\begin{cfacode}[tabsize=3]
void big_sum(
   int* a, int* b,
   int* o,
   size_t len)
{
   range ar(a, a+len);
   range br(b, b+len);
   range or(o, o+len);
   parfor( ar, br, or,
      []( int* ai,
          int* bi,
          int* oi)
      {
         *oi = *ai + *bi;
      });
}


int a[10000];
int b[10000];
int c[10000];
//... fill in a & b
big_sum(a,b,c,10000);
\end{cfacode}&\begin{cfacode}[tabsize=3]
void big_sum(
   int* a, int* b,
   int* o,
   size_t len)
{
   parfor (ai,bi,oi)
      in (a, b, o)
   {
      oi = ai + bi;
   }
}







int a[10000];
int b[10000];
int c[10000];
//... fill in a & b
big_sum(a,b,c,10000);
\end{cfacode}
\end{tabular}
\end{center}
\caption{For loop to sum numbers: sequential version, library-level parallelism, and language-level parallelism.}
\label{lst:parfor}
\end{figure}

Implicit parallelism is a general solution and therefore is

\section{Multiple Paradigms} \label{futur:paradigms}


\section{Transactions} \label{futur:transaction}
Concurrency and parallelism remain very active fields that strongly benefit from hardware advances. As such, certain features that are not necessarily mature enough in their current state could become relevant in the lifetime of \CFA.
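
Memory transactions are one such feature. As a point of reference only, the following sketch uses GNU C's experimental transactional-memory extension (enabled with \texttt{-fgnu-tm}); it is not a \CFA feature, and whether \CFA would adopt anything similar is an open question. A transaction groups several shared-memory updates so they appear to commit atomically, without naming an explicit lock.
\begin{cfacode}[tabsize=3]
// Sketch of GNU C transactional memory (compile with: gcc -fgnu-tm).
// Whether and how CFA could expose a similar feature is future work.
static int count = 0;
static int total = 0;

void record( int value ) {
   __transaction_atomic {
      // both updates commit together, or not at all,
      // with respect to other transactions
      count += 1;
      total += value;
   }
}
\end{cfacode}
Hardware and compiler support for such transactions is still maturing, which is why they are listed here as future work rather than as part of the current design.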