Changeset fb31cb8 for doc/proposals/concurrency/text
- Timestamp:
- Oct 17, 2017, 12:06:15 PM (7 years ago)
- Branches:
- ADT, aaron-thesis, arm-eh, ast-experimental, cleanup-dtors, deferred_resn, demangler, enum, forall-pointer-decay, jacob/cs343-translation, jenkins-sandbox, master, new-ast, new-ast-unique-expr, new-env, no_list, persistent-indexer, pthread-emulation, qualifiedEnum, resolv-new, with_gc
- Children:
- 633c711
- Parents:
- 0aaac0e
- Location:
- doc/proposals/concurrency/text
- Files:
- 2 edited
doc/proposals/concurrency/text/basics.tex
r0aaac0e → rfb31cb8

\begin{cfacode}
thread foo {};
\end{cfacode}

…

Obviously, for this thread implementation to be useful it must run some user code. Several other threading interfaces use a function-pointer representation as the interface of threads (for example \Csharp~\cite{Csharp} and Scala~\cite{Scala}). However, this proposal considers that statically tying a \code{main} routine to a thread supersedes this approach. Since the \code{main} routine is already a special routine in \CFA (where the program begins), it is a natural extension of the semantics to use overloading to declare mains for different threads (the normal main being the main of the initial thread). As such, the \code{main} routine of a thread can be defined as
\begin{cfacode}
thread foo {};

void main(foo & this) {
	sout | "Hello World!" | endl;
}
\end{cfacode}

In this example, threads of type \code{foo} start execution in the \code{void main(foo &)} routine, which prints \code{"Hello World!"}. While this thesis encourages this approach to enforce strongly-typed programming, users may prefer to use the routine-based thread semantics for the sake of simplicity. With these semantics it is trivial to write a thread type that takes a function pointer as a parameter and executes it on its stack asynchronously:
\begin{cfacode}
typedef void (*voidFunc)(int);

thread FuncRunner {
	voidFunc func;
	int arg;
};

void ?{}(FuncRunner & this, voidFunc inFunc, int arg) {
	this.func = inFunc;
	this.arg = arg;	//store the argument as well, so main can use it
}

void main(FuncRunner & this) {
	this.func( this.arg );
}
\end{cfacode}
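The following is a minimal usage sketch for \code{FuncRunner}; the \code{hello} routine and its argument are illustrative, and it assumes the \CFA convention, discussed elsewhere in this chapter, that a thread starts at the declaration of its variable and is joined when that variable goes out of scope:
\begin{cfacode}
void hello(int id) {
	sout | "Hello from runner" | id | endl;
}

int main() {
	//constructor stores the routine and argument;
	//the new thread then runs hello(42) asynchronously
	FuncRunner runner = { hello, 42 };

	//... the initial thread continues here ...
}	//runner is joined when it goes out of scope
\end{cfacode}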
doc/proposals/concurrency/text/concurrency.tex
r0aaac0e → rfb31cb8

A monitor is a set of routines that ensure mutual exclusion when accessing shared state. This concept is generally associated with Object-Oriented Languages like Java~\cite{Java} or \uC~\cite{uC++book} but does not strictly require OO semantics. The only requirement is the ability to declare a handle to a shared object and a set of routines that act on it:
\begin{cfacode}
typedef /*some monitor type*/ monitor;
int f(monitor & m);

int main() {
	monitor m;	//Handle m
	f(m);		//Routine using handle
}
\end{cfacode}

…

\begin{cfacode}
monitor counter_t { /*...see section $\ref{data}$...*/ };

void ?{}(counter_t & nomutex this);	//constructor
size_t ++?(counter_t & mutex this);	//increment

//need for mutex is platform dependent
void ?{}(size_t * this, counter_t & mutex cnt);	//conversion
\end{cfacode}
This counter is used as follows:

…

The capacity to acquire multiple locks before entering a critical section is called \emph{\gls{bulk-acq}}. In practice, writing multi-locking routines that do not lead to deadlocks is tricky. Having language support for such a feature is therefore a significant asset for \CFA. In the case presented above, \CFA guarantees that the order of acquisition is consistent across calls to routines using the same monitors as arguments. However, since \CFA monitors use \gls{multi-acq} locks, users can effectively force the acquiring order. For example, notice which routines use \code{mutex}/\code{nomutex} and how this affects the acquiring order:
\begin{cfacode}
void foo(A & mutex a, B & mutex b) {	//acquire a & b
	...
}

void bar(A & mutex a, B & /*nomutex*/ b) {	//acquire a
	... foo(a, b); ...	//acquire b
}

void baz(A & /*nomutex*/ a, B & mutex b) {	//acquire b
	... foo(a, b); ...	//acquire a
}
\end{cfacode}
The \gls{multi-acq} monitor lock allows a monitor lock to be acquired by both \code{bar} or \code{baz} and acquired again in \code{foo}. In the calls to \code{bar} and \code{baz}, the monitors are acquired in opposite order.

…

This example shows a trivial solution to the bank account transfer problem\cit. Without \gls{multi-acq} and \gls{bulk-acq}, the solution to this problem is much more involved and requires careful engineering.

\subsubsection{\code{mutex} statement} \label{mutex-stmt}

The call semantics discussed above have one software engineering issue: only a named routine can acquire the mutual exclusion of a set of monitors. \CFA offers the \code{mutex} statement to work around the need for unnecessary names, avoiding a major software engineering problem\cit. Listing \ref{lst:mutex-stmt} shows an example of the \code{mutex} statement, which introduces a new scope in which the mutual exclusion of a set of monitors is acquired. Beyond naming, the \code{mutex} statement has no semantic difference from a routine call with \code{mutex} parameters.
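As a further illustration, the unnamed critical section of the \code{mutex} statement combines naturally with \gls{bulk-acq}; the following sketch of the bank account transfer mentioned above acquires both accounts in a single statement (the \code{bank_account} type and its \code{balance} field are illustrative, not the thesis's own example):
\begin{cfacode}
monitor bank_account {
	int balance;
};

void transfer(bank_account & from, bank_account & to, int amount) {
	//both monitors are acquired together, so no intermediate state
	//is visible and no deadlock can arise from inconsistent ordering
	mutex(from, to) {
		from.balance -= amount;
		to.balance   += amount;
	}
}
\end{cfacode}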
\begin{figure}
\begin{center}
\begin{tabular}{|c|c|}
function call & \code{mutex} statement \\
\hline
\begin{cfacode}[tabsize=3]
monitor M {};
void foo( M & mutex m ) {
	//critical section
}

void bar( M & m ) {
	foo( m );
}
\end{cfacode}&\begin{cfacode}[tabsize=3]
monitor M {};
void bar( M & m ) {
	mutex(m) {
		//critical section
	}
}
\end{cfacode}
\end{tabular}
\end{center}
\caption{Regular call semantics vs. \code{mutex} statement}
\label{lst:mutex-stmt}
\end{figure}

% ======================================================================
% ======================================================================

…

\begin{cfacode}
monitor A {
	condition e;
};

void foo(A & mutex a) {
	...
	//Wait for cooperation from bar()
	wait(a.e);
	...
}

void bar(A & mutex a) {
	//Provide cooperation for foo()
	...
	//Unblock foo
	signal(a.e);
}
\end{cfacode}

…

% ======================================================================
% ======================================================================
It is easier to understand the problem of multi-monitor scheduling using a series of pseudo-code examples. Note that, for simplicity, waiting and signalling in the following snippets of pseudo-code are done using an implicit condition variable, like Java's built-in monitors. Indeed, \code{wait} statements always use a single condition as parameter and wait on the monitors associated with that condition.

\begin{multicols}{2}
…
\end{multicols}

Listing \ref{lst:int-bulk-pseudo} shows an example where \gls{bulk-acq} adds a significant layer of complexity to the internal signalling semantics. Listing \ref{lst:int-bulk-cfa} shows the corresponding \CFA code, which implements the pseudo-code in listing \ref{lst:int-bulk-pseudo}. Note that listing \ref{lst:int-bulk-cfa} uses a non-\code{mutex} parameter to introduce monitor \code{b} into context. However, for the purpose of translating the given pseudo-code into \CFA code, any method of introducing new monitors into context, other than a \code{mutex} parameter, is acceptable, e.g. global variables, pointer parameters, or locals with the \code{mutex} statement.
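For instance, the waiting-thread side could introduce monitor \code{b} through a global variable instead of a parameter; the following sketch shows this alternative (it is not the form used in listing \ref{lst:int-bulk-cfa}):
\begin{cfacode}
monitor A;
monitor B {};	//complete type needed to define a global instance
B b;
extern condition c;

void foo(A & mutex a) {
	//Code Section 1
	mutex(a, b) {	//monitor b enters the critical section as a global
		//Code Section 2
		wait(c);
		//Code Section 3
	}
	//Code Section 4
}
\end{cfacode}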
\begin{figure}[!b]
\begin{multicols}{2}
Waiting thread
…
\end{pseudo}
\end{multicols}
\caption{Internal scheduling with \gls{bulk-acq}}
\label{lst:int-bulk-pseudo}
\end{figure}

\begin{figure}[!b]
\begin{multicols}{2}
Waiting thread
\begin{cfacode}
monitor A;
monitor B;
extern condition c;
void foo(A & mutex a, B & b) {
	//Code Section 1
	mutex(a, b) {
		//Code Section 2
		wait(c);
		//Code Section 3
	}
	//Code Section 4
}
\end{cfacode}

\columnbreak

Signalling thread
\begin{cfacode}
monitor A;
monitor B;
extern condition c;
void foo(A & mutex a, B & b) {
	//Code Section 5
	mutex(a, b) {
		//Code Section 6
		signal(c);
		//Code Section 7
	}
	//Code Section 8
}
\end{cfacode}
\end{multicols}
\caption{Equivalent \CFA code for listing \ref{lst:int-bulk-pseudo}}
\label{lst:int-bulk-cfa}
\end{figure}

It is particularly important to pay attention to code sections 4 and 8, which are where the existing semantics of internal scheduling need to be extended for multiple monitors. The root of the problem is that \gls{bulk-acq} is used in a context where one of the monitors is already acquired, which is why it is important to define the behaviour of the previous pseudo-code. When the signaller thread reaches the location where it should ``release A \& B'' (line 16), it must actually transfer ownership of monitor B to the waiting thread. This ownership transfer is required in order to prevent barging. Since the signalling thread still needs monitor A, simply waking up the waiting thread is not an option because it would violate mutual exclusion. There are three options.

…

Thread 3
\begin{pseudo}[numbers=left, firstnumber=9]
acquire A
acquire A & B

…

\end{center}
\label{fig:dependency}
\caption{Dependency graph of the statements in listing \ref{lst:dependency}}
\end{figure}

Listing \ref{lst:dependency} shows the three-thread example rewritten for dependency graphs, as well as the corresponding dependency graph. Figure \ref{fig:dependency} shows the dependency graph that results, where every node is a statement of one of the three threads and the arrows are the dependencies of that statement. The extra challenge is that this dependency graph is effectively post-mortem, but the run-time system needs to be able to build and solve these graphs as the dependency unfolds. Resolving the dependency graph being a complex and expensive endeavour, this solution is not the preferred one.
\subsubsection{Partial signalling} \label{partial-sig}

…

\begin{cfacode}
monitor A {};

void f(A & mutex a);
void g(A & mutex a) {
	waitfor(f);	//Obvious which f() to wait for
}

void f(A & mutex a, int);	//New, different f added in scope
void h(A & mutex a) {
	waitfor(f);	//Less obvious which f() to wait for
}
\end{cfacode}

…

External scheduling, like internal scheduling, becomes significantly more complex when introducing multi-monitor syntax. Even in the simplest possible case, some new semantics need to be established:
\begin{cfacode}
monitor M {};

void f(M & mutex a);

void g(M & mutex a, M & mutex b) {
	waitfor(f);	//ambiguous: keep a, pass b, or the other way around?
}
\end{cfacode}

…

\begin{cfacode}
monitor M {};

void f(M & mutex a);

void g(M & mutex a, M & mutex b) {
	waitfor( f, b );
}
\end{cfacode}

This syntax is unambiguous. Both locks are acquired and kept. When routine \code{f} is called, the lock for monitor \code{b} is temporarily transferred from \code{g} to \code{f} (while \code{g} still holds lock \code{a}). This behaviour can be extended to the multi-monitor \code{waitfor} statement as follows.

\begin{cfacode}
monitor M {};

void f(M & mutex a, M & mutex b);

void g(M & mutex a, M & mutex b) {
	waitfor( f, a, b );
}
\end{cfacode}

…

\begin{cfacode}
mutex struct A {};

mutex struct B {};

void f(A & mutex a, B & mutex b);	//declaration of the accepted routine

void g(A & mutex a, B & mutex b) {
	waitfor(f, a, b);
}

A a1, a2;
B b;

void foo() {
	g(a1, b);	//block on accept
}

void bar() {
	f(a2, b);	//fulfill cooperation
}
\end{cfacode}

…

% ======================================================================
% ======================================================================
\subsection{\code{waitfor} semantics}
% ======================================================================
% ======================================================================

Syntactically, the \code{waitfor} statement takes a function identifier and a set of monitors. While the set of monitors can be any list of expressions, the function name is more restricted because the compiler validates the \code{waitfor} statement at compile time: it checks that the set of monitors passed in matches the requirements for a call to that function. Listing \ref{lst:waitfor} shows various uses of the \code{waitfor} statement and indicates which are acceptable.
The choice of the function type is made ignoring any non-\code{mutex} parameters. One limitation of the current implementation is that it does not handle overloading.
\begin{figure}
\begin{cfacode}
monitor A{};
monitor B{};

void f1( A & mutex );
void f2( A & mutex, B & mutex );
void f3( A & mutex, int );
void f4( A & mutex, int );
void f4( A & mutex, double );

void foo( A & mutex a1, A & mutex a2, B & mutex b1, B & b2 ) {
	A * ap = & a1;
	void (*fp)( A & mutex ) = f1;

	waitfor(f1, a1);	//Correct : 1 monitor case
	waitfor(f2, a1, b1);	//Correct : 2 monitor case
	waitfor(f3, a1);	//Correct : non-mutex arguments are ignored
	waitfor(f1, *ap);	//Correct : expression as argument

	waitfor(f1, a1, b1);	//Incorrect : Too many mutex arguments
	waitfor(f2, a1);	//Incorrect : Too few mutex arguments
	waitfor(f2, a1, a2);	//Incorrect : Mutex arguments don't match
	waitfor(f1, 1);		//Incorrect : 1 not a mutex argument
	waitfor(f9, a1);	//Incorrect : f9 not a function
	waitfor(*fp, a1 );	//Incorrect : fp not an identifier
	waitfor(f4, a1);	//Incorrect : f4 ambiguous

	waitfor(f2, a1, b2);	//Undefined Behaviour : b2 may not be acquired
}
\end{cfacode}
\caption{Various correct and incorrect uses of the waitfor statement}
\label{lst:waitfor}
\end{figure}

Finally, for added flexibility, \CFA supports constructing complex \code{waitfor} masks using the \code{or}, \code{timeout} and \code{else} clauses. Indeed, multiple \code{waitfor} clauses can be chained together using \code{or}; this chain forms a single statement that baton-passes to any one function matching one of the function+monitor sets passed in. To enable users to tell which function was accepted, each \code{waitfor} is followed by a statement (including the null statement \code{;}) or a compound statement. When multiple \code{waitfor} clauses are chained together, only the statement corresponding to the accepted function is executed. A \code{waitfor} chain can also be followed by a \code{timeout}, to signify an upper bound on the wait, or an \code{else}, to signify that the call should be non-blocking, that is, it only checks whether a matching function call has already arrived and returns immediately otherwise. Any and all of these clauses can be preceded by a \code{when} condition to dynamically construct the mask based on some current state. Listing \ref{lst:waitfor2} demonstrates several complex masks and some incorrect ones.
\begin{figure}
\begin{cfacode}
monitor A{};

void f1( A & mutex );
void f2( A & mutex );

void foo( A & mutex a, bool b, int t ) {
	//Correct : blocking case
	waitfor(f1, a);

	//Correct : block with statement
	waitfor(f1, a) {
		sout | "f1" | endl;
	}

	//Correct : block waiting for f1 or f2
	waitfor(f1, a) {
		sout | "f1" | endl;
	} or waitfor(f2, a) {
		sout | "f2" | endl;
	}

	//Correct : non-blocking case
	waitfor(f1, a); or else;

	//Correct : non-blocking case
	waitfor(f1, a) {
		sout | "blocked" | endl;
	} or else {
		sout | "didn't block" | endl;
	}

	//Correct : block at most 10 seconds
	waitfor(f1, a) {
		sout | "blocked" | endl;
	} or timeout( 10`s ) {
		sout | "didn't block" | endl;
	}

	//Correct : block only if b == true
	//if b == false, don't even make the call
	when(b) waitfor(f1, a);

	//Correct : block only if b == true
	//if b == false, make non-blocking call
	waitfor(f1, a); or when(!b) else;

	//Correct : block only if t > 1
	waitfor(f1, a); or when(t > 1) timeout(t); or else;

	//Incorrect : timeout clause is dead code
	waitfor(f1, a); or timeout(t); or else;

	//Incorrect : order must be
	//waitfor [or waitfor... [or timeout] [or else]]
	timeout(t); or waitfor(f1, a); or else;
}
\end{cfacode}
\caption{Various correct and incorrect uses of the or, else, and timeout clauses around a waitfor statement}
\label{lst:waitfor2}
\end{figure}
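Putting these clauses together, the following sketch shows how a one-slot buffer monitor might use \code{when}-guarded \code{waitfor} clauses to accept \code{put} only when empty and \code{take} only when full; the \code{buffer_t} type and routine names are illustrative and not part of the preceding listings:
\begin{cfacode}
monitor buffer_t {
	int elem;
	bool full;
};

void ?{}(buffer_t & this) { this.full = false; }	//start empty

void put(buffer_t & mutex b, int v) { b.elem = v; b.full = true; }
int take(buffer_t & mutex b) { b.full = false; return b.elem; }

//accept exactly the call that the current state allows
void run(buffer_t & mutex b) {
	for( ;; ) {
		when( !b.full ) waitfor( put, b );
		or when( b.full ) waitfor( take, b );
	}
}
\end{cfacode}
Each iteration of \code{run} blocks until the single permitted call arrives, so callers of \code{put} and \code{take} are forced to alternate without any explicit condition variables.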