Changeset 21a5dde1
- Timestamp:
- Jul 20, 2017, 11:33:59 PM
- Branches:
- ADT, aaron-thesis, arm-eh, ast-experimental, cleanup-dtors, deferred_resn, demangler, enum, forall-pointer-decay, jacob/cs343-translation, jenkins-sandbox, master, new-ast, new-ast-unique-expr, new-env, no_list, persistent-indexer, pthread-emulation, qualifiedEnum, resolv-new, with_gc
- Children:
- 6d54c3a
- Parents:
- dab7ac7 (diff), e1e4aa9 (diff)
Note: this is a merge changeset, the changes displayed below correspond to the merge itself.
Use the (diff) links above to see all the changes relative to each parent.
- Files:
- 2 deleted
- 30 edited
- doc/proposals/virtual.txt
rdab7ac7 r21a5dde1 1 1 Proposal for virtual functionality 2 3 There are two types of virtual inheritance in this proposal, relaxed 4 (implicit) and strict (explicit). Relaxed is the simpler case that uses the 5 existing trait system with the addition of trait references and vtables. 6 Strict adds some constraints and requires some additional notation but allows 7 for down-casting. 8 9 Relaxed Virtual Inheritance: 2 10 3 11 Imagine the following code : … … 20 28 void draw(line*); 21 29 22 While all the members of this simple UI support drawing creating a UI that easily23 supports both these UI requires some tedious boiler-plate code:30 While all the members of this simple UI support drawing, creating a UI that 31 easily supports both these UI requires some tedious boiler-plate code: 24 32 25 33 enum type_t { text, line }; … … 41 49 } 42 50 43 While this code will work as indented, adding any new widgets or any new widget behaviors 44 requires changing existing code to add the desired functionality. To ease this maintenance 45 effort required CFA introduces the concept of dynamic types, in a manner similar to C++. 46 47 A simple usage of dynamic type with the previous example would look like : 48 49 drawable* objects[10]; 51 While this code will work as implemented, adding any new widgets or any new 52 widget behaviors requires changing existing code to add the desired 53 functionality. To ease this maintenance effort required CFA introduces the 54 concept of trait references. 55 56 Using trait references to implement the above gives the following : 57 58 trait drawable objects[10]; 50 59 fill_objects(objects); 51 60 52 61 while(running) { 53 for(drawable *object : objects) {62 for(drawable object : objects) { 54 63 draw(object); 55 64 } 56 65 } 57 66 58 However, this is not currently do-able in the current CFA and furthermore is not 59 possible to implement statically. 
Therefore we need to add a new feature to handle 60 having dynamic types like this (That is types that are found dynamically not types 61 that change dynamically). 62 63 C++ uses inheritance and virtual functions to find the 64 desired type dynamically. CFA takes inspiration from this solution. 65 66 What we really want to do is express the fact that calling draw() on a object 67 should find the dynamic type of the parameter before calling the routine, much like the 68 hand written example given above. We can express this by adding the virtual keyword on 69 the parameter of the constraints on our trait: 67 The keyword trait is optional (by the same rules as the struct keyword). This 68 is not currently supported in CFA and the lookup is not possible to implement 69 statically. Therefore we need to add a new feature to handle having dynamic 70 lookups like this. 71 72 What we really want to do is express the fact that calling draw() on a trait 73 reference should find the underlying type of the given parameter and find how 74 it implements the routine, as in the example with the enumeration and union. 75 76 For instance specifying that the drawable trait reference looks up the type 77 of the first argument to find the implementation would be : 70 78 71 79 trait drawable(otype T) { … … 73 81 }; 74 82 75 This expresses the idea that drawable is similar to an abstract base class in C++ and 76 also gives meaning to trying to take a pointer of drawable. That is anything that can 77 be cast to a drawable pointer has the necessary information to call the draw routine on 78 that type. Before that drawable was only a abstract type while now it also points to a 79 piece of storage which specify which behavior the object will have at run time. 80 81 This storage needs to be allocate somewhere. 
C++ just adds an invisible pointer at 82 the beginning of the struct but we can do something more explicit for users, actually 83 have a visible special field : 84 85 struct text { 86 char* text; 87 vtable drawable; 88 }; 89 90 struct line{ 91 vtable drawable; 92 vec2 start; 93 vec2 end; 94 }; 95 96 With these semantics, adding a "vtable drawable" means that text pointers and line pointers are now 97 convertible to drawable pointers. This conversion will not necessarily be a type only change however, indeed, 98 the drawable pointer will point to the field "vtable drawable" not the head of the struct. However, since all 99 the types are known at compile time, converting pointers becomes a simple offset operations. 100 101 The vtable field contains a pointer to a vtable which contains all the information needed for the caller 102 to find the function pointer of the desired behavior. 103 104 One of the limitations of this design is that it does not support double dispatching, which 105 concretely means traits cannot have routines with more than one virtual parameter. This design 106 would have many ambiguities if it did support multiple virtual parameter. A futher limitation is 107 that traits over more than one type cannot have vtables meaningfully defined for them, as the 108 particular vtable to use would be a function of the other type(s) the trait is defined over. 109 110 It is worth noting that the function pointers in these vtables are bound at object construction, rather than 111 function call-site, as in Cforall's existing polymorphic functions. As such, it is possible that two objects 112 with the same static type would have a different vtable (consider what happens if draw(line*) is overridden 113 between the definitions of two line objects). Given that the virtual drawable* erases static types though, 114 this should not be confusing in practice. 
A more distressing possibility is that of creating an object that 115 outlives the scope of one of the functions in its vtable. This is certainly a possible bug, but it is of a 116 type that C programmers are familiar with, and should be able to avoid by the usual methods. 117 118 Extensibility. 119 120 One of the obvious critics of this implementation is that it lacks extensibility for classes 121 that cannot be modified (ex: Linux C headers). However this solution can be extended to 122 allow more extensibility by adding "Fat pointers". 123 124 Indeed, users could already "solve" this issue by writing their own fat pointers as such: 125 126 trait MyContext(otype T) { 127 void* get_stack(virtual T*) 128 }; 129 130 void* get_stack(ucontext_t *context); 131 132 struct fat_ucontext_t { 133 vtable MyContext; 134 ucontext_t *context; 135 } 136 137 //Tedious forwarding routine 138 void* get_stack(fat_ucontext_t *ptr) { 139 return get_stack(ptr->context); 140 } 141 142 However, users would have to write all the virtual methods they want to override and make 143 them all simply forward to the existing method that takes the corresponding POCO(Plain Old C Object). 144 145 The alternative we propose is to use language level fat pointers : 146 147 trait MyContext(otype T) { 148 void* get_stack(virtual T*) 149 }; 150 151 void* get_stack(ucontext_t *context); 152 153 //The type vptr(ucontext_t) all 154 vptr(ucontext_t) context; 155 156 These behave exactly as the previous example but all the forwarding routines are automatically generated. 157 158 Bikeshedding. 159 160 It may be desirable to add fewer new keywords than discussed in this proposal; it is possible that "virtual" 161 could replace both "vtable" and "vptr" above with unambiguous contextual meaning. However, for purposes of 162 clarity in the design discussion it is beneficial to keep the keywords for separate concepts distinct. 
163 83 This could be implied in simple cases like this one (single parameter on the 84 trait and single generic parameter on the function). In more complex cases it 85 would have to be explicitly given, or a strong convention would have to be 86 enforced (e.g. implementation of trait functions is always drawn from the 87 first polymorphic parameter). 88 89 Once a function in a trait has been marked as virtual it defines a new 90 function that takes in that trait's reference and then dynamically calls the 91 underlying type implementation. Hence a trait reference becomes a kind of 92 abstract type; it cannot be directly instantiated but can still be used. 93 94 One of the limitations of this design is that it does not support double 95 dispatching, which concretely means traits cannot have routines with more than 96 one virtual parameter. The program must have a single table to look up the 97 function on. Using trait references with traits with more than one parameter 98 is also restricted (initially forbidden); see the extension below. 99 100 Extension: Multi-parameter Virtual Traits: 101 102 This implementation can be extended to traits with multiple parameters if 103 one is called out as being the virtual trait. For example : 104 105 trait iterator(otype T, dtype Item) { 106 Maybe(Item) next(virtual T *); 107 } 108 109 iterator(int) generators[10]; 110 111 This creates a collection of iterators that produce integers, regardless of 112 how those iterators are implemented. This may require a note that this trait 113 is virtual on T and not Item, but noting it on the functions may be enough. 114 115 116 Strict Virtual Inheritance: 117 118 One powerful feature relaxed virtual does not support is the idea of down 119 casting. Once something has been converted into a trait reference there is 120 very little we can do to recover any of the type information; only the trait's 121 required function implementations are kept. 
122 123 To allow down casting strict virtual requires that all traits and structures 124 involved be organized into a tree. Each trait or struct must have a unique 125 position on this tree (no multiple inheritance). 126 127 This is declared as follows : 128 129 trait error(otype T) virtual { 130 const char * msg(T *); 131 } 132 133 trait io_error(otype T) virtual error { 134 FILE * src(T *); 135 } 136 137 struct eof_error virtual io_error { 138 FILE * fd; 139 }; 140 141 So the trait error is the head of a new tree and io_error is a child of it. 142 143 Also the parent trait is implicitly part of the assertions of the children, 144 so all children implement the same operations as the parent. By the unique 145 path down the tree, we can also uniquely order them so that a prefix of a 146 child's vtable has the same format as its parent's. 147 148 This gives us an important extra feature, runtime checking of the parent-child 149 relationship with a C++ dynamic_cast like operation. Allowing checked 150 conversions from trait references to more particular references, which works 151 if the underlying type is, or is a child of, the new trait type. 152 153 Extension: Multiple Parents 154 155 Although each trait/struct must have a unique position on each tree, it could 156 have positions on multiple trees. All this requires is the ability to give 157 multiple parents, as here : 158 159 trait region(otype T) virtual drawable, collider; 160 161 The restriction being, the parents must come from different trees. This 162 object (and all of its children) can be cast to either tree. This is handled 163 by generating a separate vtable for each tree the structure is in. 164 165 Extension: Multi-parameter Strict Virtual 166 167 If a trait has multiple parameters then one must be called out to be the one 168 we generate separate vtables for, as in : 169 170 trait example(otype T, otype U) virtual(T) ... 
171 172 This can generate a separate vtable for each U for which all the T+U 173 implementations are provided. These are then separate nodes in the tree (or 174 the root of different trees) as if each was created individually. Providing a 175 single unique instance of these nodes would be the most difficult aspect of 176 this extension, possibly intractable, though with sufficient hoisting and 177 link-once duplication it may be possible. 178 179 Example: 180 181 trait argument(otype T) virtual { 182 char short_name(virtual T *); 183 bool is_set(virtual T *); 184 }; 185 186 trait value_argument(otype T, otype U) virtual(T) argument { 187 U get_value(virtual T *); 188 }; 189 190 Extension: Structural Inheritance 191 192 Currently traits must be the internal nodes and structs the leaf nodes. 193 Structs could be made internal nodes as well, in which case the child structs 194 would likely structurally inherit the fields of their parents. 195 196 197 Storing the Virtual Lookup Table (vtable): 198 199 We have so far been silent on how the vtable is created, stored and accessed. 200 201 Creation happens at compile time. Function pointers are found by using the 202 same best-match rules as elsewhere (additional rules for defaults from the 203 parent may or may not be required). For strict virtual this must happen at 204 global scope, with static functions forbidden, to ensure that a single unique 205 vtable is created. Similarly, there may have to be stricter matching rules 206 for the functions that go into the vtable, possibly requiring an exact match. 207 Relaxed virtual could relax both restrictions, if we allow different vtables 208 at different conversion (struct to trait reference) sites. If that is allowed, 209 local functions bound to a vtable could cause issues when they go out 210 of scope; however, this should follow the lifetime rules most C programs 211 already follow implicitly. 
212 213 Most vtables should be stored statically, the only exception being some of 214 the relaxed vtables that could have local function pointers. These may be able 215 to be stack allocated. All vtables should be immutable and require no manual 216 cleanup. 217 218 Access has two main options: 219 220 The first is through the use of fat pointers, or a tuple of pointers. When the 221 object is converted to a trait reference, the pointers to its vtables are 222 stored along side it. 223 224 This allows for compatibility with existing structures (such as those imported 225 from C) and is the default storage method unless a different one is given. 226 227 The other is by inlining the vtable pointer as "intrusive vtables". This adds 228 a field to the structure to the vtable. The trait reference then has a single 229 pointer to this field, the vtable includes an offset to find the beginning of 230 the structure again. 231 232 This is used if you specify a vtable field in the structure. If given in the 233 trait the vtable pointer in the trait reference can then become a single 234 pointer to the vtable field and use that to recover the original object 235 pointer as well as retrieve all operations. 236 237 trait drawable(otype T) { 238 vtable drawable; 239 }; 240 241 struct line { 242 vtable drawable; 243 vec2 start; 244 vec2 end; 245 }; 246 247 This inline code allows trait references to be converted to plain pointers 248 (although they still must be called specially). The vtable field may just be 249 an opaque block of memory or it may allow user access to the vtable. If so 250 then there should be some way to retrieve the type of the vtable, which will be 251 autogenerated and often unique. 252 253 254 Keyword Usage: 255 256 It may be desirable to add fewer new keywords than discussed in this proposal. 257 It is possible that "virtual" could replace both "vtable" above with 258 unambiguous contextual meaning. 
However, for purposes of clarity in the design 259 discussion it is beneficial to keep the keywords for separate concepts distinct. 260 261 262 Trait References and Operations: 263 264 sizeof(drawable) will return the size of the trait object itself. However : 265 266 line a_line; 267 drawable widget = a_line; 268 sizeof(widget); 269 270 Will instead return the sizeof the underlying object, although the trait must 271 require that its implementation is sized for there to be a meaningful value 272 to return. You may also get the size of the trait reference with 273 274 sizeof(&widget); 275 276 Calling free on a trait reference will free the memory for the object. It will 277 leave the vtables alone, as those are (always?) statically allocated. -
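The explicit vtable field described in virtual.txt can already be hand-rolled in today's C, which makes the proposed semantics concrete. The following is a minimal sketch, not the proposal's generated code; all names (`drawable_vtable`, `text_widget`, `last_drawn`) are invented for illustration:

```c
#include <string.h>

/* Hand-rolled sketch of the proposal's explicit "vtable drawable" field:
 * each concrete type carries a pointer to a per-type table of function
 * pointers, and the generic draw() dispatches through it. */
typedef struct drawable_vtable {
    void (*draw)(void *self);          /* one slot per trait function */
} drawable_vtable;

typedef struct drawable {
    const drawable_vtable *vtable;     /* what "vtable drawable" would emit */
} drawable;

typedef struct text_widget {
    drawable base;                     /* vtable field at a known offset */
    const char *text;
} text_widget;

static const char *last_drawn;         /* records the last draw, for the demo */

static void draw_text(void *self) {
    text_widget *t = self;             /* base is the first member, so the
                                          drawable* is also a text_widget* */
    last_drawn = t->text;
}

/* One statically allocated vtable per concrete type. */
static const drawable_vtable text_widget_vtable = { draw_text };

/* The generic routine: the behaviour is found through the object, not the
 * static type, which is the dynamic lookup the proposal asks for. */
static void draw(drawable *d) {
    d->vtable->draw(d);
}
```

A caller writes `text_widget w = { { &text_widget_vtable }, "hi" }; draw(&w.base);` and `draw_text` is found through the table at run time. The proposal automates exactly this boilerplate: vtable generation, the pointer offset adjustments, and the forwarding routines.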
src/Common/VectorMap.h
rdab7ac7 r21a5dde1 5 5 // file "LICENCE" distributed with Cforall. 6 6 // 7 // ScopedMap.h --7 // VectorMap.h -- 8 8 // 9 9 // Author : Aaron B. Moss -
src/ControlStruct/ExceptTranslate.cc
rdab7ac7 r21a5dde1 10 10 // Created On : Wed Jun 14 16:49:00 2017 11 11 // Last Modified By : Andrew Beach 12 // Last Modified On : Wed Jul 12 15:07:00 201713 // Update Count : 312 // Last Modified On : Tus Jul 18 10:09:00 2017 13 // Update Count : 4 14 14 // 15 15 … … 50 50 LinkageSpec::Cforall, 51 51 /*bitfieldWidth*/ NULL, 52 new BasicType( emptyQualifiers, BasicType::SignedInt ),52 new BasicType( noQualifiers, BasicType::SignedInt ), 53 53 /*init*/ NULL 54 54 ); … … 59 59 /*bitfieldWidth*/ NULL, 60 60 new PointerType( 61 emptyQualifiers,62 new BasicType( emptyQualifiers, BasicType::SignedInt )61 noQualifiers, 62 new BasicType( noQualifiers, BasicType::SignedInt ) 63 63 ), 64 64 /*init*/ NULL … … 69 69 LinkageSpec::Cforall, 70 70 /*bitfieldWidth*/ NULL, 71 new BasicType( emptyQualifiers, BasicType::Bool),71 new BasicType(noQualifiers, BasicType::Bool), 72 72 /*init*/ NULL 73 73 ); … … 78 78 NULL, 79 79 new PointerType( 80 emptyQualifiers,80 noQualifiers, 81 81 new VoidType( 82 emptyQualifiers82 noQualifiers 83 83 ), 84 84 std::list<Attribute *>{new Attribute("unused")} … … 143 143 LinkageSpec::Cforall, 144 144 NULL, 145 new BasicType( emptyQualifiers, BasicType::SignedInt ),145 new BasicType( noQualifiers, BasicType::SignedInt ), 146 146 new SingleInit( throwStmt->get_expr() ) 147 147 ); … … 444 444 nullptr, 445 445 new StructInstType( 446 emptyQualifiers,446 noQualifiers, 447 447 hook_decl 448 448 ), -
src/GenPoly/Box.cc
rdab7ac7 r21a5dde1 202 202 }; 203 203 204 /// Replaces initialization of polymorphic values with alloca, declaration of dtype/ftype with appropriate void expression, and sizeof expressions of polymorphic types with the proper variable204 /// Replaces initialization of polymorphic values with alloca, declaration of dtype/ftype with appropriate void expression, sizeof expressions of polymorphic types with the proper variable, and strips fields from generic struct declarations. 205 205 class Pass3 final : public PolyMutator { 206 206 public: … … 210 210 using PolyMutator::mutate; 211 211 virtual DeclarationWithType *mutate( FunctionDecl *functionDecl ) override; 212 virtual Declaration *mutate( StructDecl *structDecl ) override; 213 virtual Declaration *mutate( UnionDecl *unionDecl ) override; 212 214 virtual ObjectDecl *mutate( ObjectDecl *objectDecl ) override; 213 215 virtual TypedefDecl *mutate( TypedefDecl *objectDecl ) override; … … 1868 1870 } 1869 1871 1872 /// Strips the members from a generic aggregate 1873 void stripGenericMembers(AggregateDecl* decl) { 1874 if ( ! decl->get_parameters().empty() ) decl->get_members().clear(); 1875 } 1876 1877 Declaration *Pass3::mutate( StructDecl *structDecl ) { 1878 stripGenericMembers( structDecl ); 1879 return structDecl; 1880 } 1881 1882 Declaration *Pass3::mutate( UnionDecl *unionDecl ) { 1883 stripGenericMembers( unionDecl ); 1884 return unionDecl; 1885 } 1886 1870 1887 TypeDecl * Pass3::mutate( TypeDecl *typeDecl ) { 1871 1888 // Initializer *init = 0; -
src/Parser/ExpressionNode.cc
rdab7ac7 r21a5dde1 9 9 // Author : Rodolfo G. Esteves 10 10 // Created On : Sat May 16 13:17:07 2015 11 // Last Modified By : Peter A. Buhr12 // Last Modified On : Sat Jul 15 16:09:04201713 // Update Count : 5 4911 // Last Modified By : Andrew Beach 12 // Last Modified On : Tus Jul 18 10:08:00 2017 13 // Update Count : 550 14 14 // 15 15 … … 46 46 // type. 47 47 48 Type::Qualifiers emptyQualifiers; // no qualifiers on constants48 Type::Qualifiers noQualifiers; // no qualifiers on constants 49 49 50 50 static inline bool checkU( char c ) { return c == 'u' || c == 'U'; } … … 118 118 } // if 119 119 120 Expression * ret = new ConstantExpr( Constant( new BasicType( emptyQualifiers, kind[Unsigned][size] ), str, v ) );120 Expression * ret = new ConstantExpr( Constant( new BasicType( noQualifiers, kind[Unsigned][size] ), str, v ) ); 121 121 delete &str; // created by lex 122 122 return ret; … … 153 153 } // if 154 154 155 Expression * ret = new ConstantExpr( Constant( new BasicType( emptyQualifiers, kind[complx][size] ), str, v ) );155 Expression * ret = new ConstantExpr( Constant( new BasicType( noQualifiers, kind[complx][size] ), str, v ) ); 156 156 delete &str; // created by lex 157 157 return ret; … … 159 159 160 160 Expression *build_constantChar( const std::string & str ) { 161 Expression * ret = new ConstantExpr( Constant( new BasicType( emptyQualifiers, BasicType::Char ), str, (unsigned long long int)(unsigned char)str[1] ) );161 Expression * ret = new ConstantExpr( Constant( new BasicType( noQualifiers, BasicType::Char ), str, (unsigned long long int)(unsigned char)str[1] ) ); 162 162 delete &str; // created by lex 163 163 return ret; … … 166 166 ConstantExpr *build_constantStr( const std::string & str ) { 167 167 // string should probably be a primitive type 168 ArrayType *at = new ArrayType( emptyQualifiers, new BasicType( Type::Qualifiers( Type::Const ), BasicType::Char ),168 ArrayType *at = new ArrayType( noQualifiers, new BasicType( Type::Qualifiers( 
Type::Const ), BasicType::Char ), 169 169 new ConstantExpr( Constant::from_ulong( str.size() + 1 - 2 ) ), // +1 for '\0' and -2 for '"' 170 170 false, false ); … … 176 176 177 177 Expression *build_constantZeroOne( const std::string & str ) { 178 Expression * ret = new ConstantExpr( Constant( str == "0" ? (Type *)new ZeroType( emptyQualifiers ) : (Type*)new OneType( emptyQualifiers ), str,178 Expression * ret = new ConstantExpr( Constant( str == "0" ? (Type *)new ZeroType( noQualifiers ) : (Type*)new OneType( noQualifiers ), str, 179 179 str == "0" ? (unsigned long long int)0 : (unsigned long long int)1 ) ); 180 180 delete &str; // created by lex -
src/Parser/TypeData.cc
rdab7ac7 r21a5dde1 10 10 // Created On : Sat May 16 15:12:51 2015 11 11 // Last Modified By : Andrew Beach 12 // Last Modified On : Fri Jul 14 16:58:00 201713 // Update Count : 56 512 // Last Modified On : Tus Jul 18 10:10:00 2017 13 // Update Count : 566 14 14 // 15 15 … … 454 454 case TypeData::Builtin: 455 455 if(td->builtintype == DeclarationNode::Zero) { 456 return new ZeroType( emptyQualifiers );456 return new ZeroType( noQualifiers ); 457 457 } 458 458 else if(td->builtintype == DeclarationNode::One) { 459 return new OneType( emptyQualifiers );459 return new OneType( noQualifiers ); 460 460 } 461 461 else { -
src/Parser/parserutility.cc
rdab7ac7 r21a5dde1 9 9 // Author : Rodolfo G. Esteves 10 10 // Created On : Sat May 16 15:30:39 2015 11 // Last Modified By : Peter A. Buhr12 // Last Modified On : Wed Jun 28 22:11:32201713 // Update Count : 711 // Last Modified By : Andrew Beach 12 // Last Modified On : Tus Jul 18 10:12:00 2017 13 // Update Count : 8 14 14 // 15 15 … … 26 26 UntypedExpr *comparison = new UntypedExpr( new NameExpr( "?!=?" ) ); 27 27 comparison->get_args().push_back( orig ); 28 comparison->get_args().push_back( new ConstantExpr( Constant( new ZeroType( emptyQualifiers ), "0", (unsigned long long int)0 ) ) );28 comparison->get_args().push_back( new ConstantExpr( Constant( new ZeroType( noQualifiers ), "0", (unsigned long long int)0 ) ) ); 29 29 return new CastExpr( comparison, new BasicType( Type::Qualifiers(), BasicType::SignedInt ) ); 30 30 } -
src/SynTree/Type.h
rdab7ac7 r21a5dde1 9 9 // Author : Richard C. Bilson 10 10 // Created On : Mon May 18 07:44:20 2015 11 // Last Modified By : Peter A. Buhr12 // Last Modified On : T hu Mar 23 16:16:36201713 // Update Count : 1 4911 // Last Modified By : Andrew Beach 12 // Last Modified On : Tus Jul 18 10:06:00 2017 13 // Update Count : 150 14 14 // 15 15 … … 172 172 }; 173 173 174 extern Type::Qualifiers emptyQualifiers; // no qualifiers on constants174 extern Type::Qualifiers noQualifiers; // no qualifiers on constants 175 175 176 176 class VoidType : public Type { -
src/libcfa/concurrency/alarm.c
rdab7ac7 r21a5dde1 31 31 32 32 //============================================================================================= 33 // time type 34 //============================================================================================= 35 36 #define one_second 1_000_000_000ul 37 #define one_milisecond 1_000_000ul 38 #define one_microsecond 1_000ul 39 #define one_nanosecond 1ul 40 41 __cfa_time_t zero_time = { 0 }; 42 43 void ?{}( __cfa_time_t * this ) { this->val = 0; } 44 void ?{}( __cfa_time_t * this, zero_t zero ) { this->val = 0; } 45 46 void ?{}( itimerval * this, __cfa_time_t * alarm ) { 47 this->it_value.tv_sec = alarm->val / one_second; // seconds 48 this->it_value.tv_usec = max( (alarm->val % one_second) / one_microsecond, 1000 ); // microseconds 49 this->it_interval.tv_sec = 0; 50 this->it_interval.tv_usec = 0; 51 } 52 53 54 void ?{}( __cfa_time_t * this, timespec * curr ) { 55 uint64_t secs = curr->tv_sec; 56 uint64_t nsecs = curr->tv_nsec; 57 this->val = (secs * one_second) + nsecs; 58 } 59 60 __cfa_time_t ?=?( __cfa_time_t * this, zero_t rhs ) { 61 this->val = 0; 62 return *this; 63 } 64 65 __cfa_time_t from_s ( uint64_t val ) { __cfa_time_t ret; ret.val = val * 1_000_000_000ul; return ret; } 66 __cfa_time_t from_ms( uint64_t val ) { __cfa_time_t ret; ret.val = val * 1_000_000ul; return ret; } 67 __cfa_time_t from_us( uint64_t val ) { __cfa_time_t ret; ret.val = val * 1_000ul; return ret; } 68 __cfa_time_t from_ns( uint64_t val ) { __cfa_time_t ret; ret.val = val * 1ul; return ret; } 69 70 //============================================================================================= 33 71 // Clock logic 34 72 //============================================================================================= … … 37 75 timespec curr; 38 76 clock_gettime( CLOCK_REALTIME, &curr ); 39 __cfa_time_t curr_time = ((__cfa_time_t)curr.tv_sec * TIMEGRAN) + curr.tv_nsec; 40 // LIB_DEBUG_PRINT_BUFFER_DECL( STDERR_FILENO, "Kernel : current time is %lu\n", 
curr_time ); 41 return curr_time; 77 return (__cfa_time_t){ &curr }; 42 78 } 43 79 44 80 void __kernel_set_timer( __cfa_time_t alarm ) { 45 LIB_DEBUG_PRINT_BUFFER_DECL( STDERR_FILENO, "Kernel : set timer to %llu\n", (__cfa_time_t)alarm ); 46 itimerval val; 47 val.it_value.tv_sec = alarm / TIMEGRAN; // seconds 48 val.it_value.tv_usec = max( (alarm % TIMEGRAN) / ( TIMEGRAN / 1_000_000L ), 1000 ); // microseconds 49 val.it_interval.tv_sec = 0; 50 val.it_interval.tv_usec = 0; 81 itimerval val = { &alarm }; 51 82 setitimer( ITIMER_REAL, &val, NULL ); 52 83 } … … 56 87 //============================================================================================= 57 88 58 void ?{}( alarm_node_t * this, thread_desc * thrd, __cfa_time_t alarm = 0, __cfa_time_t period = 0) {89 void ?{}( alarm_node_t * this, thread_desc * thrd, __cfa_time_t alarm = zero_time, __cfa_time_t period = zero_time ) { 59 90 this->thrd = thrd; 60 91 this->alarm = alarm; … … 65 96 } 66 97 67 void ?{}( alarm_node_t * this, processor * proc, __cfa_time_t alarm = 0, __cfa_time_t period = 0) {98 void ?{}( alarm_node_t * this, processor * proc, __cfa_time_t alarm = zero_time, __cfa_time_t period = zero_time ) { 68 99 this->proc = proc; 69 100 this->alarm = alarm; … … 153 184 154 185 void register_self( alarm_node_t * this ) { 186 alarm_list_t * alarms = &event_kernel->alarms; 187 155 188 disable_interrupts(); 156 verify( !systemProcessor->pending_alarm ); 157 lock( &systemProcessor->alarm_lock DEBUG_CTX2 ); 189 lock( &event_kernel->lock DEBUG_CTX2 ); 158 190 { 159 verify( validate( &systemProcessor->alarms ) );160 bool first = ! 
systemProcessor->alarms.head;161 162 insert( &systemProcessor->alarms, this );163 if( systemProcessor->pending_alarm) {164 tick_preemption();191 verify( validate( alarms ) ); 192 bool first = !alarms->head; 193 194 insert( alarms, this ); 195 if( first ) { 196 __kernel_set_timer( alarms->head->alarm - __kernel_get_time() ); 165 197 } 166 if( first ) { 167 __kernel_set_timer( systemProcessor->alarms.head->alarm - __kernel_get_time() ); 168 } 169 } 170 unlock( &systemProcessor->alarm_lock ); 198 } 199 unlock( &event_kernel->lock ); 171 200 this->set = true; 172 201 enable_interrupts( DEBUG_CTX ); … … 174 203 175 204 void unregister_self( alarm_node_t * this ) { 176 // LIB_DEBUG_PRINT_BUFFER_DECL( STDERR_FILENO, "Kernel : unregister %p start\n", this );177 205 disable_interrupts(); 178 lock( & systemProcessor->alarm_lock DEBUG_CTX2 );206 lock( &event_kernel->lock DEBUG_CTX2 ); 179 207 { 180 verify( validate( & systemProcessor->alarms ) );181 remove( & systemProcessor->alarms, this );182 } 183 unlock( & systemProcessor->alarm_lock );208 verify( validate( &event_kernel->alarms ) ); 209 remove( &event_kernel->alarms, this ); 210 } 211 unlock( &event_kernel->lock ); 184 212 enable_interrupts( DEBUG_CTX ); 185 213 this->set = false; 186 // LIB_DEBUG_PRINT_BUFFER_LOCAL( STDERR_FILENO, "Kernel : unregister %p end\n", this ); 187 } 214 } -
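The new `?{}( itimerval *, __cfa_time_t * )` constructor used by `__kernel_set_timer` splits a nanosecond delay into seconds plus microseconds and floors the microsecond part at 1000, so a sub-millisecond alarm still arms the timer instead of rounding down to "disarmed". A plain-C sketch of that conversion (the function name `to_itimerval` is illustrative, not the CFA constructor):

```c
#include <stdint.h>
#include <sys/time.h>

#define ONE_SECOND      1000000000ULL   /* nanoseconds per second */
#define ONE_MICROSECOND 1000ULL         /* nanoseconds per microsecond */

/* Convert a relative nanosecond delay into a one-shot setitimer() value.
 * Flooring tv_usec at 1000 keeps a tiny remainder from producing a zero
 * timer, which setitimer() would treat as "cancel the alarm". */
static struct itimerval to_itimerval(uint64_t ns) {
    struct itimerval val;
    uint64_t usec = (ns % ONE_SECOND) / ONE_MICROSECOND;
    val.it_value.tv_sec     = ns / ONE_SECOND;
    val.it_value.tv_usec    = usec > 1000 ? usec : 1000;
    val.it_interval.tv_sec  = 0;        /* one-shot: no repeat interval */
    val.it_interval.tv_usec = 0;
    return val;
}
```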
src/libcfa/concurrency/alarm.h
rdab7ac7 r21a5dde1 23 23 #include <assert.h> 24 24 25 typedef uint64_t __cfa_time_t;26 27 25 struct thread_desc; 28 26 struct processor; 27 28 struct timespec; 29 struct itimerval; 30 31 //============================================================================================= 32 // time type 33 //============================================================================================= 34 35 struct __cfa_time_t { 36 uint64_t val; 37 }; 38 39 // ctors 40 void ?{}( __cfa_time_t * this ); 41 void ?{}( __cfa_time_t * this, zero_t zero ); 42 void ?{}( __cfa_time_t * this, timespec * curr ); 43 void ?{}( itimerval * this, __cfa_time_t * alarm ); 44 45 __cfa_time_t ?=?( __cfa_time_t * this, zero_t rhs ); 46 47 // logical ops 48 static inline bool ?==?( __cfa_time_t lhs, __cfa_time_t rhs ) { return lhs.val == rhs.val; } 49 static inline bool ?!=?( __cfa_time_t lhs, __cfa_time_t rhs ) { return lhs.val != rhs.val; } 50 static inline bool ?>? ( __cfa_time_t lhs, __cfa_time_t rhs ) { return lhs.val > rhs.val; } 51 static inline bool ?<? ( __cfa_time_t lhs, __cfa_time_t rhs ) { return lhs.val < rhs.val; } 52 static inline bool ?>=?( __cfa_time_t lhs, __cfa_time_t rhs ) { return lhs.val >= rhs.val; } 53 static inline bool ?<=?( __cfa_time_t lhs, __cfa_time_t rhs ) { return lhs.val <= rhs.val; } 54 55 static inline bool ?==?( __cfa_time_t lhs, zero_t rhs ) { return lhs.val == rhs; } 56 static inline bool ?!=?( __cfa_time_t lhs, zero_t rhs ) { return lhs.val != rhs; } 57 static inline bool ?>? ( __cfa_time_t lhs, zero_t rhs ) { return lhs.val > rhs; } 58 static inline bool ?<? 
( __cfa_time_t lhs, zero_t rhs ) { return lhs.val < rhs; } 59 static inline bool ?>=?( __cfa_time_t lhs, zero_t rhs ) { return lhs.val >= rhs; } 60 static inline bool ?<=?( __cfa_time_t lhs, zero_t rhs ) { return lhs.val <= rhs; } 61 62 // addition/substract 63 static inline __cfa_time_t ?+?( __cfa_time_t lhs, __cfa_time_t rhs ) { 64 __cfa_time_t ret; 65 ret.val = lhs.val + rhs.val; 66 return ret; 67 } 68 69 static inline __cfa_time_t ?-?( __cfa_time_t lhs, __cfa_time_t rhs ) { 70 __cfa_time_t ret; 71 ret.val = lhs.val - rhs.val; 72 return ret; 73 } 74 75 __cfa_time_t from_s ( uint64_t ); 76 __cfa_time_t from_ms( uint64_t ); 77 __cfa_time_t from_us( uint64_t ); 78 __cfa_time_t from_ns( uint64_t ); 79 80 extern __cfa_time_t zero_time; 29 81 30 82 //============================================================================================= 31 83 // Clock logic 32 84 //============================================================================================= 33 34 #define TIMEGRAN 1_000_000_000L // nanosecond granularity, except for timeval35 85 36 86 __cfa_time_t __kernel_get_time(); … … 57 107 typedef alarm_node_t ** __alarm_it_t; 58 108 59 void ?{}( alarm_node_t * this, thread_desc * thrd, __cfa_time_t alarm = 0, __cfa_time_t period = 0);60 void ?{}( alarm_node_t * this, processor * proc, __cfa_time_t alarm = 0, __cfa_time_t period = 0);109 void ?{}( alarm_node_t * this, thread_desc * thrd, __cfa_time_t alarm = zero_time, __cfa_time_t period = zero_time ); 110 void ?{}( alarm_node_t * this, processor * proc, __cfa_time_t alarm = zero_time, __cfa_time_t period = zero_time ); 61 111 void ^?{}( alarm_node_t * this ); 62 112 -
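Stripped of CFA's `?<?`-style operator overloading, the `__cfa_time_t` wrapper introduced in this header is a newtype over a `uint64_t` nanosecond count. A plain-C sketch under that reading, with the operators as ordinary functions (names here are illustrative):

```c
#include <stdbool.h>
#include <stdint.h>

/* Newtype over a raw nanosecond count: wrapping the uint64_t in a
 * one-field struct stops plain integers from being passed where a
 * time value is expected. */
typedef struct { uint64_t val; } cfa_time;

/* Unit constructors, mirroring from_s/from_ms in the header. */
static cfa_time time_from_s (uint64_t s)  { return (cfa_time){ s  * 1000000000ULL }; }
static cfa_time time_from_ms(uint64_t ms) { return (cfa_time){ ms * 1000000ULL }; }

/* CFA's ?+?, ?-? and ?<? operators become ordinary functions in C. */
static cfa_time time_add(cfa_time l, cfa_time r) { return (cfa_time){ l.val + r.val }; }
static cfa_time time_sub(cfa_time l, cfa_time r) { return (cfa_time){ l.val - r.val }; }
static bool     time_lt (cfa_time l, cfa_time r) { return l.val < r.val; }
```

In CFA the overloaded `?-?` is what lets call sites such as `alarms->head->alarm - __kernel_get_time()` keep reading like integer arithmetic while gaining the type safety of the wrapper.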
src/libcfa/concurrency/coroutine
rdab7ac7 r21a5dde1 63 63 64 64 // Get current coroutine 65 extern volatile thread_local coroutine_desc *this_coroutine;65 extern thread_local coroutine_desc * volatile this_coroutine; 66 66 67 67 // Private wrappers for context switch and stack creation -
src/libcfa/concurrency/coroutine.c
rdab7ac7 r21a5dde1 26 26 } 27 27 28 #include "kernel" 29 #include "libhdr.h" 28 #include "kernel_private.h" 30 29 31 30 #define __CFA_INVOKE_PRIVATE__ 32 31 #include "invoke.h" 33 32 34 extern volatile thread_local processor * this_processor;35 33 36 34 //----------------------------------------------------------------------------- -
src/libcfa/concurrency/kernel
rdab7ac7 r21a5dde1 28 28 //----------------------------------------------------------------------------- 29 29 // Locks 30 bool try_lock ( spinlock * DEBUG_CTX_PARAM2 ); 31 void lock ( spinlock * DEBUG_CTX_PARAM2 );32 void lock_yield( spinlock * DEBUG_CTX_PARAM2 ); 33 void unlock ( spinlock * ); 30 void lock ( spinlock * DEBUG_CTX_PARAM2 ); // Lock the spinlock, spin if already acquired 31 void lock_yield( spinlock * DEBUG_CTX_PARAM2 ); // Lock the spinlock, yield repeatedly if already acquired 32 bool try_lock ( spinlock * DEBUG_CTX_PARAM2 ); // Lock the spinlock, return false if already acquired 33 void unlock ( spinlock * ); // Unlock the spinlock 34 34 35 35 struct semaphore { … 48 48 // Cluster 49 49 struct cluster { 50 __thread_queue_t ready_queue; 51 spinlock lock; 50 spinlock ready_queue_lock; // Ready queue lock 51 __thread_queue_t ready_queue; // Ready queue for threads 52 unsigned long long int preemption; // Preemption rate on this cluster 52 53 }; 53 54 … 76 77 static inline void ^?{}(FinishAction * this) {} 77 78 79 // Processor 80 // Wrapper around kernel threads 78 81 struct processor { 79 struct processorCtx_t * runner; 80 cluster * cltr; 81 pthread_t kernel_thread; 82 // Main state 83 struct processorCtx_t * runner; // Coroutine ctx that keeps the state of the processor 84 cluster * cltr; // Cluster from which to get threads 85 pthread_t kernel_thread; // Handle to pthreads 82 86 83 semaphore terminated; 84 volatile bool is_terminated; 87 // Termination 88 volatile bool do_terminate; // Set to true to notify that the processor should terminate 89 semaphore terminated; // Termination synchronisation 85 90 86 struct FinishAction finish; 91 // RunThread data 92 struct FinishAction finish; // Action to do after a thread is run 87 93 88 struct alarm_node_t * preemption_alarm; 89 unsigned int preemption; 94 // Preemption data 95 struct alarm_node_t * preemption_alarm; // Node which is added in the discrete event simulation 96 bool pending_preemption; // If true, a preemption was triggered in an unsafe region, the processor must preempt as soon as possible 90 97 91 bool pending_preemption; 92 93 char * last_enable; 98 #ifdef __CFA_DEBUG__ 99 char * last_enable; // Last function to enable preemption on this processor 100 #endif 94 101 }; 95 102 -
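The header now documents three acquisition modes for `spinlock`: `lock` spins, `lock_yield` yields the kernel thread between attempts, and `try_lock` returns `false` on contention. A hedged C11 sketch of those semantics using `atomic_flag` (illustrative only; the actual CFA implementation also threads `DEBUG_CTX_PARAM2` bookkeeping through every call):

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <sched.h>

// Minimal spinlock sketch: one test-and-set flag
typedef struct { atomic_flag taken; } spinlock;

static inline void spin_init( spinlock * l ) { atomic_flag_clear( &l->taken ); }

// Lock the spinlock, return false if already acquired
static inline bool spin_trylock( spinlock * l ) {
    return !atomic_flag_test_and_set_explicit( &l->taken, memory_order_acquire );
}

// Lock the spinlock, spin (busy-wait) if already acquired
static inline void spin_lock( spinlock * l ) {
    while( atomic_flag_test_and_set_explicit( &l->taken, memory_order_acquire ) ) {}
}

// Lock the spinlock, yield the kernel thread between failed attempts
static inline void spin_lock_yield( spinlock * l ) {
    while( atomic_flag_test_and_set_explicit( &l->taken, memory_order_acquire ) ) sched_yield();
}

// Unlock the spinlock
static inline void spin_unlock( spinlock * l ) {
    atomic_flag_clear_explicit( &l->taken, memory_order_release );
}
```

The yielding variant matters in the renamed `ready_queue_lock` path: on an oversubscribed machine, spinning against a holder that is not running wastes a whole timeslice, while yielding lets the holder run and release.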
src/libcfa/concurrency/kernel.c
rdab7ac7 r21a5dde1 42 42 //----------------------------------------------------------------------------- 43 43 // Kernel storage 44 #define KERNEL_STORAGE(T,X) static char X##Storage[sizeof(T)] 45 46 KERNEL_STORAGE(processorCtx_t, systemProcessorCtx); 47 KERNEL_STORAGE(cluster, systemCluster); 48 KERNEL_STORAGE(system_proc_t, systemProcessor); 49 KERNEL_STORAGE(thread_desc, mainThread); 44 KERNEL_STORAGE(cluster, mainCluster); 45 KERNEL_STORAGE(processor, mainProcessor); 46 KERNEL_STORAGE(processorCtx_t, mainProcessorCtx); 47 KERNEL_STORAGE(thread_desc, mainThread); 50 48 KERNEL_STORAGE(machine_context_t, mainThreadCtx); 51 49 52 cluster * systemCluster;53 system_proc_t * systemProcessor;50 cluster * mainCluster; 51 processor * mainProcessor; 54 52 thread_desc * mainThread; 55 53 … … 57 55 // Global state 58 56 59 volatile thread_local processor * this_processor; 60 volatile thread_local coroutine_desc * this_coroutine; 61 volatile thread_local thread_desc * this_thread; 57 thread_local coroutine_desc * volatile this_coroutine; 58 thread_local thread_desc * volatile this_thread; 59 thread_local processor * volatile this_processor; 60 62 61 volatile thread_local bool preemption_in_progress = 0; 63 62 volatile thread_local unsigned short disable_preempt_count = 1; … … 85 84 86 85 this->limit = (void *)(((intptr_t)this->base) - this->size); 87 this->context = & mainThreadCtxStorage;86 this->context = &storage_mainThreadCtx; 88 87 this->top = this->base; 89 88 } … … 125 124 126 125 void ?{}(processor * this) { 127 this{ systemCluster };126 this{ mainCluster }; 128 127 } 129 128 … … 131 130 this->cltr = cltr; 132 131 (&this->terminated){ 0 }; 133 this-> is_terminated= false;132 this->do_terminate = false; 134 133 this->preemption_alarm = NULL; 135 this->preemption = default_preemption();136 134 this->pending_preemption = false; 137 135 … … 142 140 this->cltr = cltr; 143 141 (&this->terminated){ 0 }; 144 this-> is_terminated= false;142 this->do_terminate = false; 145 143 
this->preemption_alarm = NULL; 146 this->preemption = default_preemption();147 144 this->pending_preemption = false; 148 145 this->kernel_thread = pthread_self(); 149 146 150 147 this->runner = runner; 151 LIB_DEBUG_PRINT_SAFE("Kernel : constructing systemprocessor context %p\n", runner);148 LIB_DEBUG_PRINT_SAFE("Kernel : constructing main processor context %p\n", runner); 152 149 runner{ this }; 153 150 } 154 151 155 LIB_DEBUG_DO( bool validate( alarm_list_t * this ); ) 156 157 void ?{}(system_proc_t * this, cluster * cltr, processorCtx_t * runner) {158 (&this->alarms){};159 (&this->alarm_lock){};160 this->pending_alarm = false;161 162 (&this->proc){ cltr, runner };163 164 verify( validate( &this->alarms ) );165 }166 167 152 void ^?{}(processor * this) { 168 if( ! this-> is_terminated) {153 if( ! this->do_terminate ) { 169 154 LIB_DEBUG_PRINT_SAFE("Kernel : core %p signaling termination\n", this); 170 this-> is_terminated= true;155 this->do_terminate = true; 171 156 P( &this->terminated ); 172 157 pthread_join( this->kernel_thread, NULL ); … 176 161 void ?{}(cluster * this) { 177 162 ( &this->ready_queue ){}; 178 ( &this->lock ){}; 163 ( &this->ready_queue_lock ){}; 164 165 this->preemption = default_preemption(); 179 166 } 180 167 … 199 186 200 187 thread_desc * readyThread = NULL; 201 for( unsigned int spin_count = 0; ! this-> is_terminated; spin_count++ )188 for( unsigned int spin_count = 0; ! this->do_terminate; spin_count++ ) 202 189 { 203 190 readyThread = nextThread( this->cltr ); … 343 330 verifyf( thrd->next == NULL, "Expected null got %p", thrd->next ); 344 331 345 lock( &systemProcessor->proc.cltr->lock DEBUG_CTX2 );346 append( & systemProcessor->proc.cltr->ready_queue, thrd );347 unlock( & systemProcessor->proc.cltr->lock );332 lock( &this_processor->cltr->ready_queue_lock DEBUG_CTX2 ); 333 append( &this_processor->cltr->ready_queue, thrd ); 334 unlock( &this_processor->cltr->ready_queue_lock ); 348 335 349 336 verify( disable_preempt_count > 0 ); … 352 339 thread_desc * nextThread(cluster * this) { 353 340 verify( disable_preempt_count > 0 ); 354 lock( &this-> lock DEBUG_CTX2 );341 lock( &this->ready_queue_lock DEBUG_CTX2 ); 355 342 thread_desc * head = pop_head( &this->ready_queue ); 356 unlock( &this-> lock );343 unlock( &this->ready_queue_lock ); 357 344 verify( disable_preempt_count > 0 ); 358 345 return head; … 452 439 // Start by initializing the main thread 453 440 // SKULLDUGGERY: the mainThread steals the process main thread 454 // which will then be scheduled by the systemProcessor normally455 mainThread = (thread_desc *)& mainThreadStorage;441 // which will then be scheduled by the mainProcessor normally 442 mainThread = (thread_desc *)&storage_mainThread; 456 443 current_stack_info_t info; 457 444 mainThread{ &info }; … 459 446 LIB_DEBUG_PRINT_SAFE("Kernel : Main thread ready\n"); 460 447 461 // Initialize the systemcluster462 systemCluster = (cluster *)&systemClusterStorage;463 systemCluster{};464 465 LIB_DEBUG_PRINT_SAFE("Kernel : Systemcluster ready\n");466 467 // Initialize the system processor and the systemprocessor ctx448 // Initialize the main cluster 449 mainCluster = (cluster *)&storage_mainCluster; 450 mainCluster{}; 451 452 LIB_DEBUG_PRINT_SAFE("Kernel : main cluster ready\n"); 453 454 // Initialize the main processor and the main processor ctx 468 455 // (the coroutine that contains the processing control flow)
469 systemProcessor = (system_proc_t *)&systemProcessorStorage; 470 systemProcessor{ systemCluster, (processorCtx_t *)&systemProcessorCtxStorage }; 471 472 // Add the main thread to the ready queue 473 // once resume is called on systemProcessor->runner the mainThread needs to be scheduled like any normal thread 474 ScheduleThread(mainThread); 456 mainProcessor = (processor *)&storage_mainProcessor; 457 mainProcessor{ mainCluster, (processorCtx_t *)&storage_mainProcessorCtx }; 475 458 476 459 // Initialize the global state variables 477 this_processor = &systemProcessor->proc;460 this_processor = mainProcessor; 478 461 this_thread = mainThread; 479 462 this_coroutine = &mainThread->cor; 480 disable_preempt_count = 1;481 463 482 464 // Enable preemption 483 465 kernel_start_preemption(); 484 466 485 // SKULLDUGGERY: Force a context switch to the system processor to set the main thread's context to the current UNIX 467 // Add the main thread to the ready queue 468 // once resume is called on mainProcessor->runner the mainThread needs to be scheduled like any normal thread 469 ScheduleThread(mainThread); 470 471 // SKULLDUGGERY: Force a context switch to the main processor to set the main thread's context to the current UNIX 486 472 // context. Hence, the main thread does not begin through CtxInvokeThread, like all other threads. The trick here is that 487 473 // mainThread is on the ready queue when this call is made. 488 resume( systemProcessor->proc.runner );474 resume( mainProcessor->runner ); 489 475 490 476 … 501 487 disable_interrupts(); 502 488 503 // SKULLDUGGERY: Notify the systemProcessor it needs to terminate.489 // SKULLDUGGERY: Notify the mainProcessor it needs to terminate.
504 490 // When its coroutine terminates, it return control to the mainThread 505 491 // which is currently here 506 systemProcessor->proc.is_terminated= true;492 mainProcessor->do_terminate = true; 507 493 suspend(); 508 494 … … 512 498 kernel_stop_preemption(); 513 499 514 // Destroy the systemprocessor and its context in reverse order of construction500 // Destroy the main processor and its context in reverse order of construction 515 501 // These were manually constructed so we need manually destroy them 516 ^( systemProcessor->proc.runner){};517 ^( systemProcessor){};502 ^(mainProcessor->runner){}; 503 ^(mainProcessor){}; 518 504 519 505 // Final step, destroy the main thread since it is no longer needed -
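The startup and shutdown code above constructs `mainThread`, `mainCluster`, and `mainProcessor` inside statically reserved byte arrays, because at that point no heap exists yet and destruction order must be controlled manually. A small C sketch of that `KERNEL_STORAGE`/`storage_##X` bootstrapping pattern (the `thread_desc` fields and ctor below are invented for illustration, not the CFA definitions):

```c
#include <assert.h>
#include <stddef.h>

// Reserve raw static storage for one T at load time; constructed later, by hand
// (mirrors the KERNEL_STORAGE(T, X) macro introduced in this changeset)
#define KERNEL_STORAGE(T, X) static char storage_##X[sizeof(T)]

typedef struct { int id; int ready; } thread_desc;   // hypothetical stand-in

KERNEL_STORAGE(thread_desc, mainThread);
static thread_desc * mainThread = NULL;

// Manual "constructor", run once the runtime is far enough along to call it
static void thread_desc_ctor( thread_desc * this, int id ) {
    this->id = id;
    this->ready = 1;
}

static thread_desc * kernel_startup( void ) {
    mainThread = (thread_desc *)&storage_mainThread;  // reuse the reserved bytes
    thread_desc_ctor( mainThread, 0 );
    return mainThread;
}
```

The macro in the changeset reserves a plain `char` array; a stricter version would add `_Alignas(T)` to the storage so the cast is guaranteed to be suitably aligned for `T`.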
src/libcfa/concurrency/kernel_private.h
rdab7ac7 r21a5dde1 31 31 extern "C" { 32 32 void disable_interrupts(); 33 void enable_interrupts_noRF();33 void enable_interrupts_noPoll(); 34 34 void enable_interrupts( DEBUG_CTX_PARAM ); 35 35 } … 45 45 thread_desc * nextThread(cluster * this); 46 46 47 // Block current thread and release/wake-up the following resources 47 48 void BlockInternal(void); 48 49 void BlockInternal(spinlock * lock); … 65 66 void spin(processor * this, unsigned int * spin_count); 66 67 67 struct system_proc_t { 68 processor proc; 69 68 struct event_kernel_t { 70 69 alarm_list_t alarms; 71 spinlock alarm_lock; 72 73 bool pending_alarm; 70 spinlock lock; 74 71 }; 75 72 76 extern cluster * systemCluster; 77 extern system_proc_t * systemProcessor; 78 extern volatile thread_local processor * this_processor; 79 extern volatile thread_local coroutine_desc * this_coroutine; 80 extern volatile thread_local thread_desc * this_thread; 73 extern event_kernel_t * event_kernel; 74 75 extern thread_local coroutine_desc * volatile this_coroutine; 76 extern thread_local thread_desc * volatile this_thread; 77 extern thread_local processor * volatile this_processor; 78 81 79 extern volatile thread_local bool preemption_in_progress; 82 80 extern volatile thread_local unsigned short disable_preempt_count; … 91 89 extern void ThreadCtxSwitch(coroutine_desc * src, coroutine_desc * dst); 92 90 91 //----------------------------------------------------------------------------- 92 // Utils 93 #define KERNEL_STORAGE(T,X) static char storage_##X[sizeof(T)] 94 95 #endif //KERNEL_PRIVATE_H 94 96 -
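`disable_preempt_count`, declared above, implements preemption safety as a per-kernel-thread counter: preemption is legal only when the counter is zero, and `enable_interrupts` (unlike `enable_interrupts_noPoll`) executes any context switch that a signal handler had to defer while the counter was non-zero. A simplified, single-threaded C sketch of that counter protocol (`do_ctx_switch` is a placeholder counter, not the real CFA CtxSwitch; names are illustrative):

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <assert.h>

// Per-thread interrupt-disable counter; starts at 1 during bootstrap, like the kernel
static _Thread_local atomic_ushort disable_preempt_count = 1;
static _Thread_local bool pending_preemption = false;   // set by a handler when preemption was unsafe
static _Thread_local int  preemptions_taken  = 0;

static void do_ctx_switch( void ) { preemptions_taken++; }  // stand-in for the deferred CtxSwitch

// Disable interrupts by incrementing the counter
void disable_interrupts( void ) {
    unsigned short new_val = atomic_fetch_add( &disable_preempt_count, 1 ) + 1;
    assert( new_val != 0 );  // overflow would mean unbalanced disables
}

// Decrement; if the count reaches zero, run any context switch deferred by a signal handler
void enable_interrupts( void ) {
    unsigned short prev = atomic_fetch_sub( &disable_preempt_count, 1 );
    assert( prev != 0 );     // enabling interrupts that were never disabled
    if( prev == 1 && pending_preemption ) {
        pending_preemption = false;
        do_ctx_switch();
    }
}

// Decrement without polling for deferred context switches
void enable_interrupts_noPoll( void ) {
    unsigned short prev = atomic_fetch_sub( &disable_preempt_count, 1 );
    assert( prev != 0 );
}
```

The `noPoll` variant matters on paths (such as the scheduler internals) where a context switch is about to happen anyway, so running a deferred one immediately would be redundant or unsafe.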
src/libcfa/concurrency/preemption.c
rdab7ac7 r21a5dde1 34 34 #endif 35 35 36 //TODO move to defaults 36 37 #define __CFA_DEFAULT_PREEMPTION__ 10000 37 38 39 //TODO move to defaults 38 40 __attribute__((weak)) unsigned int default_preemption() { 39 41 return __CFA_DEFAULT_PREEMPTION__; 40 42 } 41 43 44 // Short hands for signal context information 42 45 #define __CFA_SIGCXT__ ucontext_t * 43 46 #define __CFA_SIGPARMS__ __attribute__((unused)) int sig, __attribute__((unused)) siginfo_t *sfp, __attribute__((unused)) __CFA_SIGCXT__ cxt 44 47 48 // FwdDeclarations : timeout handlers 45 49 static void preempt( processor * this ); 46 50 static void timeout( thread_desc * this ); 47 51 52 // FwdDeclarations : Signal handlers 48 53 void sigHandler_ctxSwitch( __CFA_SIGPARMS__ ); 49 void sigHandler_alarm ( __CFA_SIGPARMS__ );50 54 void sigHandler_segv ( __CFA_SIGPARMS__ ); 51 55 void sigHandler_abort ( __CFA_SIGPARMS__ ); 52 56 57 // FwdDeclarations : sigaction wrapper 53 58 static void __kernel_sigaction( int sig, void (*handler)(__CFA_SIGPARMS__), int flags ); 54 LIB_DEBUG_DO( bool validate( alarm_list_t * this ); ) 55 59 60 // FwdDeclarations : alarm thread main 61 void * alarm_loop( __attribute__((unused)) void * args ); 62 63 // Machine specific register name 56 64 #ifdef __x86_64__ 57 65 #define CFA_REG_IP REG_RIP … 60 68 #endif 61 69 70 KERNEL_STORAGE(event_kernel_t, event_kernel); // private storage for event kernel 71 event_kernel_t * event_kernel; // kernel public handle to event kernel 72 static pthread_t alarm_thread; // pthread handle to alarm thread 73 74 void ?{}(event_kernel_t * this) { 75 (&this->alarms){}; 76 (&this->lock){}; 77 } 62 78 63 79 //============================================================================================= 65 81 //============================================================================================= 83 // Get next expired node 84 static inline alarm_node_t * get_expired( alarm_list_t * alarms, __cfa_time_t currtime ) { 85 if( !alarms->head ) return NULL; // If no alarms return null 86 if( alarms->head->alarm >= currtime ) return NULL; // If alarms head not expired return null 87 return pop(alarms); // Otherwise just pop head 88 } 89 90 // Tick one frame of the Discrete Event Simulation for alarms 67 91 void tick_preemption() { 68 alarm_ list_t * alarms = &systemProcessor->alarms;69 __cfa_time_t currtime = __kernel_get_time();70 71 // LIB_DEBUG_PRINT_BUFFER_DECL( STDERR_FILENO, "Ticking preemption @ %llu\n", currtime ); 72 while( alarms->head && alarms->head->alarm < currtime ) {73 alarm_node_t * node = pop(alarms);74 // LIB_DEBUG_PRINT_BUFFER_LOCAL( STDERR_FILENO, "Ticking %p\n", node ); 75 92 alarm_node_t * node = NULL; // Used in the while loop but cannot be declared in the while condition 93 alarm_list_t * alarms = &event_kernel->alarms; // Local copy for ease of reading 94 __cfa_time_t currtime = __kernel_get_time(); // Check current time once so that everything "happens at once" 95 96 // Loop through everything expired 97 while( node = get_expired( alarms, currtime ) ) { 98 99 // Check if this is a kernel alarm 76 100 if( node->kernel_alarm ) { 77 101 preempt( node->proc ); … 81 105 } 82 106 83 verify( validate( alarms ) ); 84 107 // Check if this is a periodic alarm 85 108 __cfa_time_t period = node->period; 86 109 if( period > 0 ) { 87 node->alarm = currtime + period; 88 // LIB_DEBUG_PRINT_BUFFER_LOCAL( STDERR_FILENO, "Reinsert %p @ %llu (%llu + %llu)\n", node, node->alarm, currtime, period ); 89 insert( alarms, node ); 110 node->alarm = currtime + period; // Alarm is periodic, add currtime to it (used cached current time) 111 insert( alarms, node ); // Reinsert the node for the next time it triggers 90 112 } 91 113 else { 92 node->set = false; 93 } 94 } 95 96 if( alarms->head ) { 97 __kernel_set_timer( alarms->head->alarm - currtime ); 98 } 99 100 verify( validate( alarms ) ); 101 // LIB_DEBUG_PRINT_BUFFER_LOCAL( STDERR_FILENO, "Ticking preemption done\n" ); 102 } 103 114 node->set = false; // Node is one-shot, just mark it as not pending 115 } 116 } 117 118 // If there are still alarms pending, reset the timer 119 if( alarms->head ) { __kernel_set_timer( alarms->head->alarm - currtime ); } 120 } 121 122 // Update the preemption of a processor and notify interested parties 104 123 void update_preemption( processor * this, __cfa_time_t duration ) { 105 LIB_DEBUG_PRINT_BUFFER_DECL( STDERR_FILENO, "Processor : %p updating preemption to %llu\n", this, duration );106 107 124 alarm_node_t * alarm = this->preemption_alarm; 108 duration *= 1000;109 125 110 126 // Alarms need to be enabled … 136 152 137 153 extern "C" { 154 // Disable interrupts by incrementing the counter 138 155 void disable_interrupts() { 139 156 __attribute__((unused)) unsigned short new_val = __atomic_add_fetch_2( &disable_preempt_count, 1, __ATOMIC_SEQ_CST ); 140 verify( new_val < (unsigned short)65_000 ); 141 verify( new_val != (unsigned short) 0 ); 142 } 143 144 void enable_interrupts_noRF() { 145 __attribute__((unused)) unsigned short prev = __atomic_fetch_add_2( &disable_preempt_count, -1, __ATOMIC_SEQ_CST ); 146 verify( prev != (unsigned short) 0 ); 147 } 148 157 verify( new_val < 65_000u ); // If this triggers, someone is disabling interrupts without enabling them 158 } 159 160 // Enable interrupts by decrementing the counter 161 // If counter reaches 0, execute any pending CtxSwitch 149 162 void enable_interrupts( DEBUG_CTX_PARAM ) { 150 processor * proc = this_processor; 151 thread_desc * thrd = this_thread; 163 processor * proc = this_processor; // Cache the processor now since interrupts can start happening after the atomic add 164 thread_desc * thrd = this_thread; // Cache the thread now since interrupts can start happening after the atomic add 165 152 166 unsigned short prev = __atomic_fetch_add_2( &disable_preempt_count, -1, __ATOMIC_SEQ_CST ); 153 verify( prev != (unsigned short) 0 ); 167 verify( prev != 0u ); // If this triggers, someone is enabling already-enabled interrupts 168 169 // Check if we need to preempt the thread because an interrupt was missed 154 170 if( prev == 1 && proc->pending_preemption ) { 155 171 proc->pending_preemption = false; … 157 173 } 158 174 175 // For debugging purposes : keep track of the last person to enable the interrupts 159 176 LIB_DEBUG_DO( proc->last_enable = caller; ) 160 177 } 161 } 162 178 179 // Enable interrupts by decrementing the counter 180 // Don't execute any pending CtxSwitch even if counter reaches 0 181 void enable_interrupts_noPoll() { 182 __attribute__((unused)) unsigned short prev = __atomic_fetch_add_2( &disable_preempt_count, -1, __ATOMIC_SEQ_CST ); 183 verify( prev != 0u ); // If this triggers, someone is enabling already-enabled interrupts 184 } 185 } 186 187 // sigprocmask wrapper : unblock a single signal 163 188 static inline void signal_unblock( int sig ) { 164 189 sigset_t mask; … 171 196 } 172 197 198 // sigprocmask wrapper : block a single signal 173 199 static inline void signal_block( int sig ) { 174 200 sigset_t mask; … 181 207 } 182 208 183 static inline bool preemption_ready() { 184 return disable_preempt_count == 0 && !preemption_in_progress; 185 } 186 187 static inline void defer_ctxSwitch() { 188 this_processor->pending_preemption = true; 189 } 190 191 static inline void defer_alarm() { 192 systemProcessor->pending_alarm = true; 193 } 194 209 // kill wrapper : signal a processor 195 210 static void preempt( processor * this ) { 196 211 pthread_kill( this->kernel_thread, SIGUSR1 ); 197 212 } 198 213 214 // reserved for future use 199 215 static void timeout( thread_desc * this ) { 200 216 //TODO : implement waking threads 201 217 } 202 218 203 219 220 // Check if a CtxSwitch signal handler should defer 221 // If true : preemption is safe 222 // If false : preemption is unsafe and marked as pending 223 static inline bool preemption_ready() { 224 bool ready = disable_preempt_count == 0 && !preemption_in_progress; // Check if preemption is safe 225 this_processor->pending_preemption = !ready; // Adjust the pending flag accordingly 226 return ready; 227 } 228 203 229 //============================================================================================= 204 230 // Kernel Signal Startup/Shutdown logic 205 231 //============================================================================================= 206 232 207 static pthread_t alarm_thread; 208 void * alarm_loop( __attribute__((unused)) void * args ); 209 233 // Startup routine to activate preemption 234 // Called from kernel_startup 210 235 void kernel_start_preemption() { 211 236 LIB_DEBUG_PRINT_SAFE("Kernel : Starting preemption\n"); 212 __kernel_sigaction( SIGUSR1, sigHandler_ctxSwitch, SA_SIGINFO ); 213 // __kernel_sigaction( SIGSEGV, sigHandler_segv , SA_SIGINFO ); 214 // __kernel_sigaction( SIGBUS , sigHandler_segv , SA_SIGINFO ); 237 238 // Start with preemption disabled until ready 239 disable_preempt_count = 1; 240 241 // Initialize the event kernel 242 event_kernel = (event_kernel_t *)&storage_event_kernel; 243 event_kernel{}; 244 245 // Setup proper signal handlers 246 __kernel_sigaction( SIGUSR1, sigHandler_ctxSwitch, SA_SIGINFO ); // CtxSwitch handler 247 // __kernel_sigaction( SIGSEGV, sigHandler_segv , SA_SIGINFO ); // Failure handler 248 // __kernel_sigaction( SIGBUS , sigHandler_segv , SA_SIGINFO ); // Failure handler 215 249 216 250 signal_block( SIGALRM ); … 219 253 } 220 254 255 // Shutdown routine to deactivate preemption 256 // Called from kernel_shutdown 221 257 void kernel_stop_preemption() { 222 258 LIB_DEBUG_PRINT_SAFE("Kernel : Preemption stopping\n"); 223 259 260 // Block all signals since we are already shutting down 224 261 sigset_t mask; 225 262 sigfillset( &mask ); 226 263 sigprocmask( SIG_BLOCK, &mask, NULL ); 227 264 265 // Notify the alarm thread of the shutdown 228 266 sigval val = { 1 }; 229 267 pthread_sigqueue( alarm_thread, SIGALRM, val ); 268 269 // Wait for the preemption thread to finish 230
270 pthread_join( alarm_thread, NULL ); 271 272 // Preemption is now fully stopped 273 231 274 LIB_DEBUG_PRINT_SAFE("Kernel : Preemption stopped\n"); 232 275 } 233 276 277 // RAII ctor/dtor for the preemption_scope 278 // Used by threads to control when they want to receive preemption signals 234 279 void ?{}( preemption_scope * this, processor * proc ) { 235 (&this->alarm){ proc };280 (&this->alarm){ proc, zero_time, zero_time }; 236 281 this->proc = proc; 237 282 this->proc->preemption_alarm = &this->alarm; 238 update_preemption( this->proc, this->proc->preemption ); 283 284 update_preemption( this->proc, from_us(this->proc->cltr->preemption) ); 239 285 } 240 286 … 242 288 disable_interrupts(); 243 289 244 update_preemption( this->proc, 0);290 update_preemption( this->proc, zero_time ); 245 291 } 246 292 … 249 295 //============================================================================================= 250 296 297 // Context switch signal handler 298 // Receives SIGUSR1 signal and causes the current thread to yield 251 299 void sigHandler_ctxSwitch( __CFA_SIGPARMS__ ) { 252 300 LIB_DEBUG_DO( last_interrupt = (void *)(cxt->uc_mcontext.gregs[CFA_REG_IP]); ) 253 if( preemption_ready() ) { 254 preemption_in_progress = true; 255 signal_unblock( SIGUSR1 ); 256 this_processor->pending_preemption = false; 257 preemption_in_progress = false; 258 BlockInternal( (thread_desc*)this_thread ); 259 } 260 else { 261 defer_ctxSwitch(); 262 } 263 } 264 301 302 // Check if it is safe to preempt here 303 if( !preemption_ready() ) { return; } 304 305 preemption_in_progress = true; // Sync flag : prevent recursive calls to the signal handler 306 signal_unblock( SIGUSR1 ); // We are about to CtxSwitch out of the signal handler, let other handlers in 307 preemption_in_progress = false; // Clear the in progress flag 308 309 // Preemption can occur here 310 311 BlockInternal( (thread_desc*)this_thread ); // Do the actual CtxSwitch 312 } 313 314 // Main of the alarm thread 315 // Waits on SIGALRM and sends SIGUSR1 to whomever needs it 265 316 void * alarm_loop( __attribute__((unused)) void * args ) { 317 // Block sigalrms to control when they arrive 266 318 sigset_t mask; 267 319 sigemptyset( &mask ); … 272 324 } 273 325 326 // Main loop 274 327 while( true ) { 328 // Wait for a sigalrm 275 329 siginfo_t info; 276 330 int sig = sigwaitinfo( &mask, &info ); 331 332 // If another signal arrived, something went wrong 277 333 assertf(sig == SIGALRM, "Kernel Internal Error, sigwait: Unexpected signal %d (%d : %d)\n", sig, info.si_code, info.si_value.sival_int); 278 334 279 335 LIB_DEBUG_PRINT_SAFE("Kernel : Caught alarm from %d with %d\n", info.si_code, info.si_value.sival_int ); 336 // Switch on the code (a.k.a. the sender) to 280 337 switch( info.si_code ) 281 338 { 339 // Timers can apparently be marked as sent for the kernel 340 // In either case, tick preemption 282 341 case SI_TIMER: 283 342 case SI_KERNEL: 284 343 LIB_DEBUG_PRINT_SAFE("Kernel : Preemption thread tick\n"); 285 lock( & systemProcessor->alarm_lock DEBUG_CTX2 );344 lock( &event_kernel->lock DEBUG_CTX2 ); 286 345 tick_preemption(); 287 unlock( & systemProcessor->alarm_lock );346 unlock( &event_kernel->lock ); 288 347 break; 348 // Signal was not sent by the kernel but by another thread 289 349 case SI_QUEUE: 350 // For now, other threads only signal the alarm thread to shut it down 351 // If this needs to change use info.si_value and handle the case here 290 352 goto EXIT; 291 353 } … 297 359 } 298 360 361 // Sigaction wrapper : register a signal handler 299 362 static void __kernel_sigaction( int sig, void (*handler)(__CFA_SIGPARMS__), int flags ) { 300 363 struct sigaction act; … 312 375 } 313 376 314 typedef void (*sa_handler_t)(int); 315 377 // Sigaction wrapper : restore default handler 316 378 static void __kernel_sigdefault( int sig ) { 317 379 struct sigaction act; 318 380 319 //act.sa_handler = SIG_DFL;381 act.sa_handler = SIG_DFL; 320 382 act.sa_flags = 0; 321
383 sigemptyset( &act.sa_mask ); -
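`tick_preemption` above is one frame of a discrete-event simulation: pop every node whose deadline has passed from a list kept sorted by deadline, fire it, and reinsert periodic nodes with a new deadline. A self-contained C sketch of that loop (simplified: plain integers for time, a `fired` counter instead of signalling a processor, and no event-kernel locking):

```c
#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>

// One pending alarm, linked into a list sorted by ascending deadline
typedef struct alarm_node {
    uint64_t alarm;             // absolute deadline
    uint64_t period;            // 0 for one-shot alarms
    bool set;                   // still pending?
    int fired;                  // delivery count (stand-in for preempt()/timeout())
    struct alarm_node * next;
} alarm_node;

typedef struct { alarm_node * head; } alarm_list;

// Insert keeping the list sorted by deadline
static void insert_alarm( alarm_list * l, alarm_node * n ) {
    alarm_node ** it = &l->head;
    while( *it && (*it)->alarm <= n->alarm ) it = &(*it)->next;
    n->next = *it;
    *it = n;
    n->set = true;
}

// Pop the head only if it expired strictly before currtime, else return NULL
static alarm_node * get_expired( alarm_list * l, uint64_t currtime ) {
    if( !l->head || l->head->alarm >= currtime ) return NULL;
    alarm_node * n = l->head;
    l->head = n->next;
    return n;
}

// Fire every expired alarm; reinsert periodic ones relative to the cached "now"
static void tick_preemption( alarm_list * alarms, uint64_t currtime ) {
    alarm_node * node;
    while( (node = get_expired( alarms, currtime )) ) {
        node->fired++;
        if( node->period > 0 ) {
            node->alarm = currtime + node->period;  // periodic: rearm from cached time
            insert_alarm( alarms, node );
        } else {
            node->set = false;                      // one-shot: no longer pending
        }
    }
}
```

Reading the clock once and rearming periodic alarms against that cached value is the same "everything happens at once" choice the changeset comments describe: it keeps a slow tick from skewing periodic alarms relative to each other.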
src/libcfa/concurrency/thread
rdab7ac7 r21a5dde1 54 54 } 55 55 56 extern volatile thread_local thread_desc *this_thread;56 extern thread_local thread_desc * volatile this_thread; 57 57 58 58 forall( dtype T | is_thread(T) ) -
src/libcfa/concurrency/thread.c
rdab7ac7 r21a5dde1 87 87 88 88 void yield( void ) { 89 BlockInternal( (thread_desc *)this_thread );89 BlockInternal( this_thread ); 90 90 } 91 91 -
src/tests/preempt_longrun/Makefile.am
rdab7ac7 r21a5dde1 25 25 CC = @CFA_BINDIR@/@CFA_NAME@ 26 26 27 TESTS = b arge block create disjoint enter enter3 processor stack wait yield27 TESTS = block create disjoint enter enter3 processor stack wait yield 28 28 29 29 .INTERMEDIATE: ${TESTS} -
src/tests/preempt_longrun/Makefile.in
rdab7ac7 r21a5dde1 453 453 REPEAT = ${abs_top_srcdir}/tools/repeat -s 454 454 BUILD_FLAGS = -g -Wall -Wno-unused-function -quiet @CFA_FLAGS@ -debug -O2 -DPREEMPTION_RATE=${preempt} 455 TESTS = b arge block create disjoint enter enter3 processor stack wait yield455 TESTS = block create disjoint enter enter3 processor stack wait yield 456 456 all: all-am 457 457 … … 635 635 TEST_LOGS="$$log_list"; \ 636 636 exit $$? 637 barge.log: barge638 @p='barge'; \639 b='barge'; \640 $(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \641 --log-file $$b.log --trs-file $$b.trs \642 $(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \643 "$$tst" $(AM_TESTS_FD_REDIRECT)644 637 block.log: block 645 638 @p='block'; \ -
src/tests/preempt_longrun/create.c
rdab7ac7 r21a5dde1 1 1 #include <kernel> 2 2 #include <thread> 3 4 static const unsigned long N = 2_000ul; 3 5 4 6 #ifndef PREEMPTION_RATE … … 16 18 int main(int argc, char* argv[]) { 17 19 processor p; 18 for(int i = 0; i < 10_000ul; i++) {20 for(int i = 0; i < N; i++) { 19 21 worker_t w[7]; 20 22 } -
src/tests/preempt_longrun/enter.c
rdab7ac7 r21a5dde1 3 3 #include <thread> 4 4 5 #undef N6 5 static const unsigned long N = 70_000ul; 7 6 -
src/tests/preempt_longrun/enter3.c
rdab7ac7 r21a5dde1 3 3 #include <thread> 4 4 5 #undef N6 5 static const unsigned long N = 50_000ul; 7 6 -
src/tests/preempt_longrun/processor.c
rdab7ac7 r21a5dde1 1 1 #include <kernel> 2 2 #include <thread> 3 4 static const unsigned long N = 5_000ul; 3 5 4 6 #ifndef PREEMPTION_RATE … … 15 17 16 18 int main(int argc, char* argv[]) { 17 for(int i = 0; i < 10_000ul; i++) {19 for(int i = 0; i < N; i++) { 18 20 processor p; 19 21 } -
src/tests/preempt_longrun/yield.c
rdab7ac7 r21a5dde1 1 1 #include <kernel> 2 2 #include <thread> 3 4 static const unsigned long N = 325_000ul; 3 5 4 6 #ifndef PREEMPTION_RATE … … 13 15 14 16 void main(worker_t * this) { 15 for(int i = 0; i < 325_000ul; i++) {17 for(int i = 0; i < N; i++) { 16 18 yield(); 17 19 } -
src/tests/sched-int-barge.c
rdab7ac7 r21a5dde1 5 5 #include <thread> 6 6 7 #ifndef N 8 #define N 100_000 7 static const unsigned long N = 50_000ul; 8 9 #ifndef PREEMPTION_RATE 10 #define PREEMPTION_RATE 10_000ul 9 11 #endif 10 12 13 unsigned int default_preemption() { 14 return 0; 15 } 11 16 enum state_t { WAIT, SIGNAL, BARGE }; 12 17 … … 14 19 15 20 monitor global_data_t { 16 bool done;21 volatile bool done; 17 22 int counter; 18 23 state_t state; … … 55 60 c->do_wait2 = ((unsigned)rand48()) % (c->do_signal); 56 61 57 //if(c->do_wait1 == c->do_wait2) sout | "Same" | endl;62 if(c->do_wait1 == c->do_wait2) sout | "Same" | endl; 58 63 } 59 64 … … 93 98 } 94 99 100 static thread_desc * volatile the_threads; 101 95 102 int main(int argc, char* argv[]) { 96 rand48seed(0); 97 processor p; 98 { 99 Threads t[17]; 100 } 103 rand48seed(0); 104 processor p; 105 { 106 Threads t[17]; 107 the_threads = (thread_desc*)t; 108 } 101 109 } -
src/tests/sched-int-block.c
rdab7ac7 r21a5dde1 5 5 #include <thread> 6 6 7 #ifndef N 8 #define N 10_000 7 #include <time.h> 8 9 static const unsigned long N = 5_000ul; 10 11 #ifndef PREEMPTION_RATE 12 #define PREEMPTION_RATE 10_000ul 9 13 #endif 14 15 unsigned int default_preemption() { 16 return PREEMPTION_RATE; 17 } 10 18 11 19 enum state_t { WAITED, SIGNAL, BARGE }; … … 101 109 102 110 int main(int argc, char* argv[]) { 103 rand48seed( 0);111 rand48seed( time( NULL ) ); 104 112 done = false; 105 113 processor p; -
src/tests/sched-int-disjoint.c
rdab7ac7 r21a5dde1 4 4 #include <thread> 5 5 6 #ifndef N 7 #define N 10_000 6 static const unsigned long N = 10_000ul; 7 8 #ifndef PREEMPTION_RATE 9 #define PREEMPTION_RATE 10_000ul 8 10 #endif 11 12 unsigned int default_preemption() { 13 return PREEMPTION_RATE; 14 } 9 15 10 16 enum state_t { WAIT, SIGNAL, BARGE }; -
src/tests/sched-int-wait.c
rdab7ac7 r21a5dde1 5 5 #include <thread> 6 6 7 #ifndef N 8 #define N 10_000 7 static const unsigned long N = 10_000ul; 8 9 #ifndef PREEMPTION_RATE 10 #define PREEMPTION_RATE 10_000ul 9 11 #endif 12 13 unsigned int default_preemption() { 14 return PREEMPTION_RATE; 15 } 10 16 11 17 monitor global_t {}; … … 114 120 int main(int argc, char* argv[]) { 115 121 waiter_left = 4; 116 processor p ;122 processor p[2]; 117 123 sout | "Starting" | endl; 118 124 { -
src/tests/test.py
rdab7ac7 r21a5dde1 221 221 if retcode == TestResult.SUCCESS: result_txt = "Done" 222 222 elif retcode == TestResult.TIMEOUT: result_txt = "TIMEOUT" 223 else : result_txt = "ERROR "223 else : result_txt = "ERROR code %d" % retcode 224 224 else : 225 225 if retcode == TestResult.SUCCESS: result_txt = "PASSED" 226 226 elif retcode == TestResult.TIMEOUT: result_txt = "TIMEOUT" 227 else : result_txt = "FAILED "227 else : result_txt = "FAILED with code %d" % retcode 228 228 229 229 #print result with error if needed