8a0ebb9 Get tests running/passing under Linux (#194) * Get tests running/passing under Linux - Fix up `dlopen` abstraction - Fix up some test cases to request hlsl (rather than default to dxbc) so they can run on non-Windows targets - Fix up the test runner to ignore tests that can't run on the current platform (and not count those as failures) - Fix file handle leak in process spawner abstraction - Get additional test-related applications building - More tweaks to Travis script; in theory deployment is set up now (yeah, right) * fixup * fixup: Travis environment variable syntax * fixup: Buffer->begin * fixup: actually run full tests on one config * fixup: add build status badge for Travis 29 September 2017, 20:43:08 UTC
74f2f47 First attempt at a Linux build (#193) * First attempt at a Linux build - Fix up places where C++ idioms were written assuming lenient behavior of Microsoft's compiler - Add a few more alternatives for platform-specific behavior where Windows was the only platform accounted for. - Add a basic Makefile that can at least invoke our build, even if it isn't doing good dependency tracking, etc. - Build `libslang.so` and `slangc` that depends on it, using a relative `RPATH` to make the binary portable (I hope) - Add an initial `.travis.yml` to see if we can trigger their build process. * Fixup: const bug in `List::Sort` I'm not clear why this gets picked up by the gcc *and* clang that Travis uses, but not the (newer) gcc I'm using on Ubuntu here, but I'm hoping it is just some missing `const` qualifiers. * Fixup: reorder specialization of "class info" Clang complains about things being specialized after being instantiated (implicitly), and I hope it is just the fact that I generate the class info for the roots of the hierarchy after the other cases. We'll see. * Fixup: add `platform.cpp` to unified/lumped build * Fixup: Windows uses `FreeLibrary` and not `UnloadLibrary` * Fixup: fix Windows project file to include new source file This obviously points to the fact that we are going to need to be generating these files sooner or later. 27 September 2017, 18:17:39 UTC
b6cf0f4 Merge pull request #191 from tfoleyNV/ir-work More work on IR-based lowering and cross-compilation 25 September 2017, 18:27:54 UTC
0aa440a Fixup: deal with hitting `.obj` size limits for VS When using the lumped/"unity" build approach for Slang, the resulting `.obj` files run into number-of-sections limits in the VS linker. For now I'm using the `/bigobj` command-line flag to work around this for the `hello` example, just so I can be sure the lumped build still works, but longer term it seems like we need to just drop that approach anyway. The `render-test` application was switched to link against `slang.dll` since there is no reason to have multiple apps use the lumped approach. 25 September 2017, 16:35:34 UTC
4ea0c26 Fixup: typo in `core` library meta-code This seems to be a case where I edited the generated output file, which meant that its timestamp was newer than the input and it didn't trigger a rebuild, so I didn't notice the problem until the change went into CI (which did a fresh build). 25 September 2017, 15:42:37 UTC
1972fa3 More work on IR-based lowering and cross-compilation None of these changes are made "live" at the moment. I'm just trying to get them checked in to avoid diverging too far from `master` at any point during development. - Add basic emit logic to produce GLSL from the IR in a few cases (the existing IR emit logic was ad hoc and HLSL-specific) - When lowering a function declaration, walk up its chain of parent declarations to collect additional parameters as needed - When lowering a call, make sure to add generic arguments that come from the declaration reference being called - Attach a "mangled name" to symbols when lowering, so that we can eventually use that name to resolve things for linkage. - After the above work, I had to apply some fixups to make sure that generic arguments *don't* get added when the user is calling an `__intrinsic_op` function, since those should map 1-to-1 down to instructions with just their ordinary parameter list. A big open question right now is whether I should continue to represent the generic arguments as just part of the ordinary argument list for a function, or split them out into separate `applyGeneric` and `apply` steps. A strongly related question is whether a declaration with generic parameters should lower into a single declaration, or one declaration nested inside an outer generic declaration. A good future step at this point would be to eliminate a lot of the `__intrinsic_op` stuff in favor of having the builtin functions include their own definitions, which might be in terms of a new expression-level construct for writing inline IR operations. This can't be done until the existing AST-to-AST path is no longer needed for cross-compilation purposes. More immediate next steps here: - We need a way to round-trip calls to external declarations that get handled by this mangled-name logic. Basically, if we are asked to output HLSL and we see a call to `_S...GetDimensions...(float4, t, a, ...)` we need to be able to walk the mangled name and get back to `t.getDimensions(a, ...)` without a whole lot of manual definitions to make things round-trip. - In the other case, where a declaration isn't built-in for the chosen target, we need to be able to load a module of target-specific definitions (which will somehow map back to symbols with certain mangled names) and then look these up (by mangled name) and then load/link/inline them into the user's IR to satisfy requirements in their code. 22 September 2017, 21:18:44 UTC
b206af7 Merge pull request #190 from tfoleyNV/use-warp-for-d3d11-tests Use WARP for D3D rendering tests. 21 September 2017, 18:20:57 UTC
289a71e Use WARP for D3D rendering tests. This should in principle allow for the D3D-only tests to run on AppVeyor so that we can validate running things for CI purposes (and is probably a whole lot easier than trying to plug the VM up to the rendering tests). I've switched up one of the tests so that it should run even on AppVeyor, so fingers crossed that it will actually run. 21 September 2017, 17:53:56 UTC
0116717 Initial work on a "VM" for Slang code (#189) At a high level, this commit adds two things: 1. A "bytecode" format for serializing Slang IR instructions and related structure (functions, "registers") 2. A virtual machine that can load and then execute code in that bytecode format. The reason for kicking off this work right now is that we *need* a way to run tests on Slang code generation that doesn't rely on having a GPU present (given that our CI runs on VM instances without GPUs), nor on textual comparison to the output of other compilers. With these features I've implemented a slapdash `slang-eval-test` test fixture that can run a (trivial) compute shader to verify our compilation flow through to bytecode. Some key design constraints/challenges: - The bytecode format should be "position independent" so that a user can just load a blob of data and then inspect it without having to deserialize into another format, allocate memory, etc. Eventually the bytecode format might be a replacement for our current reflection API (we used to base reflection off a similar format, but the cost/benefit wasn't there at the time and we switched to just using the AST). - The VM should be able to execute bytecode functions without doing any per-operation translation, JIT, etc. (translation of more coarse-grained symbols is okay). For now the VM is just being used to run tests, but eventually I'd like it to be viable for: - Running Slang-based code in the context of the compiler itself. This starts with stuff like constant-folding in the front-end, but could expand to more general metaprogramming features. - Running Slang-based code within a runtime application (e.g., a game engine) that wants to be able to run things like "parameter shader" code, or even just evaluate compute-like code on CPU (e.g., when supporting particles on both CPU and GPU). - Finally, the bytecode format should ideally be able to round-trip back to the IR without unacceptable loss of information. This requirement and the previous one play off of each other, because things like a traditional SSA phi operation are ugly when you have to actually *execute* them. This doesn't matter right now when we don't have SSA yet, but it might be part of the decision-making here. The actual implementation is centralized in `bytecode.{h,cpp}` and `vm.{h,cpp}`. Big picture notes: - The space of opcodes is shared between IR and bytecode (BC), with the hope that this makes translation of operations between the two easy. - The actual bytecode instruction stream relies on a variable-length encoding for integer values, including opcodes and operand numbers, so that the common case is single-byte encoding. - In the long term I intend to have a rule that if you use a single-byte encoding for an opcode, then all operands are required to use single-byte encodings too. Operations that need multi-byte operands would then be forced to use a multi-byte encoding of the op, and would be sent down a slower path in the interpreter. - The "bytecode"'s outer structure is based on ordinary data structures linked with pointers, but they are "relative pointers" so the actual structure is position-independent. - There are two main kinds of operands: registers and "constants." An operand is a signed integer where non-negative values indicate registers (with `index == operandVal`) and negative values indicate constants (with `index == ~operandVal`). 
- Registers are stored in the "stack frame" for a VM function call, and each has a fixed offset based on the size of the type and those that come before it. Conceptually, registers are allowed to overlap if they aren't live at the same time, and we manage this with a simple stack model: every register is supposed to identify the register that comes directly before it (this isn't implemented yet). - "Constants" are more realistically a representation of "captured" values, but they are currently also how constants come in. Basically we can use a compact range of indices in the bytecode for a function, and each of these indices indirectly refers to some value in the next outer scope. - The actual encoding of bytecode instructions right now is largely ad-hoc and very wasteful (we encode the type on everything, and we also encode everything as if it had varargs). - In some cases, an instruction needs to know the types of the values involved (e.g., because it needs to load an array element, which means copying a number of bytes based on the size). The way the VM works, we have types attached to our registers, so we currently get sneaky and look at those types in some ops. Longer term it makes sense to encode the required type info directly in the BC. - There's a whole lot of hand-waving going on with how the actual top-level bytecode module gets loaded, because of the way we currently treat the top-level module as an instruction stream in the IR. This means that we try to represent the loaded module as a "stack frame" for a call to the module as a function, but that approach has serious problems, and isn't realistically what we want to do. 21 September 2017, 17:21:34 UTC
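The register-versus-constant operand convention above can be summarized in a few lines of code. The sketch below is purely illustrative (the type and function names are invented, not the actual Slang bytecode API), assuming only the sign convention described in this commit message.
```
// Hypothetical sketch of the operand convention described above; the names
// here are invented for illustration and are not the real Slang VM types.
#include <cstdint>
#include <cstdio>

struct DecodedOperand
{
    bool     isConstant; // false => register in the current frame, true => captured/constant value
    uint32_t index;      // index into the register array or the constant table
};

DecodedOperand decodeOperand(int32_t operandVal)
{
    DecodedOperand result;
    if (operandVal >= 0)
    {
        // Non-negative operand values name registers: index == operandVal
        result.isConstant = false;
        result.index = uint32_t(operandVal);
    }
    else
    {
        // Negative operand values name constants: index == ~operandVal
        result.isConstant = true;
        result.index = uint32_t(~operandVal);
    }
    return result;
}

int main()
{
    DecodedOperand r = decodeOperand(3);  // register #3
    DecodedOperand c = decodeOperand(~2); // constant #2
    std::printf("reg? %d idx %u / reg? %d idx %u\n", !r.isConstant, r.index, !c.isConstant, c.index);
    return 0;
}
```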
10b62ee IR: handle control flow constructs (#186) * IR: handle control flow constructs This change includes a bunch of fixes and additions to the IR path: - `slang-ir-assembly` is now a valid output target (so we can use it for testing) - This uses what used to be the IR "dumping" logic, revamped to support much prettier output. - A future change will need to add back support for less prettified output to use when actually debugging - IR generation for `for` loops and `if` statements is supported - HLSL output from the above control flow constructs is implemented - Revamped the handling of l-values, and in particular work on compound ops like `+=` - Add basic IR support for `groupshared` variables - Add basic IR support for storing compute thread-group size - Output semantics on entry point parameters - This uses the AST structures to find semantics, so it still needs work - Pass through loop unroll flags - This is required to match `fxc` output, at least until we implement unrolling ourselves. * Fixup: 64-bit build issues. * fixup for merge 14 September 2017, 22:37:05 UTC
8cdfce5 Merge pull request #184 from tfoleyNV/design-docs Add an initial design document on interfaces 13 September 2017, 19:36:53 UTC
5c45b09 Merge branch 'master' into design-docs 13 September 2017, 19:30:54 UTC
11bd663 Add an initial design document on interfaces This covers interfaces, generics, and associated types - hopefully with enough detail that we can start writing up example programs that we believe should compile. 13 September 2017, 19:27:31 UTC
3763ca9 Merge pull request #181 from tfoleyNV/ir-lowering-work Get IR working for `AdaptiveTessellationCS40/Render` test 12 September 2017, 18:25:16 UTC
2e0f8f2 Get IR working for `AdaptiveTessellationCS40/Render` test I had expected this to be the first case where control-flow instructions were needed, but it turns out that we aren't testing that entry point right now. The real work/fix here ended up being handling of the `row_major` layout qualifier on a matrix inside a `cbuffer`. The existing AST-based code was passing it through easily (although I don't believe it was handling the layout rules right). Getting it working in the IR involved beefing up the type-layout behavior so that it can handle explicit layout qualifiers (at least for matrix layout) which should also improve the layout computation for non-square matrices with nonstandard layout in the AST-based path. There are still some annoying things left to do: - The `row_major` and `column_major` layout qualifiers in HLSL/GLSL mean different things (well, they mean the reverse of one another) so I need to validate that the GLSL case is working remotely correctly. - The layout logic isn't handling other explicit-layout cases as supported by GLSL (but of course GLSL is a far lower priority than HLSL/Slang) - There is currently no way to pass in an explicit matrix layout flag to Slang to control the default behavior. - Any client who was using Slang for HLSL pass-through and then applying a non-default flag on their HLSL->* compilation will get nasty unexpected behavior when the IR goes live - and they are already dealing with nasty behavior around non-square matrices not matching layout between Slang and the downstream. - The logic that gleans layout modes from a variable declaration is currently only being applied for fields of structure types (which applies to `cbuffer` declarations as well), and not to global-scope uniform variables. - We need test cases for all of this. 12 September 2017, 16:56:41 UTC
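To make the `row_major`/`column_major` distinction above concrete, here is a simplified, hypothetical layout computation for a matrix inside a constant buffer, using the classic 16-byte-register packing; it is only a sketch of the rule being discussed, not the actual Slang type-layout code.
```
// Simplified sketch: a rowCount x colCount matrix of 4-byte scalars stored as an
// array of vectors, where each vector after the first starts on a 16-byte register.
#include <cstdio>

struct MatrixLayout
{
    int vectorCount;    // how many 16-byte-aligned vectors are stored
    int elementsPerVec; // scalars packed into each vector
    int sizeInBytes;
};

MatrixLayout layoutMatrix(int rowCount, int colCount, bool rowMajor)
{
    MatrixLayout m;
    m.vectorCount    = rowMajor ? rowCount : colCount;
    m.elementsPerVec = rowMajor ? colCount : rowCount;
    m.sizeInBytes    = (m.vectorCount - 1) * 16 + m.elementsPerVec * 4;
    return m;
}

int main()
{
    // A non-square matrix shows why the qualifier matters for layout:
    MatrixLayout rm = layoutMatrix(3, 4, true);  // row_major float3x4
    MatrixLayout cm = layoutMatrix(3, 4, false); // column_major float3x4
    std::printf("row_major: %d bytes, column_major: %d bytes\n", rm.sizeInBytes, cm.sizeInBytes);
    return 0;
}
```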
b0b0763 Merge pull request #180 from tfoleyNV/ir-lowering-work Get another test working with IR codegen 12 September 2017, 00:38:49 UTC
4644b64 Get another test working with IR codegen - Add support for matrix types in IR/codegen - Add support for basic indexing operations in IR/codegen 11 September 2017, 22:27:08 UTC
e2de1ea Merge pull request #179 from tfoleyNV/ir-lowering-work Support IR-based codegen for a few more examples. 11 September 2017, 21:46:20 UTC
2055d54 Support IR-based codegen for a few more examples. The main interesting change here is around support for lowering of calls to "subscript" operations (what a C++ programmer would think of as `operator[]`). An important infrastructure change here was to add an explicit AST-node representation for a "static member expression" which we use whenever a member is looked up in a type as opposed to a value. The implementation of this probably isn't robust yet, but it turns out to be important to be able to tell such cases apart. 11 September 2017, 21:06:02 UTC
80fb7b0 Merge pull request #178 from tfoleyNV/boilerplate-generator Initial work on boilerplate code generator 11 September 2017, 17:27:41 UTC
14137cb Initial work on boilerplate code generator The goal here is to get the Slang "standard library" code out of string literals and into something a bit more like an actual code file. This is handled by having a `slang-generate` tool that can translate a "template" file that mixes raw Slang code (or any language we want to generate...) with generation logic that is implemented in C++ (currently). This work isn't final by any stretch of the imagination, but it moves a lot of code and not merging it ASAP will complicate other changes. My expectation is that the generator tool will be beefed up on an as-needed basis, to get our stdlib code working. Similarly, the stdlib code does not really take advantage of the new approach as much as it could. That is something we can clean up along the way as we do modifications of the stdlib. 11 September 2017, 16:50:56 UTC
0e566a6 Merge pull request #177 from tfoleyNV/ir-work Replace old notion of "intrinsic" operations 07 September 2017, 17:31:37 UTC
ced92a0 Fixup: fix uninitialized memory bug This is a bug that already existed in the IR code, but wasn't getting triggered on existing test cases, and only seems to affect the 64-bit target right now. The "key" value being constructed to cache and re-use constants during IR generation wasn't actually being fully initialized, and so garbage values in it could cause the lookup of an existing value to fail where it should succeed. 07 September 2017, 16:50:50 UTC
ad35395 Replace old notion of "intrinsic" operations The code previously had an enumerated type for "intrinsic" operations, and allowed functions to be marked `__intrinsic_op(...)` to indicate the operation they map to. The nature of the IR meant that each of these intrinsic ops had to have a corresponding IR opcode, but the `enum` types weren't the same. This change cleans things up a bit by deciding that the `__intrinsic_op(...)` modifier names an actual IR opcode, and so the `IntrinsicOp` enum is gone. The biggest source of complexity here is that there are certain operations that need to be "intrinsic"-ish for the purposes of the current AST-based translation path, because we need them to round-trip from source to AST and back. Right now this is being handled by defining a bunch of "pseudo-ops" which can be used in the `__intrinsic_op` modifier, but which are *not* meant to be represented in the IR. Currently I don't actually handle this during IR generation. In the long run, once we are using IR for everything that needs cross-compilation, we should be able to eliminate the pseudo-ops in favor of just having these be ordinary (inline) functions defined in the stdlib (e.g., the `+=` operator can just have a direct definition). There was a second category of modifier that gets a little caught up in this, which is the `__intrinsic` modifier, which got used in two ways: 1. A function marked `__intrinsic(glsl, ...)` had what I call a "target intrinsic" modifier, which specified how to lower it for a specific target (e.g., GLSL). 2. A function just marked `__intrinsic` was supposed to be a marker for "this function shouldn't be emitted in the output, because the implementation is expected to be provided" The latter category of function should really be an `__intrinsic_op`, so I translated all those uses. I added a tiny bit of sugar so that `__intrinsic_op` without an explicit opcode will look up an opcode based on the name of the function being called, so that an operation like `sin` can automatically be plumbed through to an equivalent IR op. (The first category is a stopgap for the AST-based cross-compilation, and will hopefully be replaced by something better as we get the IR-based path working). Getting the switch from `__intrinsic` to `__intrinsic_op` working required shuffling around some code in `emit.cpp` that handles looking up those modifiers and emitting builtin operations appropriately during cross-compilation. Depending on where we go with things, a possible extension of this approach is to allow multiple operands to `__intrinsic_op` so that the first specifies the opcode, and then the rest are literal arguments to specify "sub-ops." This could help us handle stuff like texture-fetch operations without an explosion in the number of opcodes. I still need to think about whether this is a good idea or not. 07 September 2017, 16:16:41 UTC
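The name-based opcode lookup mentioned above ("`__intrinsic_op` without an explicit opcode will look up an opcode based on the name") could look roughly like the following sketch; the enum values and table are invented for illustration and are not the real Slang opcode set.
```
// Hypothetical sketch: map a declaration name (e.g. "sin") to an IR opcode when
// `__intrinsic_op` doesn't spell one out. Names and enum values are illustrative.
#include <cstring>

enum IROp { kIROp_Invalid, kIROp_Add, kIROp_Sin, kIROp_Cos };

struct OpNameEntry { const char* name; IROp op; };

static const OpNameEntry kOpNames[] =
{
    { "add", kIROp_Add },
    { "sin", kIROp_Sin },
    { "cos", kIROp_Cos },
};

IROp findIROpByName(const char* declName)
{
    for (const auto& entry : kOpNames)
    {
        if (std::strcmp(entry.name, declName) == 0)
            return entry.op;
    }
    return kIROp_Invalid; // caller can then diagnose a bad `__intrinsic_op` use
}

int main()
{
    return findIROpByName("sin") == kIROp_Sin ? 0 : 1;
}
```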
ca16ede Merge pull request #176 from tfoleyNV/ir-work Continue work on IR-based codegen 06 September 2017, 21:15:11 UTC
5900f32 Continue work on IR-based codegen This gets us far enough that we can convert a single test case to use the IR, under the new `-use-ir` flag. Getting this merged into mainline will at least ensure that we keep the IR path working in a minimal fashion, even when we have to add functionality to the existing AST-based path. There is definitely some clutter here from keeping both IR-based and AST-based translation around, but I don't want to have a long-lived branch for the IR that gets further and further away from the `master` branch that is actually getting used and tested. Summary of changes: - Add pointer types and basic `load` operation to be able to handle variable declarations - Add basic `call` instruction type - Add simple address math for field reference in l-value - Always add IR for referenced decls to global scope - Add notion of "intrinsic" type modifier, which maps a type declaration directly to an IR opcode (plus optional literal operands to handle things like texture/sampler flavor) - Improve printing of IR instructions, types, operands - Add constant-buffer type to IR - Allow any instruction to be detected as "should be folded into use sites" and use this to tag things of constant-buffer type - Also add logic for implicit base on member expressions, to handle references to `cbuffer` members - Add connection back to original decl to IR variables (including global shader parameters...) - Use reflection name instead of true name when emitting HLSL from IR (so that we can match HLSL output) - Make IR include decorations for type layout - Re-use existing emit logic for HLSL semantics to output `register` semantics for IR-based code - Make IR-based codegen be an option we can enable from the command line - It still isn't on by default (it can barely manage a trivial shader), but it seems better to enable it always instead of putting it under an `#ifdef` - Fix up how we check for intrinsic operations during AST-based cross compilation so that adding new intrinsic ops for the IR won't break codegen. 06 September 2017, 20:52:04 UTC
e59a1b3 Merge pull request #175 from tfoleyNV/implicit-conversion Move implicit conversion operations to stdlib 05 September 2017, 20:16:21 UTC
8b91fb8 Add a basic test case for HLSL implicit conversions This at least tries to capture some of the basic rules about what implicit conversions are allowed. 05 September 2017, 19:51:33 UTC
ff7c190 Move implicit conversion operations to stdlib - Previously, there were a variety of rules in `check.cpp` to pick the conversion cost for various cases involving scalar, vector, and matrix types. - The main problem of the previous approach is that any lowering pass would need to convert an arbitrary "type cast" node into the right low-level operation(s). - The new approach is that a type conversion (implicit or explicit) always resolves as a call to a constructor/initializer for the destination type. This means that the existing rules around marking operations as builtins should work for lowering. - To support this, the checking logic needs to perform lookup of initializers/constructors when asked to perform conversion between types. It does this by re-using the existing logic for lookup and overload resolution if/when a type was applied in an ordinary context. - Next, we define a modifier that can be attached to constructors/initializers to mark them as suitable for implicit conversion, and associate them with the correct cost to be used when doing overload comparisons. - We add the modifier to all the scalar-to-scalar cases in the stdlib, using the logic that previously existed in semantic checking. - Next we add cases for general vector-to-scalar conversions that also convert type, using the same cost computation as above. - This probably misses various cases, but at this point they can hopefully be added just in the stdlib. - One gotcha here is that in lowering, we need to make sure to lower any kind of call expression to another call expression of the same AST node class, so that we don't lose information on what casts were implicit/hidden in the source-to-source case. Two notes for potential longer-term changes: 1. There is still some duplication between the type conversion declarations here and the "join" logic for types used for generic arguments. Ideally we'd eventually clean up the "join" logic to be based on convertibility, but that isn't a high priority right now, as long as joins continue to pick the right type. 2. It is a bit gross to have to declare all the N^2 conversions for vector/matrix types to duplicate the cases for scalars. For the simple scalar-to-vector case, we might try to support multiple conversion "steps" where both a scalar-to-scalar and a scalar-to-vector step can be allowed (this could be tagged on the modifiers already introduced). That simple option doesn't scale to vector-to-vector element type conversions, though, where you'd really want to make it a generic with a constraint like: vector<T,N> init<U>(vector<U,N> value) where T : ConvertibleFrom<U>; Here the `ConvertibleFrom<U>` interface expresses the fact that a conforming type has an initializer that takes a `U`. What doesn't appear in this context is any notion of conversion costs. We'd need some kind of system for computing the conversion cost of the vector conversion from the cost of the `T` to `U` conversion. 05 September 2017, 19:15:28 UTC
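Since the conversion-cost idea above drives overload comparison, a minimal sketch of that ranking step follows; the types are made up and only illustrate "prefer the candidate with the cheapest implicit conversions, and flag ties as ambiguous", not the actual Slang checker.
```
// Minimal sketch (invented types): rank call candidates by total conversion cost
// and report ambiguity when two candidates tie for the cheapest cost.
#include <climits>
#include <vector>

struct Candidate
{
    int conversionCost; // 0 = exact match; larger = more expensive implicit conversions
    int declId;         // stand-in for a reference to the callee declaration
};

// Returns the index of the unique cheapest candidate, or -1 on ambiguity/no candidates.
int pickBestCandidate(const std::vector<Candidate>& candidates)
{
    int bestCost = INT_MAX;
    int bestIndex = -1;
    bool ambiguous = false;
    for (int i = 0; i < (int)candidates.size(); i++)
    {
        if (candidates[i].conversionCost < bestCost)
        {
            bestCost = candidates[i].conversionCost;
            bestIndex = i;
            ambiguous = false;
        }
        else if (candidates[i].conversionCost == bestCost)
        {
            ambiguous = true;
        }
    }
    return ambiguous ? -1 : bestIndex;
}

int main()
{
    std::vector<Candidate> cands = { { 10, 1 }, { 2, 2 }, { 10, 3 } };
    return pickBestCandidate(cands) == 1 ? 0 : 1;
}
```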
6207340 Merge pull request #174 from tfoleyNV/modifier-lowering-fix Fix some issues around cloned modifiers. 31 August 2017, 18:08:54 UTC
60ed105 Fix some issues around cloned modifiers. The root of the problem here is that: - We do a shallow copy of modifiers when "lowering" declarations/statements, by just copying over the head pointer of the linked list of modifiers - During lowering we sometimes add additional modifiers (only used during lowering), and these can thus accidentally get added to the end of the list of modifiers for the original declaration (rather than just the lowered decl) - If the same declaration is used by multiple entry points to be output, then a modifier added by the first entry point (which could reference entry-point-specific storage) will be earlier in the modifier list and might be picked up by a later entry point, so that we dereference already released memory. The simple fix for right now is to use the support for a "shared" modifier node to ensure that each lowered declaration/statement gets a unique modifier list. A better long-term fix is: 1. Don't use modifiers to store general side-band information, and instead use proper lookup tables that own their contents. 2. Don't use a linked list to store modifiers (this was done to make splicing easy, but now we have a whole class of bugs related to bad splices), and be willing to clone them as needed. 31 August 2017, 17:41:03 UTC
227f9f5 Merge pull request #173 from tfoleyNV/resources-in-structs-fixes Fix some resources-in-structs bugs 26 August 2017, 03:13:19 UTC
c077a08 Fixup for test failure AppVeyor has a different version of fxc installed by default, and it produces subtly different output for this test case. It seems like later versions are clever enough to completely eliminate an empty `cbuffer` declaration, but earlier versions aren't. I'm actually not entirely sure why Slang is successfully eliminating the cbuffer as well, but the output DXBC implies it was not generated. 26 August 2017, 02:21:55 UTC
7c35d9e Try to print failure from AppVeyor tests 26 August 2017, 02:15:18 UTC
e62f905 Fixup: handle splice of multiple modifiers I changed the logic so that it might splice a new modifier into the existing linked list (not just at the end), but failed to account for the case where what we are splicing in isn't just a single modifier, but a whole *list* (which occurs when splicing in the shared modifiers themselves). This change handles that case with a bit of annoying linked-list cleverness. 26 August 2017, 01:55:49 UTC
8c68434 Fix some resources-in-structs bugs Fixes #171 Fixes #172 These two bugs related to bad logic in handling of splitting resource-containing `cbuffer` declarations. - Issue #171 was the case where a `cbuffer` *only* had resource fields, in which case we crashed whenever referencing any field (some code was assuming there had to be non-resource fields) - Issue #172 was a case where two fields were declared with a single declaration (`Texture2D a, b;`), and the logic we had for tracking resource-type fields was accidentally tagging *both* fields with a single modifier so that field `b` would get confused for `a` in some contexts, and attempts to access `b` would crash. Both issues are now fixed, and regression tests have been added. 25 August 2017, 22:48:49 UTC
7e05e06 Merge pull request #170 from tfoleyNV/ir Add some dummy logic to print IR to HLSL 17 August 2017, 22:22:24 UTC
ec8175c [ir] Add support for "decorations" on instructions The terminology here is similar to SPIR-V. For right now the only decoration exposed is a fairly brute-force one that just points back to a high-level declaration so that we can look up info on it that might affect how we print output. 17 August 2017, 21:51:09 UTC
95348fd [ir] Represent fields more directly Previously, a `StructType` was an ordinary instruction that took a variable number of types as operands, representing the types of fields. This ends up being inconvenient for a few reasons: - To add decorations to the fields, you'd end up having to decorate the struct type instead (SPIR-V has this problem) - You need to compute field indices during lowering, when you might prefer to defer that until later - The get/set field operations now need an index, which needs to be an explicit operand, which means a magic numeric literal floating around to represent the index. The new approach fixes the first two of these, and at least makes the last one a bit nicer. A `StructDecl` is now a parent instruction, and its sub-instructions represent the fields of the type - each field is an explicit instruction of type `StructField`. The operation to extract a field takes a direct reference to the struct field, so everything is quite explicit. 17 August 2017, 21:17:29 UTC
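The structural change described above amounts to making fields first-class child instructions. The sketch below uses invented class names (not the real Slang IR classes) just to show the parent/child shape and a field-extract that points at a field instruction directly.
```
// Illustrative sketch with hypothetical types: a struct type is a parent
// instruction, its fields are child instructions, and a field-extract holds a
// direct pointer to the field rather than a numeric index operand.
#include <string>
#include <vector>

struct IRInst
{
    std::string          name;
    std::vector<IRInst*> children; // sub-instructions of a parent instruction
};

struct IRStructField : IRInst
{
    // Per-field type and decorations can now hang off the field itself.
};

struct IRStructDecl : IRInst
{
    IRStructField* getField(size_t index)
    {
        return static_cast<IRStructField*>(children[index]);
    }
};

struct IRFieldExtract : IRInst
{
    IRInst*        base;  // the struct-typed value being read
    IRStructField* field; // direct reference to the field instruction
};

int main()
{
    IRStructField field; field.name = "position";
    IRStructDecl s; s.children.push_back(&field);
    IRFieldExtract extract; extract.base = &s; extract.field = s.getField(0);
    return extract.field == &field ? 0 : 1;
}
```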
ad19484 Add some dummy logic to print IR to HLSL - Change IR instructions to just hold an integer opcode instead of a pointer to the "info" structure - Externalize definition of IR instructions to a header file, and use the "X macro" approach to allow generating different definitions - Add notion of function types to the IR, so that we can easily query the result type of a function - Add some convenience accessors to allow walking the IR in a strongly-typed manner (e.g., iterate over the parameters of a function) - TODO: these should really be changed to assert the type of things, at least in debug builds - Add very basic logic to `emit.cpp` so that it can walk the generated IR and start printing it back as HLSL - This isn't meant to be usable as-is, but it is a step toward where we need to go 17 August 2017, 20:44:02 UTC
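For readers unfamiliar with the "X macro" trick mentioned above, here is a tiny self-contained sketch of the pattern (the instruction list and macro names are invented, not the actual Slang headers): the instruction list is written once, and each client re-defines `X` to stamp out whatever declarations it needs.
```
// Hypothetical example of the "X macro" pattern: one list, many generated definitions.
#define EXAMPLE_IR_INSTS(X) \
    X(Add)                  \
    X(Mul)                  \
    X(Load)                 \
    X(Call)

// Generate an enum of opcodes from the list...
enum IROp
{
#define X(NAME) kIROp_##NAME,
    EXAMPLE_IR_INSTS(X)
#undef X
    kIROpCount,
};

// ...and, from the same list, a parallel table of opcode names.
static const char* const kIROpNames[] =
{
#define X(NAME) #NAME,
    EXAMPLE_IR_INSTS(X)
#undef X
};

int main()
{
    // kIROpNames[kIROp_Load] is "Load"
    return kIROpNames[kIROp_Load][0] == 'L' ? 0 : 1;
}
```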
1965c3f Merge pull request #169 from tfoleyNV/ir IR generation cleanup work 17 August 2017, 19:18:37 UTC
5de3003 IR generation cleanup work - Make all instructions store their argument count for now, so we can iterate over them easily. - Longer term we might try to optimize for space because the common case is that the operand count is known, but keeping it simpler seems better for now - Split apart the creation of an instruction from adding it to a parent - Use the above capability to make sure that we add a function to its parent *after* all the parameter/result type emission has occurred. - Perform simple value numbering for types during IR creation - This logic also tries to pick a good parent for any type instructions, so that types don't get created local to a function unless they really need to - Create all constants at global scope, and re-use when values are identical 17 August 2017, 18:22:22 UTC
5230ad2 Merge pull request #168 from tfoleyNV/resources-in-structs-control Add a flag to control type splitting 17 August 2017, 17:58:11 UTC
d13bd05 Add a flag to control type splitting The `-split-mixed-types` flag can be provided to command-line `slangc`, and the `SLANG_COMPILE_FLAG_SPLIT_MIXED_TYPE` flag can be passed to `spSetCompileFlags`. Either of these turns on a mode where Slang will split types that include both resource and non-resource fields. The declaration of such a type will just drop the resource fields, while a variable declared using such a type turns into multiple declarations: one for the non-resource fields, and then one for each resource field (recursively). This behavior was already implemented for GLSL support, and this change just adds a flag so that the user can turn it on unconditionally. Caveats: - This does not apply in "full rewriter" mode, which is what happens if the user doesn't use any `import`s. I could try to fix that, but it seems like in that mode people are asking to bypass as much of the compiler as possible. - When it *does* apply, it applies to user code as well as library/Slang code. So this will potentially rewrite the user's own HLSL in ways they wouldn't expect. I don't see a great way around it, though. 17 August 2017, 15:59:30 UTC
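As a rough illustration of the splitting rule described above (one declaration for the non-resource fields, plus one declaration per resource field), here is a small sketch with invented types; it only mimics the naming shape of the output, not the real legalization pass, and it leaves out the recursion into nested struct-typed fields.
```
// Hypothetical sketch: split a variable whose type mixes resource and ordinary
// fields into one declaration for the ordinary part plus one per resource field.
// (The real pass also recurses into nested struct-typed fields.)
#include <string>
#include <vector>

struct FieldDesc
{
    std::string name;
    bool        isResource; // texture/sampler/buffer-typed field?
};

std::vector<std::string> splitVariable(const std::string& varName,
                                       const std::vector<FieldDesc>& fields)
{
    std::vector<std::string> decls;
    bool hasOrdinaryFields = false;
    for (const auto& f : fields)
    {
        if (f.isResource)
            decls.push_back(varName + "_" + f.name); // stand-alone resource declaration
        else
            hasOrdinaryFields = true;
    }
    if (hasOrdinaryFields)
        decls.push_back(varName); // ordinary fields stay together in one declaration
    return decls;
}

int main()
{
    std::vector<FieldDesc> fields = { { "albedoMap", true }, { "tint", false } };
    return splitVariable("material", fields).size() == 2 ? 0 : 1;
}
```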
3dd88c2 Merge pull request #167 from tfoleyNV/ir More work on IR 16 August 2017, 23:25:32 UTC
e30ba2f Fixups for IR checkpoint - The changes introduced a new path where we don't even go through the current "lowering" (really an AST-to-AST legalization pass), but this exposed a few issues I didn't anticipate: - First, we needed to make sure to pass in the computed layout information when emitting the original program (since the layout info is no longer automatically attached to AST nodes) - Second, we needed to take the sample-rate input checks that were being done in lowering before, and move them to the emit logic (which is really ugly, but I don't see a way around it for GLSL). 16 August 2017, 23:04:09 UTC
87c5067 More work on IR With this change, basic generation of IR works for a trivial shader, and there is some basic support for dumping the generated IR in an assembly-like format. As with the other IR change, the use of the IR is statically disabled for now, so that existing users won't be affected. 16 August 2017, 19:44:50 UTC
74e04b8 Merge pull request #166 from tfoleyNV/add-builtins-to-core Add user-defined builtins to the "core" module 15 August 2017, 19:51:38 UTC
6de0a48 Merge pull request #165 from tfoleyNV/ir Starting to add intermediate representation (IR) 15 August 2017, 19:51:14 UTC
d094bfc Add user-defined builtins to the "core" module The Slang API allows an expert user to feed in source code that the compiler then treats as if it came from the Slang "standard library." They can use this to introduce new builtin types, functions, etc. - so long as they are careful, and are willing to deal with the lack of any compatibility guarantees across versions. At some point I split the Slang standard library into distinct modules, so that GLSL and HLSL builtins wouldn't pollute each other's namespace. In that change, I had to decide what module any new user-defined builtins should get added to, and I apparently decided they should go into the module for the Slang language, which would only affect `.slang` files. This doesn't work at all if the user wants to declare new HLSL builtins. I've gone ahead and made user extensions add to the "core" module (which is used by all of HLSL, GLSL, and Slang), but a better long-term fix would be to let the user pick the module/language the new builtins should apply to. 15 August 2017, 19:46:51 UTC
831896f Starting to add intermediate representation (IR) Right now none of this is hooked up, but I want to get things checked in incrementally rather than have long-lived branches. - Added placeholder declarations for IR representation of instructions, basic blocks, etc. - Start adding a `lower-to-ir` pass to translate from AST representation to IR. Again: none of this is functional, so it shouldn't mess with existing users of the compiler. 15 August 2017, 19:21:08 UTC
e6abc68 Merge pull request #162 from tfoleyNV/gh-38 Improve diagnostics for overlapping/conflicting bindings 15 August 2017, 18:09:03 UTC
f64bbf7 Add missing expected output file for test. 15 August 2017, 17:07:49 UTC
4a94f39 Improve diagnostics for overlapping/conflicting bindings Closes #38 - Change overlapping bindings case from error to warning (it is *technically* allowed in HLSL/GLSL) - Make diagnostic messages for these cases include a note to point at the "other" declaration in each case, so that user can more easily isolate the problem - Unrelated fix: make sure `slangc` sets up its diagnostic callback *before* parsing command-line options so that error messages output during options parsing will be visible - Unrelated fix: make sure that formatting for diagnostic messages doesn't print diagnostic ID for notes (all have IDs < 0). - Note: eventually I'd like to not print diagnostic IDs at all (I think they are cluttering up our output), but doing that requires touching all the test cases... 15 August 2017, 16:33:36 UTC
4a9b281 Merge pull request #161 from tfoleyNV/gh-160 Handle possibility of bad types in varying input/output signature. 15 August 2017, 16:28:50 UTC
3a2f191 Handle possibility of bad types in varying input/output signature. Fixes #160 If the front-end runs into a type it doesn't understand in the parameter list of an entry point, it will create an `ErrorType` for that parameter, but then the parameter binding/layout rules will fail to create a `TypeLayout` for the parameter (and return `NULL`). There were some places where the code was expecting that operation to succeed unconditionally, and so would crash when there was a bad type. The specific case in the bug report was when the return type of a shader entry point was bad: // `vec4` is not an HLSL type vec4 main(...) { ... } Note that the specific case in the bug report only manifests in "rewriter" mode (when the Slang compiler isn't allowed to issue error messages from the front-end), but the same basic thing would happen if the varying parameter/output had used a type that is invalid for varying input/output: Texture2D main(...) { ... } I'm not 100% happy with just adding more `NULL` checks for this, because there is no easy way to tell if they are exhaustive. A better solution in the longer term might be to construct a kind of `ErrorTypeLayout` to represent cases where we wanted a type layout, but none could be constructed. 15 August 2017, 15:52:13 UTC
aeb247c Merge pull request #159 from tfoleyNV/name-type Name type 15 August 2017, 01:50:46 UTC
9885c97 Add an explicit `Name` type Fixes #23 Up to this point, the compiler has used the ordinary `String` type to represent declaration names, which means a bunch of lookup structures throughout the compiler were string-to-whatever maps, which can reduce efficiency. It also means that things like the `Token` type end up carrying a `String` by value and paying for things like reference-counting. This change adds a `Name` type that is used to represent names of variables, types, macros, etc. Names are cached and unique'd globally for a session, and the string-to-name mapping gets done during lexing. From that point on, most mapping is keyed on pointers, which should make all the various table lookups faster. More importantly (possibly), this brings us one step closer to being able to pool-allocate the AST nodes. 14 August 2017, 21:48:37 UTC
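A minimal sketch of the interning scheme described above is shown below; the class names are assumptions for illustration (the actual Slang types may differ), but it captures the key property that each spelling maps to one pointer per session.
```
// Minimal sketch with assumed names: intern strings into unique Name objects so
// that later comparisons and hash lookups work on pointers, not string contents.
#include <memory>
#include <string>
#include <unordered_map>

struct Name
{
    std::string text; // the one canonical copy of this spelling
};

class NamePool
{
public:
    // Return the unique Name for `text`, creating it on first use.
    Name* getName(const std::string& text)
    {
        auto it = names.find(text);
        if (it != names.end())
            return it->second.get();
        auto owned = std::make_unique<Name>();
        owned->text = text;
        Name* result = owned.get();
        names.emplace(text, std::move(owned));
        return result;
    }

private:
    std::unordered_map<std::string, std::unique_ptr<Name>> names;
};

int main()
{
    NamePool pool;
    // Two lookups of the same spelling yield the same pointer.
    return pool.getName("foo") == pool.getName("foo") ? 0 : 1;
}
```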
7f57ea4 Rename `Name` fields to `name` This is in preparation for using `Name` as a type name. 14 August 2017, 15:02:03 UTC
bb66d6e Merge pull request #158 from tfoleyNV/syntax-lookup Data-driven parsing of modifiers 12 August 2017, 23:14:48 UTC
f1bda6a Data-driven parsing of modifiers Just like the previous change did for declaration keywords, this change uses the lexical environment to drive the lookup and dispatch of modifier parsing. This allows us to easily add modifiers to Slang, even when they might conflict with identifiers used in user code (because the modifier names are no longer special keywords, but ordinary identifiers). There was already some support for ideas like this with `__modifier` declarations (`ModifierDecl`) used to introduce some GLSL-specific keywords (so that they wouldn't pollute the namespace of HLSL files). The new approach changes these to be actual `syntax` declarations (`SyntaxDecl`) with the same representation as those used to introduce declaration keywords. Because many modifiers just introduce a single keyword that maps to a simple AST node (no further tokens/data), I modified the handling of syntax declarations so that they can take a user-data parameter, and this allows the common case ("just create an AST node of this type...") to be handled with minimal complications. This also adds in a general-purpose string-based lookup path for AST node classes, that should support programmatic creation in more cases. Statements are now the main case of keywords that need to be made table driven. 12 August 2017, 21:50:01 UTC
495f881 Merge pull request #157 from tfoleyNV/syntax-lookup Look up declaration keywords using ordinary scoping. 11 August 2017, 21:04:24 UTC
b2febc7 Look up declaration keywords using ordinary scoping. The existing parser code was doing string-based matching on the lookahead token to figure out how to parse a declaration, e.g.: ``` if(lookAhead == "struct") { /* do struct thing */ } else if(lookAhead == "interface") { /* do interface thing */ } ... ``` That approach has some annoying down-sides: - It is slower than it needs to be - It is annoying to deal with cases where the available declaration keywords might differ by language - Most importantly, it is not possible for us to introduce "extended" keywords that the user can make use of, but which can be ignored by the user and treated as an ordinary identifier. That last part is important. Suppose the user wanted to have a local variable named `import`, but we also had a Slang extension that added an `import` keyword. Then a line of code like `import += 1` would lead to a failure because we'd try to parse an import declaration, even when it is obvious that the user meant their local variable. This would mean that Slang can't parse existing user code that might clash with syntax extensions. This issue is the reason why we currently have keywords like `__import`. A traditional solution in a compiler is to map keywords to distinct token codes as part of lexing, which eliminates the first concern (performance) because now we can dispatch with `switch`. It can also alleviate the second concern if we add/remove names from the string->code mapping based on language (the rest of the parsing logic doesn't have to know about keywords being added/removed). The solution we go for here is more aggressive. Instead of mapping keyword names to special token codes during lexing, we instead introduce logical "syntax declarations" into the AST, which are looked up using the ordinary scoping rules of the language. Depending on what code is imported into the scope where parsing is going on, different keywords may then be visible. This solves our last concern, since a user-defined variable that just happens to use the same name as a keyword is now allowed to shadow the imported declaration for syntax (this is akin to, e.g., Scheme where there really aren't any "keywords"). This also opens the door to the possibility of eventually allowing users to define their own syntax (again, like Scheme). For now I'm only using this for the declaration keywords. With this change it should be pretty easy to also add statement keywords in the same fashion. 11 August 2017, 19:34:58 UTC
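The dispatch idea described above can be sketched as a scope lookup that either finds a syntax declaration (and calls its parse callback) or falls back to ordinary parsing. The sketch below uses invented names and a toy scope type, not the real parser.
```
// Hypothetical sketch (names invented): the parser looks the lookahead identifier
// up in the scope's symbol table and only treats it as a keyword if the lookup
// finds a syntax declaration.
#include <functional>
#include <string>
#include <unordered_map>

struct AstNode { std::string kind; };

// A "syntax declaration": an ordinary named entry in a scope whose payload is
// a callback for parsing the construct that keyword introduces.
using ParseCallback = std::function<AstNode()>;

struct Scope
{
    std::unordered_map<std::string, ParseCallback> syntaxDecls;
};

AstNode parseDecl(Scope& scope, const std::string& lookahead)
{
    auto it = scope.syntaxDecls.find(lookahead);
    if (it != scope.syntaxDecls.end())
    {
        // The identifier names visible syntax (`struct`, `interface`, or a
        // user-introduced keyword), so dispatch to its parse callback.
        return it->second();
    }
    // Otherwise fall back to ordinary declaration parsing, so code like
    // `import += 1;` still treats `import` as a plain identifier when no
    // such keyword is in scope.
    return AstNode{ "ordinary-decl" };
}

int main()
{
    Scope scope;
    scope.syntaxDecls["struct"] = [] { return AstNode{ "struct-decl" }; };
    AstNode a = parseDecl(scope, "struct");  // keyword path
    AstNode b = parseDecl(scope, "import");  // plain identifier path
    return (a.kind == "struct-decl" && b.kind == "ordinary-decl") ? 0 : 1;
}
```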
db4079f Merge pull request #156 from tfoleyNV/source-file-cleanup Make source location lightweight 10 August 2017, 22:25:04 UTC
a5a436c Make source location lightweight Fixes #24 So far the code has used a representation for source locations that is heavy-weight, but typical of research or hobby compilers: a `struct` type containing a line number and a (heap-allocated) string. This is actually very convenient for debugging, but it means that any data structure that might contain a source location needs careful memory management (because of those strings) and has a tendency to bloat. The new representation is that a source location is just a pointer-sized integer. In the simplest mental model, you can think of this as just counting every byte of source text that is passed in, and using those to name locations. Finding the path and line number that corresponds to a location involves a lookup step, but we can arrange to store all the files in an array sorted by their start locations, and do a binary search. Finding line numbers inside a file is similarly fast (once you pay a one-time cost to build an array of starting offsets for lines). More advanced compilers like clang actually go further and create a unique range of source locations to represent a file each time it gets included, so that they can track the include stack and reproduce it in diagnostic messages. I'm not doing anything that clever here. 10 August 2017, 20:05:04 UTC
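A compact sketch of the lookup scheme described above follows; the names are invented, but it shows both binary searches (file by start location, line by start offset) that make the integer representation cheap to map back to path and line info.
```
// Minimal sketch with invented names: a source location is just an integer, and
// mapping it back to a file/line is two binary searches over sorted offsets.
#include <algorithm>
#include <cstddef>
#include <string>
#include <vector>

using SourceLoc = size_t; // one integer names every byte of loaded source

struct SourceFileInfo
{
    std::string         path;
    SourceLoc           startLoc;         // first location assigned to this file
    std::vector<size_t> lineStartOffsets; // built once per file, lazily in practice
};

// Files are kept sorted by startLoc, so file lookup is a binary search.
const SourceFileInfo* findFile(const std::vector<SourceFileInfo>& files, SourceLoc loc)
{
    auto it = std::upper_bound(files.begin(), files.end(), loc,
        [](SourceLoc l, const SourceFileInfo& f) { return l < f.startLoc; });
    if (it == files.begin())
        return nullptr;
    return &*(it - 1);
}

// Line lookup inside a file is another binary search over line start offsets.
size_t findLineNumber(const SourceFileInfo& file, SourceLoc loc)
{
    size_t offset = loc - file.startLoc;
    auto it = std::upper_bound(
        file.lineStartOffsets.begin(), file.lineStartOffsets.end(), offset);
    return size_t(it - file.lineStartOffsets.begin()); // 1-based line number
}

int main()
{
    std::vector<SourceFileInfo> files(1);
    files[0].path = "hello.slang";
    files[0].startLoc = 0;
    files[0].lineStartOffsets = { 0, 10, 25 }; // offsets where each line begins
    const SourceFileInfo* f = findFile(files, 12);
    return (f && findLineNumber(*f, 12) == 2) ? 0 : 1;
}
```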
6e4830f Merge pull request #155 from tfoleyNV/renaming Major naming overhaul: 09 August 2017, 18:20:09 UTC
8e4c0c3 Merge pull request #154 from tfoleyNV/fix-lowering Fix use of "pseudo-syntax" in current lowering pass 09 August 2017, 17:23:29 UTC
695c270 Major naming overhaul: - `ExpressionSyntaxNode` becomes `Expr` - `StatementSyntaxNode` becomes `Stmt` - `StructSyntaxNode` becomes `StructDecl` - `ProgramSyntaxNode` becomes `ModuleDecl` - `ExpressionType` becomes `Type` - Existing fields names `Type` become `type` - There might be some collateral damage here if there were, e.g., `enum`s named `Type`, but I can live with that for now and fix those up as a I see them 09 August 2017, 17:23:09 UTC
a728612 Fix use of "pseudo-syntax" in current lowering pass The so-called "lowering" pass (really a kind of AST-to-AST legalization pass right now) needs to handle some basic scalarization of structured types, and it does this by inventing what I call "pseudo-expressions" and "pseudo-declarations." For example, there is a pseudo-expression node type that represents a tuple of N other expressions, and certain operations act element-wise over such tuples. The problem was that the implementation introduced these out-of-band expression/declaration types into the existing AST hierarchy, which led to a dilemma: - If these new AST nodes were declared like all the others (and integrated into the visitor dispatch approach, etc.) then every pass would need to deal with them even though they are meant to be a transient implementation detail of this one pass - But if the new nodes *aren't* declared like the others, then they can't meaningfully interact with visitor dispatch, and will just crash the compiler if they somehow "leak" through to later passes. And because they are just ordinary AST nodes from a C++ type-system perspective, such leaking is entirely possible (if not probable) Hopefully that setup helps make the solution clear: instead of having the "lowering" pass map an expression to an expression, it needs to map an expression to a new data type (here called `LoweredExpr`) that can wrap *either* an ordinary expression (the common case) or one of the new out-of-band values. Any code that accepts a `LoweredExpr` needs to handle all the cases, or explicitly decide that it can't/won't deal with anything other than ordinary expressions. Most of the code changes are straightforward at that point, although the whole "lowering" approach is a bit fiddly right now, so getting the tests passing took a bit of attention. I'm not sure our test coverage of all this code is great, so I wouldn't be surprised if some failures are lurking still. 09 August 2017, 16:59:50 UTC
9ad2b40 Merge pull request #153 from tfoleyNV/remove-globals Remove uses of global variables 07 August 2017, 22:44:00 UTC
7b54f43 Remove uses of global variables There were two main places where global variables were used in the Slang implementation: 1. The "standard library" code was generated as a string at run-time, and stored in a global variable so that it could be amortized across compiles. 2. The representation of types uses some globals (well, class `static` members) to store common types (e.g., `void`) and to deal with memory lifetime for things like canonicalized types. In each case the "simple" fix is to move the relevant state into the `Session` type that controlled their lifetime already (the `Session` destructor was already cleaning up these globals to avoid leaks). For the standard library stuff this really was easy, but for the types it required threading through the `Session` a bit carefully. One more case that I found: there was a function-`static` variable used to generate a unique ID for files output when dumping of intermediates is enabled (this is almost strictly a debugging option). Rather than make this counter per-session (which would lead to different sessions on different threads clobbering the same few files), I went ahead and used an atomic in this case. Note that the remaining case I had been worried about was any function-`static` counter that might be used in generating unique names. It turns out that right now the parser doesn't use such a counter (even in cases where it probably should), and the lowering pass already uses a counter local to the pass (again, whether or not this is a good idea). This change should be a major step toward allowing an application to use Slang in multiple threads, so long as each thread uses a distinct `SlangSession`. The case of using a single session across multiple threads is harder to support, and will require more careful implementation work. 07 August 2017, 22:16:54 UTC
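A before/after sketch of the pattern described above (with invented member names): globals that used to be shared process-wide become state owned by the `Session`, so two sessions on different threads stop sharing mutable data. This is only an illustration of the idea, not the real Slang `Session` interface.
```
// Before/after sketch (names invented): per-process globals become per-session state.
#include <memory>
#include <string>

struct Type {};

// Before: process-wide globals, shared (and cleaned up) implicitly.
//   static std::string gStdLibCode;
//   static Type*       gVoidType;

// After: the same state is owned by the session that created it.
class Session
{
public:
    const std::string& getStdLibCode()
    {
        if (stdLibCode.empty())
            stdLibCode = generateStdLibCode(); // generated once per session
        return stdLibCode;
    }

    Type* getVoidType()
    {
        if (!voidType)
            voidType = std::make_unique<Type>(); // canonical `void` for this session
        return voidType.get();
    }

private:
    static std::string generateStdLibCode() { return "/* stdlib source */"; }

    std::string           stdLibCode;
    std::unique_ptr<Type> voidType;
};

int main()
{
    Session a, b;
    // Each session owns its own canonical types and stdlib text.
    return a.getVoidType() != b.getVoidType() ? 0 : 1;
}
```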
ca8eea9 Update README.md Typo fix. 27 July 2017, 16:49:49 UTC
0045a63 Merge pull request #149 from tfoleyNV/docs Update documentation. 26 July 2017, 22:52:40 UTC
92463c8 Update documentation. - Update readme to fill out some of the `TODO` sections - Add an API user's guide that gives the basics of linking against Slang and using it to compile and reflect shaders - Add a bit of usage info for the command-line `slangc` program - Add an overview of the Slang language as it stands today - Add an initial FAQ, mostly to help answer the "why should I use this?" question 26 July 2017, 21:56:45 UTC
174bc66 Merge pull request #148 from shader-slang/add-code-of-conduct Add Code of Conduct 26 July 2017, 21:42:19 UTC
213d609 Create CODE_OF_CONDUCT.md 26 July 2017, 21:41:11 UTC
155693a Merge pull request #146 from tfoleyNV/output-path-option Add a `-o` option to command-line `slangc` 26 July 2017, 01:22:12 UTC
941b834 Add a `-o` option to command-line `slangc` Fixes #11 - This adds a `-o` command-line option for specifying an output file. - The code tries to be a bit smart, to glean an output format from a file extension, and also to associate multiple `-o` options with multiple `-entry` options if needed. - There is a restriction that all the output files need to agree on the code generation target. This is reasonable for now, but might be something to lift eventually - There is a restriction that only one output file is allowed per entry point - Together with the previous item this means you can't output both a `.spv` and a `.spv.asm` in one pass, even though both should be possible - There is currently a restriction that output paths only apply to entry points - This means there is no way to output reflection JSON to a file with `-o` (but that is mostly just a debugging feature for now) - This also means we don't support any "container" formats that can encapsulate multiple compiled entry points 25 July 2017, 23:21:23 UTC
29fb1d7 Merge pull request #143 from tfoleyNV/fixup-for-glslang-bug-fixup Fixup for the glslang bug workaround 24 July 2017, 02:59:29 UTC
e7241a9 Fixup for the glslang bug workaround There was a bug where the initialization expression for a variable was being lowered after the declaration was added to the output code, so that any sub-expressions that get hoisted out actually get computed *after* the original variable. This obviously led to downstream compilation failure. I've updated the test case to stress this scenario. 24 July 2017, 02:52:14 UTC
1d4633f Merge pull request #142 from tfoleyNV/glslang-bug-workaround Work around glslang issue 988 24 July 2017, 01:38:32 UTC
d4cfe57 Work around glslang issue 988 The basic bug there is that if you have a member of `struct` type in a `uniform` block and then pass a reference to that member directly to a call: ``` struct Foo { vec4 bar; }; uniform U { Foo foo; }; void main() { doSomething(foo); } ``` then glslang generates invalid SPIR-V which seems to cause an issue for some drivers. This change works around the problem by detecting cases where an argument to a function call is a reference to `uniform` block member (of `struct` type) and then rewrites the code to move that value to a temporary before the call. 24 July 2017, 01:37:52 UTC
376e61a Merge pull request #136 from tfoleyNV/explicit-sampler-fix Make the "hack" sampler explicit for now 22 July 2017, 21:07:19 UTC
edf8caf Make the "hack" sampler explicit for now - We use this to work around the fact that, e.g., `Texture2D.Load` doesn't take a sampler, but the equivalent GLSL operation `texelFetch` requires one - Previously we tried to hide the sampler from the user, hoping that glslang would drop it and we could just ignore it, but that doesn't work - For now we'll go ahead and explicitly show the sampler in the reflection info so that an app can react appropriately - We also generate a unique binding for the sampler, instead of the old behavior that fixed it with `binding = 0` - We still fix it with `set = 0`, so it might still surprise users 22 July 2017, 21:05:07 UTC
30fbf64 Merge pull request #135 from tfoleyNV/falcor-hashed-alpha-fixups Map HLSL `frac()` to GLSL `fract()` 21 July 2017, 20:48:58 UTC
5a63cb0 Map HLSL `frac()` to GLSL `fract()` 21 July 2017, 20:48:12 UTC
6992d0e Merge pull request #134 from tfoleyNV/gh-133 Don't add `flat` qualifier to integer fragment output 21 July 2017, 19:31:18 UTC
0048a81 Don't add `flat` qualifier to integer fragment output Fixes #133 We already had logic to skip adding `flat` to a vertex input, and this just extends it to not adding `flat` to a fragment output. Note that explicit qualifiers in the input HLSL/Slang will still be carried through to the output, so it is still possible for a Slang user to shoot themself in the foot with interpolation qualifiers. 21 July 2017, 19:19:01 UTC
3fa85ed Merge pull request #132 from tfoleyNV/line-directive-control Add an API option to control emission of `#line` directives 21 July 2017, 18:52:17 UTC
9718ab3 Add an API option to control emission of `#line` directives - API users can use this to get "clean" output to aid with debugging Slang issues - Also changes the prefix on intermediate files that Slang dumps, to make them easier to ignore with a regexp 21 July 2017, 18:37:31 UTC
85f60e0 Merge pull request #130 from tfoleyNV/vs-write-gllayer Require extension when using `gl_Layer` in VS 20 July 2017, 22:20:25 UTC
0cf9bcf Require extension when using `gl_Layer` in VS The requirements for using `gl_Layer` differ by stage, and so we need to pick an appropriate GL version based on the target stage, and then also require a specific extension for anything other than geometry or fragment. 20 July 2017, 22:14:35 UTC
b8478a4 Merge pull request #129 from tfoleyNV/single-pass-stereo-ext Translate NV single-pass stereo extension from Slang to GLSL 20 July 2017, 17:13:54 UTC
76dca35 Translate NV single-pass stereo extension from Slang to GLSL - The easy part here is treating `NV_` prefixed semantics as another case of "system-value" semantics - Mapping the new semantics (`NV_X_RIGHT` and `NV_VIEWPORT_MASK`) to their GLSL equivalents is harder - Instead of a single "right-eye vertex" output, GLSL defines an array of per-view positions - Instead of a vector of masks, GLSL defines an array of per-view masks - Another point here is that a lot of semantics that appear as `uint` in HLSL are `int` in GLSL, which can lead to conversion issues. - The approach here is to have the lowering pass introduce a notion of assignment with "fixups," which will try to cast things as needed - When assigning to a simple value with the "wrong" type, introduce a cast - When assigning to an array from a vector, break out multiple assignments of individual vector/array elements - In order to facilitate the above, I needed to add actual types to the magic expressions I introduce to represent GLSL builtin variables. These were taken by scanning the online documentation for GL, so they might not be perfect. - Major issues with the approach in this change: - No attempt is being made here to check that the original declaration used a type appropriate to the semantic. The assumption is that this logic only ever triggers for Slang entry points, or GLSL entry points using a Slang `struct` type for input/output (and for right now Slang code is only ever written by "understanding" developers) - In the case of a Slang entry point, we always copy varying parameters in/out around the call to `main_`, so this approach should handle calls to functions with `out` or `in out` parameters okay, but it is *not* robust to cases where we don't want to copy in all the entry point parameters first thing (e.g., a GS), so that will have to change - In the GLSL case (or if we revise the approach to Slang entry points), there is going to be a problem if these converted varying parameters are ever passed as arguments to `out` or `in out` parameters. In these cases we need to do more sleight-of-hand to reify a temporary variable and do the necessary copy-in/copy-out. Being able to do that logic relies on having correct information about callees, which requires having robust semantic analysis of the function body. There is only so much we can do... - A better long-term approach would not rely on an ad-hoc "fixup" conversion during assignment, but would instead implement the GLSL builtin variables as, effectively, global "property" declarations that have both `get` and `set` accessors, and then tunnel a reference to such a property down through lowering, where it can lower to uses of the "getter" or "setter" as appropriate in context (and the result type of the getter/setter can be what we'd want/expect). 20 July 2017, 17:09:25 UTC
f07c01c Merge pull request #128 from tfoleyNV/improve-failure-modes Try to improve handling of failures during compilation 20 July 2017, 01:52:38 UTC
3f48e1c Try to improve handling of failures during compilation The change is mostly about trying to make sure the compiler "fails safe" when it encounters an internal assumption that isn't met. Most internal errors will now throw exceptions (yes, exceptions are evil, but this will work for now), and these get caught in `spCompile` so that they don't propagate to the user (they just see a message that compilation aborted due to an internal error). Subsequent changes are going to need to work on diagnosing as many of these situations as possible, so that users can at least know what construct in their code was unexpected or unhandled by the compiler. 20 July 2017, 01:15:37 UTC
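The "fail safe" behavior described above, where internal errors raise exceptions that are caught at the public entry point, might look something like this sketch; the function names are placeholders, not the real `slang.h` API.
```
// Illustrative sketch: internal errors throw, and the public entry point catches
// them so the caller only sees a failure code plus an "internal error" diagnostic.
#include <cstdio>
#include <stdexcept>

struct InternalCompilerError : std::runtime_error
{
    using std::runtime_error::runtime_error;
};

// Stand-in for the real compilation pipeline.
static void compileImpl() { throw InternalCompilerError("unhandled construct"); }

// Stand-in for the public C API entry point: exceptions must not escape it.
extern "C" int spCompile_sketch()
{
    try
    {
        compileImpl();
        return 0;  // success
    }
    catch (const InternalCompilerError& e)
    {
        std::fprintf(stderr, "internal error: compilation aborted (%s)\n", e.what());
        return -1; // failure reported to the user, no exception escapes
    }
}

int main() { return spCompile_sketch() == -1 ? 0 : 1; }
```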
a2b8b4c Merge pull request #127 from tfoleyNV/deployment Use AppVeyor to deploy to GitHub Releases 19 July 2017, 21:54:32 UTC
6c71681 Use AppVeyor to deploy to GitHub Releases - Change naming convention for output directory - `windows-x86` and `windows-x64` - Lower-case for `debug`, `release` - Test script needed to be patched up for this - Add packaging and deployment logic to AppVeyor CI script - Trigger deployment on new tag - Compute a release version based on Git tag name (`vX.Y.Z` becomes `X.Y.Z`) - Fallback to hash in case tag isn't available (it should always be) - Bundle files into a few artifacts (binary package + source package) - Set up to push those files into existing GitHub tag/release 19 July 2017, 21:28:26 UTC
019bc0d Merge pull request #126 from tfoleyNV/dynamic-library-build Build a dynamic library for Slang 19 July 2017, 18:52:59 UTC
a97eac2 Build a dynamic library for Slang - Change the `slang` project from a static library to a dynamic one - Add some details around `slang.h` to make sure DLL export stuff is working - Make the `slangc` executable use the dynamic library - Rename the `glslang` sub-project to `slang-glslang` and move it into the main source hierarchy - This reflects the fact that it isn't a stand-alone tool, and isn't in any way a standard binary of glslang, but rather just an artifact of how Slang uses glslang 19 July 2017, 18:27:23 UTC