https://github.com/shader-slang/slang

4e3e7f2 Disable CUDA active mask tests as failing on CI. (#1367) 03 June 2020, 13:41:31 UTC
d386e27 Added spGetBuildTagString. (#1365) 02 June 2020, 19:26:51 UTC
f87b632 Make stdlib path just be the filename. (#1364) * Made bad-operator-call available on all targets. Fixed the line filename to not include the path, to avoid absolute paths whose value would be host-environment dependent (causing tests to fail). * Disabled on Linux because there's still a problem on gcc x86 where the file path is different. * Fixed some typos in bad-operator-call.slang * Fixed the diagnostic for bad-operator-call.slang 02 June 2020, 18:05:35 UTC
926a0c5 Working matrix swizzle (#1354) * Working matrix swizzle. Supports one and zero indexing and multiple elements. Performs semantic checking of the swizzle. Matrix swizzles are transformed into a vector of indexing operations during lowering to the IR. This change does not handle matrix swizzle as lvalues. * Renaming * Added missing semicolon * Initialize variable for gcc * Added the expect file for diagnostics * Matrix swizzle updated per PR feedback * Stylistic fix * Formatting fixes * Fix compiling with AST change. Change indentation. Co-authored-by: jsmall-nvidia <jsmall@nvidia.com> 02 June 2020, 16:12:35 UTC
8acb704 Fix a bug with ray tracing from a fragment shader (#1362) * Added GLSL_460 if ray tracing is used in a fragment shader. Moved the GLSL-specific setup init function. * Split out the _requireRayTracing method. 29 May 2020, 19:56:28 UTC
9773495 NodeBase types constructed with astNodeType member set (#1363) * Add astNodeType field to NodeBase * Make Substitutions derive from NodeBase * Make all construction go through ASTBuilder * Make getClassInfo non-virtual (it just uses the astNodeType) 29 May 2020, 18:26:48 UTC
45e414f Update target compat doc to include Wave intrinsics for CUDA (#1361) 29 May 2020, 15:01:24 UTC
f3d7042 Feature/ast syntax standard (#1360) * Small improvements to documentation and code around DiagnosticSink * Made methods/functions in slang-syntax.h lowerCamel. Removed some commented-out source (it was placed elsewhere in the code). * Made AST-related methods and functions lowerCamel. Made IsLeftValue -> isLeftValue. 29 May 2020, 12:36:10 UTC
95597d7 Small improvements to documentation and code around DiagnosticSink (#1359) 28 May 2020, 19:32:19 UTC
c2d3134 WIP: ASTBuilder (#1358) * Compiles. * Small tidy-up around Session/ASTBuilder. * Tests are now passing. * Fix Visual Studio project. * Fix uses of new X to use the builder where protectedness of the ctor is not enough. Substitute -> substitute * Add some missing AST nodes created outside of ASTBuilder. * Compile-time check that ASTBuilder is making an AST type. * Moved findClassInfo and findSyntaxClass (essentially the same thing) to SharedASTBuilder from Session. 28 May 2020, 18:01:51 UTC
e5d0f33 Synthesize "active mask" for CUDA (#1352) * Synthesize "active mask" for CUDA The Big Picture =============== The most important change here is to `hlsl.meta.slang`, where the declaration of `WaveGetActiveMask()` is changed so that instead of mapping to `__activemask()` on CUDA (which is semantically incorrect) it maps to a dedicated IR instruction. The other `WaveActive*()` intrinsics that make use of the implicit "active mask" concept had already been changed in #1336 so that they explicitly translate to call the equivalent `WaveMask*()` intrinsic with the result of `WaveGetActiveMask()`. As a result, all of the `WaveActive*()` functions are now no different from a user-defined function that uses `WaveGetActiveMask()`. The bulk of the work in this change goes into an IR pass that replaces the new instruction for getting the active mask with appropriately computed values before we generate output CUDA code. That work is in `slang-ir-synthesize-active-mask.{h,cpp}`. Utilities ========= There are a few pieces of code that were helpful in writing the main pass but that can be explained separately: * IR instructions were added corresponding to the Slang `WaveMaskBallot()` and `WaveMaskMatch()` functions, which map to the CUDA `__ballot_sync()` and `__match_any_sync()` operations, respectively. These are only implemented for the CUDA target because they are only being generated as part of our CUDA-only pass. * The `IRDominatorTree` type was updated to make it a bit more robust in the presence of unreachable blocks in the CFG. It is possible that the same ends could be achieved more efficiently by folding the corner cases into the main logic, but I went ahead and made things very explicit for now. * I added an `IREdge` utility type to better encapsulate the way that certain code operating on the predecessors/successors of an `IRBlock` was using an `IRUse*` to represent a control-flow edge. 
The `IREdge` type makes the logic of those operations more explicit. A future change should probably change it so that `IRBlock::getPredecessors()` and `getSuccessors()` are instead `getIncomingEdges()` and `getOutgoingEdges()` and work as iterators over `IREdge` values, given the way that the predecessor and successor lists today can contain duplicates. * Using the above `IREdge` type, the logic for detecting and breaking critical edges was broken down into something that is a bit more clear (I hope), and that also factors out the breaking of an edge (by inserting a block along it) into a reusable subroutine. The Main Pass ============= The implementation of the new pass is in `slang-ir-synthesize-active-mask.cpp`, and that file attempts to include enough comments to make the logic clear. A brief summary for the benefit of the commit history: * The first order of business is to identify functions that need to have the active mask value piped into them, and to add an additional parameter to them so that the active mask is passed down explicitly. Call sites are adjusted to pass down the active mask, which can then result in new functions being identified as needing the active mask. * The next challenge is, for a function that uses the active mask, to compute the active mask value to use in each basic block. The entry block can easily use the active mask value that was passed in, while other blocks need more work. * When doing a conditional branch, we can compute the new mask for the block we branch to as a function of the existing mask and the branch condition. E.g., the value `WaveMaskBallot(existingMask, condition)` can be used as the mask for the "then" block of an `if` statement. 
* When control flow paths need to "reconverge" at a point after a structured control-flow statement, we need to insert logic to synchronize and re-build the mask that will execute after the statement, while also excluding any lanes/threads that exited the statement in other ways (e.g., an early `return` from the function). The explanation here is fairly hand-wavy, but the actual pass uses much more crisp definitions, so the code itself should be inspected if you care about the details. Tests ===== The tests for the new feature are all under `tests/hlsl-intrinsic/active-mask/`. Most of them stress a single control-flow construct (`if`, `switch`, or loop) and write out the value of `WaveGetActiveMask()` at various points in the code. Our definition of the active mask doesn't always agree with what D3D/Vulkan implementations seem to produce in practice, and as a result a certain amount of effort has gone into adding tweaks to the tests that force them to produce the expected output on existing graphics APIs. These tweaks usually amount to introducing conditional branches that aren't actually conditional in practice (the branch condition is always `true` or always `false` at runtime), in order to trick some simplistic analysis approaches that downstream compilers seem to employ. One test case currently fails on our CUDA target (`switch-trivial-fallthrough.slang`) and has been disabled. This is an expected failure, because making it produce the expected value requires a bit of detailed/careful coding that would add a lot of additional complexity to this change. It seemed better to leave that as future work. Future Work =========== * As discussed under "Tests" above, the handling of simple `switch` statements in the current pass is incomplete. * There's an entire can of worms to be dealt with around the handling of fall-through for `switch`. 
* The current work also doesn't handle `discard` statements, which is unimportant right now (CUDA doesn't have fragment shaders), but might matter if we decide to synthesize masks for other targets. Similar work would probably be needed if we ever have `throw` or other non-local control flow that crosses function boundaries. * An important optimization opportunity is being left on the floor in this change. When a block that comes "after" a structured control-flow region (which is encoded explicitly in Slang IR and SPIR-V) post-dominates the entry block of the region, then we know that the active mask when exiting the region must be the same as the mask when entering the region, and there is no need to insert explicit code to cause "re-convergence." This should be addressed in a follow-on change once we add code to Slang for computing a post-dominator tree from a function CFG. * Related to the above, the decision-making around whether a basic block "needs" the active mask is perhaps too conservative, since it decides that any block that precedes one needing the active mask also needs it. This isn't true in cases where the active mask for a merge block can be inferred by post-dominance (as described above), so that the blocks that branch to it don't need to compute an active mask at all. * If/when we extend the CPU target to support these operations (along with SIMD code generation, I assume), we will also need to synthesize an active mask on that platform, but the approach taken here (which pretty much relies on support for CUDA "cooperative groups") wouldn't seem to apply in the SIMD case. * Similarly, the approach taken to computing the active mask here requires a new enough CUDA SM architecture version to support explicit cooperative groups. If we want to run on older CUDA-supporting architectures, we will need a new and potentially very different strategy. 
* Because the new pass here changes the signature of functions that require the active mask (and not those that don't), it creates possible problems for generating code that uses dynamic dispatch (via function pointers). In principle, we need to know at a call site whether or not the callee uses the active mask. There are multiple possible solutions to this problem, and they'd need to be worked through before we can make the implicit active mask and dynamic dispatch be mutually compatible. * Related to changing function signatures: no effort is made in this pass to clean up the IR type of the functions it modifies, so there could technically be mismatches between the IR type of a function and its actual signature. If/when this causes problems for downstream passes we probably need to do some cleanup. * fixup: backslash-escaped lines I did some "ASCII art" sorts of diagrams to explain cases in the CFG, and some of those diagrams used backslash (`\`) characters as the last character on the line, causing them to count as escaped newlines for C/C++. The gcc compiler apparently balked at those lines, since they made some of the single-line comments into multi-line comments. I solved the problem by adding a terminating column of `|` characters at the end of each line that was part of an ASCII art diagram. * fixup: typos Co-authored-by: jsmall-nvidia <jsmall@nvidia.com> 26 May 2020, 22:11:22 UTC
b136904 Improvements around hashing (#1355) * Fields from upper to lower case in slang-ast-decl.h * Lower camel field names in slang-ast-stmt.h * Fix fields in slang-ast-expr.h * slang-ast-type.h make fields lowerCamel. * slang-ast-base.h member functions lowerCamel. * Method names in slang-ast-type.h to lowerCamel. * GetCanonicalType -> getCanonicalType * Substitute -> substitute * Equals -> equals ToString -> toString * ParentDecl -> parentDecl Members -> members * Make hash code types explicit * Use HashCode as return type of GetHashCode * Added conversion from double to int64_t * Split Stable from other hash functions * toHash32/64 to convert a HashCode to the other styles. GetHashCode32/64 -> getHashCode32/64 GetStableHashCode32/64 -> getStableHashCode32/64 * Other Get/Stable/HashCode32/64 fixes * GetHashCode -> getHashCode * Equals -> equals * CreateCanonicalType -> createCanonicalType * Catches of polymorphic types should be through references; otherwise slicing can occur. * Fixes for newer version of gcc. Fix hashing problem on gcc for Dictionary. * Another fix for GetHashPos * Fix signed issue around GetHashPos 26 May 2020, 17:53:10 UTC
ee2ec68 Merge pull request #1356 from csyonghe/master Remove non-ascii characters from source file. 25 May 2020, 21:53:15 UTC
fd28dcf Remove non-ascii characters from source file. 25 May 2020, 20:07:26 UTC
076a4c0 Tidy up around AST nodes (#1353) * Fields from upper to lower case in slang-ast-decl.h * Lower camel field names in slang-ast-stmt.h * Fix fields in slang-ast-expr.h * slang-ast-type.h make fields lowerCamel. * slang-ast-base.h member functions lowerCamel. * Method names in slang-ast-type.h to lowerCamel. * GetCanonicalType -> getCanonicalType * Substitute -> substitute * Equals -> equals ToString -> toString * ParentDecl -> parentDecl Members -> members 22 May 2020, 18:21:37 UTC
daf53bb Non-virtual accept implementation on AST types (#1351) * First pass impl of making accept on AST node types non-virtual. * A single switch for ITypeVisitor on Val type. * Use ORIGIN to choose ITypeVisitor dispatch. * Don't use ORIGIN - we don't need special handling for ITypeVisitor on Val-derived types. 21 May 2020, 19:36:02 UTC
d90ae36 AST dump improvements (#1350) * Add support for parsing array types to C++ extractor. * C++ extractor looks for 'balanced tokens'. Use for extracting array suffixes. * First pass at field dumping. * Update project for field dumping. * WIP AST Dumper. * More AST dump compiling. * Fix bug in StringSlicePool where it doesn't use the copy of the UnownedStringSlice in the map. * Add support for SLANG_REFLECTED and SLANG_UNREFLECTED More AST dump support. * Support for hierarchical dumping/flat dumping. Use SourceWriter to dump. * Add -dump-ast command line option. * Add fixes to VS project to include AST dump. * Fix compilation on gcc. * Add fix for type ambiguity issue on x86 VS. * Fixes from merge of reducing Token size. * Fix comment about using SourceWriter. * Improvements to AST dumping: * Pointers (written as hex) * Scope * Turn off some unneeded fields in AST hierarchy * Only output the initial module in full. 21 May 2020, 18:06:18 UTC
96a00c8 AST dumping via C++ Extractor reflection (#1348) * Add support for parsing array types to C++ extractor. * C++ extractor looks for 'balanced tokens'. Use for extracting array suffixes. * First pass at field dumping. * Update project for field dumping. * WIP AST Dumper. * More AST dump compiling. * Fix bug in StringSlicePool where it doesn't use the copy of the UnownedStringSlice in the map. * Add support for SLANG_REFLECTED and SLANG_UNREFLECTED More AST dump support. * Support for hierarchical dumping/flat dumping. Use SourceWriter to dump. * Add -dump-ast command line option. * Add fixes to VS project to include AST dump. * Fix compilation on gcc. * Add fix for type ambiguity issue on x86 VS. * Fixes from merge of reducing Token size. * Fix comment about using SourceWriter. 20 May 2020, 14:56:49 UTC
c54c957 Reduce the size of Token (#1349) * Token size on 64 bits is 24 bytes (from 40). On 32 bits is 16 bytes from 24. * Added hasContent method to Token. Some other small improvements around Token. 19 May 2020, 19:37:40 UTC
1de1431 Use 'balance' for extracting array suffix in C++ extractor (#1346) * Add support for parsing array types to C++ extractor. * C++ extractor looks for 'balanced tokens'. Use for extracting array suffixes. 18 May 2020, 19:24:05 UTC
4397d90 Add a few missing enum cases to slang.h (#1347) These are cases where the C-style API had the appropriate `SLANG_*` case defined, but the wrapper C++ `enum class` declaration didn't have a matching case. 15 May 2020, 21:42:02 UTC
d41f5d4 Change to make a single implementation of SLANG_ABSTRACT_CLASS and SLANG_CLASS, to simplify macro injection and not require macro redefinition in each file. (#1345) 14 May 2020, 22:28:50 UTC
daaae74 Add support for parsing array types to C++ extractor. (#1343) 14 May 2020, 18:21:38 UTC
f5dfa1e Add GLSL translation for HLSL fmod() (#1342) The existing code was assuming `fmod()` was available as a builtin in GLSL, which isn't true. It also isn't possible to translate the HLSL `fmod()` to the GLSL `mod()` since the two have slightly different semantics. This change introduces a definition for `fmod(x,y)` that amounts to `x - y*trunc(x/y)` which should agree with the HLSL version except in corner cases (e.g., there are some cases where the HLSL version returns `-0` and this one will return `0`). 11 May 2020, 21:57:05 UTC
798f3bc AST nodes using C++ Extractor (#1341) * Extractor builds without any reference to syntax (as it will be helping to produce this!). * Change macros to include the super class. * WIP replacing defs files. * Added indexOf(const UnownedSubString& in) to UnownedSubString. Refactored extractor * Output a macro for each type with the extracted info - can be used during injection in class * Simplify the header file - as can get super type and last from macro now * Store the 'origin' of a definition * Some small tidy-ups to the extractor. * Improve comments on the extractor options. * Made CPPExtractor own SourceOrigins * Small fixes around SourceOrigin. * Small tidy up around macroOrigin * WIP Visitor seems now to work correctly. Split out types used by the AST into slang-ast-support-types.h * Fix remaining problems with C++ extractor being used with AST nodes. Add CountOf to extractor type ids. Added ReflectClassInfo::getInfo to turn an ASTNodeType into a ReflectClassInfo * Fix compiling on linux. Fix typo in memset. * Small tidy up around comments/layout. Moved NodeBase casting to NodeBase. * Make premake generate a project that builds with cpp-extractor for AST. * Get the source directory from the filter in premake. * Fix typo in source path * Explicitly set the source path for premake generation for AST. * Special-case handling of override to appease Clang. * Use a more general way to find the slang-ast-reflect.h file to run the extractor. * Appveyor is not triggering slang-cpp-extractor - try putting dependson together. * Put building slang-cpp-extractor first. * Disable some project options to stop MSBuild producing internal compiler errors. * Try reordering the projects in premake5.lua * Hack to try and make slang-cpp-extractor build on appveyor. * Disable flags - not required for MSBuild on appveyor. * Disable flags not required for build on AppVeyor. * Updated Visual Studio projects with slang-cpp-extractor. * Added Visual Studio slang-cpp-extractor project. 
08 May 2020, 18:31:40 UTC
c16abd4 Enhanced C++ extractor (#1340) * Extractor builds without any reference to syntax (as it will be helping to produce this!). * Change macros to include the super class. * Added indexOf(const UnownedSubString& in) to UnownedSubString. Refactored extractor * Output a macro for each type with the extracted info - can be used during injection in class * Simplify the header file - as can get super type and last from macro now * Store the 'origin' of a definition * Some small tidy-ups to the extractor. * Improve comments on the extractor options. * Made CPPExtractor own SourceOrigins * Small fixes around SourceOrigin. * Small tidy up around macroOrigin 07 May 2020, 19:00:33 UTC
9245460 Try to stop AppVeyor from clobbering release logs (#1339) We trigger releases by creating tags on GitHub, and ideally we try to include a commit log of stuff that changed since the previous release. Unfortunately, the way that AppVeyor CI is currently set up, it clobbers the information for a release (including its description) as part of the deployment step, so all releases just list `Slang <version>` as the description even if the programmer manually created a suitable release log message. This change tweaks a setting in `appveyor.yml` that seems (AFAICT from the documentation and various issues I've been reading) to make it so that AppVeyor will simply push the binary artifacts to the release if it already exists, rather than set all of the release information from scratch. We will have to wait until we next do a release to see how this works. 05 May 2020, 16:36:34 UTC
2dcfce4 Disable a test that was breaking on CI (#1338) The CI system doesn't have a new enough dxc to support 16-bit load/store from byte-addressed buffers, so I am disabling the test for now. A better long-term fix is to put an appropriate version of dxc into `slang-binaries` and use that for our CI tests. 04 May 2020, 20:10:35 UTC
f599788 C++ Extractor (#1337) * WIP: Doing lexing using slang's lexer for cpp-extractor * Node tree for C++ extraction. * Bug fixing. Add dump of hierarchy. * First pass at extracting fields. * Parse template types. * Use diagnostics defs for C++ extractor. * Simplify Diagnostic Defs. * Remove the brace stack. * Added IdentifierLookup. * Add handling for >> style template close. * Improved identifier handling/keywords. * Added ability to check if reader is at cursor position. * Handling of an unspecified root type. * Parsing code comments. Tidy up some parsing - to use advanceIf functions more. * Improve path handling. * Fixes around changes to Path interface. * Working Range, Type and Scope header. * Extract the middle part of marker and put in output. Gives more flexibility at macro injection, and in class definitions. * Split DERIVED types into its own macro, to provide a way to generate for derived types. * Fix clang/g++ compile issue. * Tabs -> spaces. * Fix small bug in getFileNameWithoutExt * Small improvement around naming. Co-authored-by: Tim Foley <tfoleyNV@users.noreply.github.com> 04 May 2020, 17:46:24 UTC
5d3a737 Make stdlib WaveActive* call WaveMask* (#1336) This change makes the various `WaveActive*()` functions have default implementations that call `WaveMask*()` passing `WaveGetActiveMask()`. The new definitions will be used during CUDA code generation, which simplifies some of the duplication that was occurring in the `__target_intrinsic` modifiers. This change does *not* add logic to make computation of `WaveGetActiveMask()` correct on CUDA, so these functions will still fail to provide the behavior that users need/expect. A future change will need to add logic to synthesize the value of `WaveGetActiveMask()` automatically. 04 May 2020, 16:06:55 UTC
c697fe5 Improve GLSL coverage of boolean binary ops (#1335) * Improve GLSL coverage of boolean binary ops This change ensures that `&&`, `||`, `&`, `|`, and `^` apply correctly to vectors of `bool` values when targeting GLSL. Most of the changes are in the GLSL emit path, where the IR instructions for these operators are bottlenecked through a small set of helper routines to cover the different cases. In general: * The vector variants of the operations are implemented by casting to `uint` vectors, performing bitwise ops, then casting back * The scalar variants are handled by converting the bitwise operations to their equivalent logical operator (the one interesting case there is bitwise `^`, where the equivalent logical operation on `bool` is `!=`) This change makes it clear that our IR really shouldn't have distinct opcodes for logical vs. bitwise and/or/xor, and instead should just have a single family of operations where the behavior differs based on the type of the operand. That is already *de facto* the way things work (a user can always write `&`, `|` and `^` and expect them to work on `bool` and vectors of `bool`), so the GLSL output path has to deal with the overlap. Having two sets of IR ops here actually makes for more code instead of less. * Fixups: review feedback and test ! operator 01 May 2020, 15:53:11 UTC
bd8562c Add support for generic load/store on byte-addressed buffers (#1334) * Add support for generic load/store on byte-addressed buffers Introduction ============ The HLSL `*ByteAddressBuffer` types originally only supported loading/storing `uint` values or vectors of the same, using `Load`/`Load2`/`Load3`/`Load4` or `Store`/`Store2`/`Store3`/`Store4`. More recent versions of dxc have added support for generic `Load<T>` and `Store<T>`, which adds two main pieces of functionality for users. The first and more fundamental feature is that `T` can be a type that isn't 32 bits in size (or a vector with elements of such a type), thus exposing a capability that is difficult or impossible to emulate on top of 32-bit load/store (depending on what guarantees `*StructuredBuffer` makes about the atomicity of loads/stores). The secondary benefit of having a generic `Load<T>` and `Store<T>` is that it becomes possible to load/store types like `float` without manual bit-casting, and also becomes possible to load/store `struct` types so long as all the fields are loadable/storable. This change adds generic `Load<T>` and `Store<T>` to the Slang standard library definition of byte-address buffers, and tries to bring those same benefits to as many targets as possible. In particular, the secondary benefits become available on all targets, including DXBC: byte-address buffers can be used to directly load/store types other than `uint`, including user-defined `struct` types, so long as all of the fields of those types can be loaded/stored. The ability to load/store non-32-bit types depends on target capabilities, and so is only available where direct support for those types is available. For 16-bit types like `half` this includes both Vulkan and D3D12 DXIL with appropriate extensions or shader models. The implementation is somewhat involved, so I will try to explain the pieces here. 
Standard Library ================ The changes to the Slang standard library in `hlsl.meta.slang` are pretty simple. We add new `Load<T>` and `Store<T>` generic methods to `*ByteAddressBuffer`, and route them through to a new IR opcode. Right now the generic `Load<T>` and `Store<T>` do *not* place any constraints on the type `T`, although in practice they should only work when `T` is a fixed-size type that only contains "first class" uniform/ordinary data (so no resources, unless the target makes resource types first class). Our front-end checking cannot currently represent first-class-ness and validate it (nor can it represent fixed-size-ness), so these gaps will have to do for now. Rather than directly translate `Load<T>` or `Store<T>` calls into a single instruction, we instead bottleneck them through internal-use-only subroutines. The design choice here is intended to ensure that for some large user-defined type like `MassiveMaterialStruct` we only emit code for loading all of its fields *once* in the output HLSL/GLSL rather than once per load site. While downstream compilers are likely to inline all of this logic anyway, we are doing what we can to avoid generating bloated code. Emit and C++/CUDA ================= Over in `slang-emit-c-like.cpp` we translate the new ops into output code in a straightforward way. A call like `obj.Load<Foo>(offset)` will eventually output as a call like `obj.Load<Foo>(offset)` in the generated code, by default. For the CPU C++ and CUDA C++ codegen paths, this is enough to make a workable implementation, and we add suitable templated `Load<T>` and `Store<T>` declarations to the prelude for those targets. Legalization ============ For targets like DXBC and GLSL there is no way to emit a load operation for an aggregate type like a `struct`, so we introduce a legalization pass on the IR that will translate our byte-address-buffer load/store ops into multiple ops that are legal for the target. 
Scalarization ------------- The big picture here is easy enough to understand: when we see a load of a `struct` type from a byte-address buffer, we translate that into loads for each of the fields, and then assemble a new `struct` value from the results. We do similar things for arrays, matrices, and optionally for vectors (depending on the target). Bit Casting ----------- After scalarization alone, we might have a load of a `float` or a `float3` that isn't legal for D3D11/DXBC, but that *would* be legal if we just loaded a `uint` or `uint3` and then bit-casted it. The legalization pass thus includes an option to allow for loads/stores to be translated to operate on a same-size unsigned integer type and then to bit-cast. To make this work actually usable, I had to add some more details to the implementation of the bit-cast op during HLSL emit and, more importantly, I had to customize the way that the byte-address buffer load/store ops get emitted to HLSL so that it prefers to use the existing operations like `Load`/`Load2`/`Load3`/`Load4` instead of the generic one, whenever operating on `uint`s or vectors of `uint`. Translation to Structured Buffers --------------------------------- Even after scalarizing all byte-address-buffer loads/stores, we still have a problem for GLSL targets, because a single global `buffer` declaration used to back a byte-address buffer can only have a single element type (currently always `uint`), so the granularity of loads/stores it can express is fixed at declaration time. If we want to load a `half` from a byte-address buffer, we need a dedicated `buffer` declaration in the output GLSL with an element type of `half`. The solution we employ here is to translate all byte-address buffer loads into "equivalent" structured-buffer ops when targeting GLSL. 
We add logic to find the underlying global shader parameter that was used for a load/store and introduce a new structured-buffer parameter with the desired element type (e.g., `half`) and then rewrite the load/store op to use that buffer instead. We copy layout information from the original buffer to the new one, so that in the output GLSL all the various `buffer`s will use a single `binding` and thus alias "for free." We don't want to create a new global buffer for every load/store, so we try to cache these "equivalent" structured buffers as best as we can. For the caching I ended up needing a pair to use as a key, so I tweaked the `KeyValuePair<K,V>` type in `core` so that it could actually work for that purpose. Because we are working at the level of IR instructions instead of stdlib functions, for this work I had to add new IR opcodes to represent structured-buffer load/store that (currently) only apply to GLSL. Layout ====== In order to translate a load/store of a `struct` type into per-field loads/stores we need a way to access layout information for the types of the fields. Previously layout information has been an AST-level concern that then gets passed down to the IR only when needed and only on global parameters, so layout information isn't always available in cases like this, at the actual load/store point. As an expedient move for now I've introduced a dedicated module that does IR-level layout and caches its results on the IR types themselves. This approach *only* supports the "natural" layout of a type, and thus is usable for structured buffers and byte-address buffers (or general pointer load/store on targets that support it), but is *not* usable for things like constant buffer layout. We've known for a while that the Right Way to do layout going forward is to have an IR-based layout system, and this could either be seen as a first step toward it, or else as a gross short-term hack. YMMV. 
Details ======= The GLSL "extension tracker" stuff around type support needed to be tweaked to recognize that types like `int16_t` aren't actually available by default. I switched it from using a "black list" of unavailable types at initialization time over to using a "white list" of types that are known to always be available without any extensions. Tests ===== There are two tests checked in here: one for the basic case of a `struct` type that has fields that should all be natively loadable, and one that stresses 16-bit types. Each test uses both load and store operations. Future Directions ================= Right now we translate vector load/store to GLSL as load/store of individual scalars, which means the assumed alignment is just that of the scalars (consistent with HLSL byte-address buffer rules). We could conceivably introduce some controls to allow outputting the vector load/store ops more directly to GLSL (e.g., declaring a `buffer` of `float4`s), which might enable more efficient load/store based on the alignment rules for `buffer`s. The IR layout work has a number of rough edges, but the most worrying is probably the assumption that all matrices are laid out in row-major order. Slang really needs an overhaul of its handling of matrices and matrix layout, so I don't know if we can do much better in the near term. At some point the IR-based layout system needs to be reconciled with our current AST-based layout, and we need to figure out how "natural" layout and the currently computed layouts co-exist (in particular, we need to make sure that the IR-based layout and the existing layout logic for structured buffers will agree). This probably needs to come along once we have moved the core layout logic to operate on IR types instead of AST types (a change we keep talking about). As part of this work I had to touch the implementation of bit-casting for HLSL, and it seems like that logic has some serious gaps. 
We really ought to consider a separate legalization pass that can turn IR bitcast instructions into the separate ops that a target actually supports so that we can implement `uint64_t`<->`double` and other conversions that are technically achievable, but which are hard to express in HLSL today. * fixup: missing files 27 April 2020, 22:17:54 UTC
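The "natural" layout mentioned in the commit above (each field placed at its own natural alignment, with no constant-buffer-style rounding) can be sketched roughly as follows. This is an illustrative sketch, not Slang's actual IR-layout module; the type and function names are hypothetical:

```cpp
#include <algorithm>
#include <cstddef>
#include <utility>
#include <vector>

// Hypothetical sketch of "natural" layout: each field is placed at the next
// offset aligned to the field's own alignment, and the overall struct size is
// padded up to the largest field alignment.
struct FieldLayout { std::size_t size; std::size_t alignment; std::size_t offset; };

struct NaturalLayout {
    std::vector<FieldLayout> fields;
    std::size_t size = 0;
    std::size_t alignment = 1;
};

NaturalLayout computeNaturalLayout(
    const std::vector<std::pair<std::size_t, std::size_t>>& fieldSizeAligns)
{
    NaturalLayout layout;
    for (auto [size, align] : fieldSizeAligns) {
        // Round the running offset up to the field's alignment.
        std::size_t offset = (layout.size + align - 1) / align * align;
        layout.fields.push_back({size, align, offset});
        layout.size = offset + size;
        layout.alignment = std::max(layout.alignment, align);
    }
    // Pad the overall size to the struct's alignment.
    layout.size = (layout.size + layout.alignment - 1) / layout.alignment * layout.alignment;
    return layout;
}
```

Under these rules a struct of `{ half; float; half; }` would place its fields at offsets 0, 4, and 8 with total size 12, which is the kind of packing structured and byte-address buffers expect but constant buffers do not.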
6f5c250 Small improvements around atomics (#1333) * Use the original value in the test. Run test on VK. * Added RWBuffer and Buffer types to C++ prelude. * Add vk to atomics.slang tests * Update target-compatibility around atomics. Explained why tests are disabled in atomics-buffer.slang. * tabs -> spaces. * Small docs improvement. 23 April 2020, 18:40:01 UTC
806ab08 Fix typo in TypeTextUtil::getCompileTargetName(SlangCompileTarget target) (#1332) 23 April 2020, 14:04:10 UTC
e45f8c1 Disable OptiX tests by default. (#1331) When running `slang-test`, the OptiX tests will be skipped by default for now, and must be explicitly enabled by adding `-category optix` on the command line. I will need to add a better discovery mechanism down the line, closer to how support for different graphics APIs is being tested, but for now this should be enough to unblock our CI builds. 22 April 2020, 18:09:09 UTC
58904b5 Diagnose attempts to call instance methods from static methods (#1330) Currently we fail to diagnose code that calls an instance method from a static method using implicit `this`, and instead crash during lowering of the AST to the IR. This change introduces a bit more detail to the "this parameter mode" that is computed during lookup, so that it differentiates three cases. The existing two cases of a mutable `this` and immutable `this` remain, but we add a third case where the "this parameter mode" only allows for a reference to the `This` type. When turning lookup "breadcrumb" information into actual expressions, we respect this setting to construct either a `This` or `this` expression. In order to actually diagnose the incorrect reference, I had to add code around an existing `TODO` comment that noted how we should diagnose attempts to refer to instance members through a type. Enabling that diagnostic revealed a missing case needed by generics (including those in the stdlib) - a type-constraint member is always referenced statically. Putting the diagnostic for a static reference to a non-static member in its new bottleneck location meant that some code higher up the call stack that handles explicit static member references had to be tweaked to not produce double error messages. This change includes a new diagnostic test to show that we now give an error message on code that makes this mistake, instead of crashing. 21 April 2020, 22:41:52 UTC
2c55d37 Fix for a generic definition, followed by a declaration with target intrinsic causing a crash (#1329) * * Make a 'definition' if a function has a body or a target intrinsic defined * Added test for this situation. * Fix tab. * Fix single-target-intrisic.slang expected output. Co-authored-by: Tim Foley <tfoleyNV@users.noreply.github.com> 21 April 2020, 21:16:22 UTC
77d5971 Small Improvements around Wave Intrinsics (#1328) * Fix issues in wave-mask/wave.slang tests. WaveGetActiveMask -> WaveGetConvergedMask. Update target-compatibility.md * First pass at wave-intrinsics.md documentation. Write up around WaveMaskSharedSync. * Added more of the Wave intrinsics as WaveMask intrinsics. Improvements to documentation around wave-intrinsics. * Add the Wave intrinsics for SM6.5 for WaveMask Expand WaveMask intrinsics Improve WaveMask documentation * Added WaveMaskIsFirstLane. * Added WaveGetConvergedMask for glsl and hlsl. Added wave-get-converged-mask.slang test. * WaveGetActiveMask/Multi and WaveGetConvergedMask/Multi * Improve Wave intrinsics docs. Added WaveGetActiveMulti, WaveGetConvergedMulti, WaveGetActiveMask (for vk/hlsl). * Enable GLSL WaveMultiPrefixBitAnd. * Re-add definitions of f16tof32 and f32tof16 from #1326 * Remove multiple definition of f32tof16 Disable optix call to Ray trace test, if OPTIX not available. * Improve wave intrinsics documentation - remove the __generic as part of definitions, small improvements. * Change comment to try and trigger build. 21 April 2020, 18:09:36 UTC
7de5f63 WaveMask remaining intrinsics and tests (#1327) * Fix issues in wave-mask/wave.slang tests. WaveGetActiveMask -> WaveGetConvergedMask. Update target-compatibility.md * First pass at wave-intrinsics.md documentation. Write up around WaveMaskSharedSync. * Added more of the Wave intrinsics as WaveMask intrinsics. Improvements to documentation around wave-intrinsics. * Add the Wave intrinsics for SM6.5 for WaveMask Expand WaveMask intrinsics Improve WaveMask documentation * Added WaveMaskIsFirstLane. Co-authored-by: Tim Foley <tfoleyNV@users.noreply.github.com> 21 April 2020, 13:32:21 UTC
6d4fa92 Fix stdlib definitions of half<->float conversion (#1326) These ended up being additional cases where we needed to use an explicit loop over components in the stdlib in order to produce valid GLSL output, but the existing declarations weren't doing it. I added a very minimal cross-compilation test just to confirm that we generate valid SPIR-V for code using `f16tof32()`. 20 April 2020, 23:24:49 UTC
c4441d8 Feature/wave mask review (#1325) * Fix issues in wave-mask/wave.slang tests. WaveGetActiveMask -> WaveGetConvergedMask. Update target-compatibility.md * First pass at wave-intrinsics.md documentation. Write up around WaveMaskSharedSync. * Added more of the Wave intrinsics as WaveMask intrinsics. Improvements to documentation around wave-intrinsics. 20 April 2020, 17:03:18 UTC
acb1c39 Add support for global shader parameters to OptiX path (#1323) There are two main pieces here. First, we specialize the code generation for CUDA kernels to account for the way that shader parameters are passed differently for ordinary compute kernels vs. ray-tracing kernels. Both global and entry-point shader parameters in Slang are translated to kernel function parameters for CUDA compute kernels, while for OptiX ray tracing kernels we need to use a global `__constant__` variable for the global parameters, and the SBT data (accessed via an OptiX API function) for entry-point shader parameters. This choice bakes in a few pieces of policy when it comes to how Slang ray-tracing shaders translate to OptiX: * It fixes the name used for the global `__constant__` variable for global shader parameters to be `SLANG_globalParams`. Since that name has to be specified when creating a pipeline with the OptiX API, the choice of name effectively becomes an ABI contract for Slang's code generation. * It fixes the choice that global parameters in Slang map to per-launch parameters in OptiX, and entry-point parameters in Slang map to SBT-backed parameters in OptiX. This is a reasonable policy, and it is also one that we are likely to need to codify for Vulkan as well, but it is always a bit unfortunate to bake policy choices like this into the compiler (especially when shaders compiled for D3D can often decouple the form of their HLSL/Slang code from how things are bound in the API). The second piece is a lot of refactoring of the logic in `render-test/cuda/cuda-compute-util.cpp`, so that the logic for setting up (and reading back) the buffers of parameter data can be shared between the compute and ray-tracing paths. The result may not be a true global optimum for how the code is organized, but it at least serves the goal of not duplicating the parameter-binding logic between compute and ray-tracing. 17 April 2020, 15:53:41 UTC
12b30af Workaround for matching of dxc diagnostics (#1324) * Specialized handling for comparison of dxc output that ignores line/column numbers. * Simplify areAllEqualWithSplit. 16 April 2020, 22:25:53 UTC
b5a5317 Added wave.slang.expected.txt (#1322) 16 April 2020, 13:57:38 UTC
d5d3222 First support for 'WaveMask' intrinsics (#1321) * WIP tests to confirm divergence on CUDA. * Added wave.slang test that uses masks. Made all CUDA intrinsic impls take a mask explicitly. Added initial WaveMaskXXX intrinsics. * Added WaveMaskSharedSync. * Improvements around WaveMaskSharedSync/WaveMaskSync * Remove tabs. 15 April 2020, 18:14:58 UTC
fbac017 CUDA global scope initialization of arrays without function calls. (#1320) * Fix CUDA output of a static const array if values are all literals. * Fix bug in Convert definition. * Output makeArray such that it is deconstructed on CUDA to fill in based on what the target type is. Tries to expand such that there are no function calls so that static const global scope definitions work. * Fix unbounded-array-of-array-syntax.slang to work correctly on CUDA. * Remove tabs. * Check works with static const vector/matrix. * Fix typo in type comparison. * Shorten _areEquivalent test. * Rename _emitInitializerList. Some small comment fixes. Co-authored-by: Tim Foley <tfoleyNV@users.noreply.github.com> 14 April 2020, 21:00:11 UTC
cbdee1b Fix front-end handling of generic static methods (#1319) * Fix front-end handling of generic static methods The front-end logic that was testing if a member was usable as a static member neglected to unwrap any generic-ness and look at the declaration inside (the parser currently puts all modifiers on the inner declaration instead of the outer generic). The test case included here is not a full compute test so that it only runs the front-end checking logic (where we had the bug). * fixup: tabs->spaces 14 April 2020, 17:55:10 UTC
79f6a01 Change rules for layout of buffers/blocks containing only interface types (#1318) TL;DR: This is a tweak to the rules for layout that only affects a corner case for people who actually use `interface`-type shader parameters (which for now is just our own test cases). The tweaked rules seem like they make it easier to write the application code for interfacing with Slang, but even if we change our minds later the risk here should be low (again: nobody is using this stuff right now). Slang already has a rule that a constant buffer that contains no ordinary/uniform data doesn't actually allocate a constant buffer `binding`/`register`: struct A { float4 x; Texture2D y; } // has uniform/ordinary data struct B { Texture2D u; SamplerState v; } // has none ConstantBuffer<A> gA; // gets a constant buffer register/binding ConstantBuffer<B> gB; // does not There is similar logic for `ParameterBlock`, where the feature makes more sense. A user would be somewhat surprised if they declared a parameter block with a texture and a sampler in it, but then the generated code reserved Vulkan `binding=0` for a constant buffer they never asked for. The behavior in the case of a plain `ConstantBuffer` is chosen to be consistent with the parameter block case. (Aside: all of this is a non-issue for targets with direct support for pointers, like CUDA and CPU. On those platforms a constant buffer or parameter block always translates to a pointer to the contained data.) Now, suppose the user declares a constant buffer with an interface type in it: interface IFoo { ... } ConstantBuffer<IFoo> gBuffer; When the layout logic sees the declaration of `gBuffer` it doesn't yet know what type will be plugged in as `IFoo` there. Will it contain uniform/ordinary data, such that a constant buffer is needed? 
The existing logic in the type layout step implemented a complicated rule that amounted to: * A `ConstantBuffer` or `cbuffer` that only contains `interface`/existential-type data will *not* be allocated a constant buffer `register`/`binding` during the initial layout process (on unspecialized code). That means that any resources declared after it will take the next consecutive `register`/`binding` without leaving any "gap" for the `ConstantBuffer` variable. * After specialization (e.g., when we know that `Thing` should be plugged in for `IFoo`), if we discover that there is uniform/ordinary data in `Thing` then we will allocate a constant buffer `register`/`binding` for the `ConstantBuffer`, but that register/binding will necessarily come *after* any `register`s/`binding`s that were allocated to parameters during the first pass. * Parameter blocks were intended to work the same way when it comes to whether or not they allocate a default `space`/`set`, but that logic appears to not have worked as intended. These rules make some logical sense: a `ConstantBuffer` declaration only pays for what the element type actually needs, and if that changes due to specialization then the new resource allocation comes after the unspecialized resources (so that the locations of unspecialized parameters are stable across specializations). The problem is that in practice it is almost impossible to write client application code that uses the Slang reflection API and makes reasonable choices in the presence of these rules. A general-purpose `ShaderObject` abstraction in application code ends up having to deal with multiple possible states that an object could be in: 1. An object where the element type `E` contains no uniform/ordinary data, and no interface/existential fields, so a constant buffer doesn't need to be allocated or bound. 2. An object where the element type `E` contains no uniform/ordinary data, but has interface/existential fields, with two sub-cases: a. 
When no values bound to interface/existential fields use uniform/ordinary data, then the parent object must not bind a buffer b. When the type of value bound to an interface/existential field uses uniform/ordinary data, then the parent object needs to have a buffer allocated, and bind it. 3. When the element type `E` contains uniform/ordinary data, then a buffer should be allocated and bound (although its size/contents may change as interface/existential fields get re-bound) Needing to deal with a possible shift between cases (2a) and (2b) based on what gets bound at runtime is a mess, and it is important to note that even though both (2a) and (3) require a buffer to be bound, the rules about *where* the buffer gets bound aren't consistent (so that the application needs to understand the distinction between "primary" and "pending" data in a type layout). This change introduces a different rule, which seems to be more complicated to explain, but actually seems to simplify things for the application: * A `ConstantBuffer` or `cbuffer` that only contains `interface`/existential-type data always has a constant buffer `register`/`binding` allocated for it "just in case." * If after specialization there is any uniform/ordinary data, then that will use the buffer `register`/`binding` that was already allocated (that's easy enough). * If after specialization there *isn't* any uniform/ordinary data, then the generated HLSL/GLSL shader code won't declare a buffer, but the `register`/`binding` is still claimed. * A `ParameterBlock` behaves equivalently, so that if it contains any `interface`/existential fields, then it will always allocate a `space`/`set` "just in case" The effect of these rules is to streamline the cases that an application needs to deal with down to two: 1. If the element type `E` of a shader object contains no uniform/ordinary or interface/existential fields, then no buffer needs to be allocated or bound 2. 
If the element type `E` contains *any* uniform/ordinary or interface/existential fields, then it is always safe to allocate and bind a buffer (even in the cases where it might be ignored). Furthermore, the reflection data for the constant buffer `register`/`binding` becomes consistent in case (2), so that the application can always expect to find it in the same way. 14 April 2020, 15:48:54 UTC
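The tweaked rule in the commit above can be illustrated with a small sketch (not the actual Slang layout code; the names here are hypothetical): a buffer whose element type contains any uniform/ordinary *or* interface/existential data claims a constant-buffer register "just in case," so the register assignment no longer shifts when specialization later introduces ordinary data.

```cpp
#include <vector>

// Hypothetical summary of a ConstantBuffer/ParameterBlock element type.
struct ElementTypeInfo {
    bool hasUniformData;      // contains ordinary/uniform fields
    bool hasExistentialData;  // contains interface/existential fields
};

// New rule sketch: claim a constant-buffer register whenever the element type
// has uniform data OR interface/existential fields ("just in case"); only a
// purely resource-typed element (textures/samplers only) claims nothing.
// Returns one register index per buffer, or -1 for "no register needed".
std::vector<int> allocateRegisters(const std::vector<ElementTypeInfo>& buffers)
{
    std::vector<int> registers;
    int next = 0;
    for (const auto& b : buffers) {
        if (b.hasUniformData || b.hasExistentialData)
            registers.push_back(next++);
        else
            registers.push_back(-1);
    }
    return registers;
}
```

The point of claiming the register up front is that the same assignment holds whether or not the type eventually plugged in for the interface carries uniform data, which collapses the application-side cases from three to two.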
b2c9fcc Remove Not constant folding - because it doesn't take into account the type change. (#1317) Co-authored-by: Tim Foley <tfoleyNV@users.noreply.github.com> 13 April 2020, 16:34:20 UTC
4a5c606 Fix CUDA build of render-test (#1316) The CUDA build of the render-test tool had been broken in a fixup change to #1307 (which was ostensibly adding features for the CUDA path). The fix is a simple one-liner. 10 April 2020, 21:15:08 UTC
2d16fcd Fix crashing bug when using overloaded name as generic arg (#1315) If somebody defines two `struct` types with the same name: ```hlsl struct A {} // ... struct A {} ``` and then tries to use that name when specializing a generic function: ```hlsl void doThing<T>() { ... } // ... doThing<A>(); ``` then the Slang front-end currently crashes, which leads to it not diagnosing the original problem (the conflicting declarations of `A`). This change fixes up the checking of generic arguments so that it properly fills in dummy "error" arguments in place of missing or incorrect arguments, and thus guarantees that the generic substitution it creates will at least be usable for the next steps of checking (rather than leaving null pointers in the substitution). This change also fixes up the error message for the case where a generic application like `F<A>` is formed where `F` is not a generic. We already had a more refined diagnostic defined for that case, but for some reason the site in the code where we ought to use it was still issuing an internal compiler error around an unimplemented feature. This change includes a diagnostic test case to cover both of the above fixes. 10 April 2020, 16:20:36 UTC
a01c09c Literal folding on other operators (#1314) * Fold prefix operators if they prefix an int literal. * Make test case a bit more convoluted. * Remove ++ and -- as not appropriate for folding of literals. * Set output buffer name. 09 April 2020, 19:43:09 UTC
78acd32 Replace /* unhandled */ in source emit with a real error (#1313) For a long time the various source-to-source back-ends have been emitting the text `/* unhandled */` when they encounter an IR instruction opcode that didn't have any emit logic implemented. This choice had two apparent benefits: * In most common cases, emitting `/* unhandled */` in place of an expression would lead to downstream compilation failure, so most catastrophic cases seemed to work as desired (e.g., if we emit `int i = /* unhandled */;` we get a downstream parse error and we know something is wrong). * In a few cases, if a dead-but-harmless instruction slips through (e.g., a type formed in the body of a specialized generic function), we would emit `/* unhandled */;`, which is a valid empty statement. It is already clear from the above write-up that the benefits of the policy aren't really that compelling, and where it has recently turned out to be a big liability is that there are actually plenty of cases where emitting `/* unhandled */` instead of a sub-expression won't cause downstream compilation failure, and will instead silently compute incorrect results: * Emitting `/* unhandled */ + b` instead of `a + b` * Emitting `/* unhandled */(a)` instead of `f(a)`, or even `/* unhandled */(a, b)` instead of `f(a,b)` * Emitting `f(/*unhandled*/)` instead of `f(a)` in cases where `f` is a built-in with both zero-argument and one-argument overloads The right fix here is simple: where we would have emitted `/* unhandled */` to the output we should instead diagnose an internal compiler error, thus leading to compilation failure. This change appears to pass all our current tests, but it is possible that there are going to be complicated cases in user code that were relying on the previous lax behavior. I know from experience that we sometimes see `/* unhandled */` in output for generics, and while we have eliminated many of those cases I don't have confidence we've dealt with them all. 
When this change lands we should make sure that the first release that incorporates it is marked as potentially breaking for clients, and we should make sure to test the changes in the context of the client codebases before those codebases integrate the new release. 08 April 2020, 22:30:12 UTC
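The policy change described above amounts to replacing a silent textual fallback with a hard failure in the emit path. Schematically (hypothetical names, not the actual Slang emit code):

```cpp
#include <stdexcept>
#include <string>

enum class Opcode { Add, Call, SomeUnimplementedOp };

// Before this change, opcodes with no emit logic fell through to emitting the
// literal text "/* unhandled */", which could silently produce wrong code
// (e.g. "/* unhandled */ + b" instead of "a + b"). After the change, the
// fallback raises an internal compiler error so compilation fails loudly.
std::string emitExpr(Opcode op)
{
    switch (op) {
    case Opcode::Add:  return "a + b";
    case Opcode::Call: return "f(a)";
    default:
        throw std::runtime_error("internal compiler error: unhandled opcode in emit");
    }
}
```

The trade-off is exactly the one the commit message calls out: code that happened to work under the lax behavior now fails to compile, which is why the release carrying this change should be flagged as potentially breaking.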
6274e17 Initial work to support OptiX output for ray tracing shaders (#1307) * Initial work to support OptiX output for ray tracing shaders This change represents in-progress work toward allowing Slang/HLSL ray-tracing shaders to be cross-compiled for execution on top of OptiX. The work as it exists here is incomplete, but the changes are incremental and should not disturb existing supported use cases. One major unresolved issue in this work is that the OptiX SDK does not appear to set an environment variable for its install location. Changes include: * Modified the premake script to support new options for adding OptiX to the build. Right now the default path to the OptiX SDK is hard-coded because the installer doesn't seem to set an environment variable. We will want to update that to have a reasonable default path for both Windows and Unix-y platforms in a later change. * I ran the premake generator on the project since I added new options, which resulted in a bunch of diffs to the Visual Studio project files that are unrelated to this change. Many of the diffs come from previous edits that added files using only the Visual Studio IDE rather than by re-running premake, so it is arguably better to have the checked-in project files more accurately reflect the generated files used for CI builds. * The "downstream compiler" abstraction was extended to have an explicit notion of the kind of pipeline that shaders are being compiled for (e.g., compute vs. rasterization vs. ray tracing). This option is used to tell the NVRTC case when it needs to include the OptiX SDK headers in the search path for shader compilation (and also when it should add a `#define` to make the prelude pull in OptiX). This code again uses a hard-coded default path for the OptiX SDK; we will need to modify that to have a better discovery approach and also to support an API or command-line override. 
* One note for the future is that instead of passing down a "pipeline type" we could instead pass down the list/set of stages for the kernels being compiled, and the OptiX support could be enabled whenever there is *any* ray tracing entry point present in a module. That approach would allow mixing RT and compute kernels during downstream compilation. We will need to revisit these choices when we start supporting code generation for multiple entry points at a time. * The CUDA emit logic is currently mostly unchanged. The biggest difference is that when emitting a ray-tracing entry point we prefix the name of the generated `__global__` function with a marker for its stage type, as required by the OptiX runtime (e.g., a `__raygen__` prefix is required on all ray-generation entry points). * The `Renderer` abstraction had a bare minimum of changes made to be able to understand that ray-tracing pipelines exist, and also that some APIs will require the name of each entry point along with its binary data in order to create a program. * The `ShaderCompileRequest` type was updated so that only a single "source" is supported (rather than distinct source for each entry point), and also the entry points have been turned into a single list where each entry identifies its stage instead of a fixed list of fields for the supported entry-point types. * The CUDA compute path had a lot of code added to support execution for the new ray-tracing pipeline type. The logic is mostly derived from the `optixHello` example in the OptiX SDK, and at present only supports running a single ray-generation shader with no parameters. The code here is not intended to be ready for use, but represents a significant amount of learning-by-doing. * The `slang-support.cpp` file in `render-test` was updated so that instead of having separate compilation logic for compute vs. 
rasterization shaders (which would mean adding a third path for ray tracing), there is now a single flow to the code that works for all pipeline types and any kind of entry points. * Implicit in the new code is dropping support for the way GLSL was being compiled for pass-through render tests, which means pass-through GLSL render tests will no longer work. It seems like we didn't have any of those to begin with, though, so it is no great loss. * Also implicit are some new invariants about how shaders without known/default entry points need to be handled. For example, the ray tracing case intentionally does not fill in entry points on the `ShaderCompileRequest` and instead fully relies on the Slang compiler's support for discovering and enumerating entry points via reflection. As a consequence of those edits the `-no-default-entry-point` flag on `render-test` is probably not working, but it seems like we don't have any test cases that use that flag anyway. Given the seemingly breaking changes in those last two bullets, I was surprised to find that all our current tests seem to pass with this change. If there are things that I'm missing, I hope they will come up in review. * fixup: issues from review and CI * Some issues noted during the review process (e.g., a missing `break`) * Fix logic for render tests with `-no-default-entry-point`. I had somehow missed that we had tests reliant on that flag. This required a bit of refactoring to pass down the relevant flag (luckily the function in question was already being passed most of what was in `Options`, so that just passing that in directly actually simplifies the call sites a bit). * There was a missing line of code to actually add the default compute entry points to the compile request. I think this was a problem that slipped in as part of some pre-PR refactoring/cleanup changes that I failed to re-test. 08 April 2020, 20:57:24 UTC
f38c082 Fix expected output for dxc-error test. (#1312) I'm not sure how this slipped in, but I know that I missed this when testing all my recent PRs because I end up having a bunch of random not-ready-to-commit repro tests in my source tree which means I always get at least *some* test failures and have to scan them for the ones that are real. Somehow I have had a blind spot for this one. 08 April 2020, 19:37:16 UTC
a53f817 Fixes for IR generics (#1311) * Fixes for IR generics There are a few different fixes going on here (and a single test that covers all of them). 1. Fix optionality of trailing semicolon for `struct`s ====================================================== We have logic in the parser that tries to make a trailing `;` on a `struct` declaration optional. That logic is a bit subtle and could potentially break non-idiomatic HLSL input, so we try to only trigger it for files written in Slang (and not HLSL). For command-line `slangc` this is based on the file extension (`.slang` vs. `.hlsl`), and for the API it is based on the user-specified language. The missing piece here was that the path for handling `import`ed code was *not* setting the source language of imported files at all, and so those files were not getting opted into the Slang-specific behavior. As a result, `import`ed code couldn't leave off the semicolon. 2. Fix generic code involving empty `interface`s ================================================ We have logic that tries to only specialize "definitions," but the definition-vs-declaration distinction at the IR level has historically been slippery. One corner case was that a witness table for an interface with no methods would always be considered a declaration, because it was empty. The notion of what is/isn't a definition has been made more nuanced so that it amounts to two main points: * If something is decorated as `[import(...)]`, it is not a definition * If something is a generic/func (a declaration that should have a body), and it has no body, it is a declaration Otherwise we consider anything a definition, which means that non-`[import(...)]` witness tables are now definitions whether or not they have anything in them. 3. Fix IR lowering for members of generic types =============================================== The IR lowering logic was trying to be a little careful in how it recursively emitted "all" `Decl`s to IR code. 
In particular, we don't want to recurse into things like function parameters, local variables, etc. since those can never be directly referenced by external code (they don't have linkage). The existing logic was basically emitting everything at global scope, and then only recursing into (non-generic) type declarations. This created a problem where a method declared inside a generic `struct` would not be emitted to the IR for its own module at all *unless* it happened to be called by other code in the same module. The fix here was to also recurse into the inner declaration of `GenericDecl`s. I also made the code recurse into any `AggTypeDeclBase` instead of just `AggTypeDecl`s, which means that members in `extension` declarations should now properly be emitted to the IR. Conclusion ========== These fixes should clear up some (but not all) cases where we might emit an `/* unhandled */` into output HLSL/GLSL. A future change will need to make that path a hard error and then clean up the remaining cases. * fixup: tabs->spaces 08 April 2020, 16:41:59 UTC
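The refined definition-vs-declaration rule from fix (2) above boils down to a simple predicate. This is a sketch with hypothetical names, not Slang's actual IR classes:

```cpp
// Hypothetical summary of an IR symbol for the purposes of this rule.
struct IRSymbolInfo {
    bool hasImportDecoration; // decorated [import(...)]
    bool expectsBody;         // a generic or function, i.e. should have a body
    bool hasBody;
};

// * Anything decorated [import(...)] is NOT a definition.
// * A generic/function without a body is only a declaration.
// * Everything else counts as a definition -- including an empty witness
//   table, which the old logic wrongly treated as a declaration.
bool isDefinition(const IRSymbolInfo& sym)
{
    if (sym.hasImportDecoration)
        return false;
    if (sym.expectsBody && !sym.hasBody)
        return false;
    return true;
}
```

Under this predicate a non-imported witness table for a method-less interface is a definition even though it has no members, which is exactly the corner case the commit fixes.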
a9214f3 Remove static struct members from layout and reflection (#1310) * * Added MemberFilterStyle - controls action of FilteredMemberList and FilteredMemberRefList * Split out template implementations * Use more standard method names for FilteredMemberRefList * Added reflect-static.slang test * Added isNotEmpty/isEmpty to filtered lists * Added ability to index into filtered list (so not require building of array) * Default MemberFilterStyle to All. * Remove explicit MemberFilterStyle::All 08 April 2020, 14:56:00 UTC
ba232e4 Fix a bug around generic functions using DXR RayQuery (#1309) The DXR `RayQuery` type is our first generic type defined in the stdlib that is marked as a target intrinsic but does *not* map to a custom `IRType` case. Instead, a reference to `RayQuery<T>` is encoded in the IR as an ordinary `specialize` instruction. Unfortunately, this doesn't play nice with the current specialization logic, which considered a `specialize` instruction to not represent a "fully specialized" instruction, which then inhibits specialization of generics/functions that use such an instruction. The fix here was to add more nuanced logic to the check for "fully specialized"-ness, so that it considers a `specialize` to already be fully specialized when the generic it applies to represents a target intrinsic. I also added a note that the whole notion of "fully specialized"-ness that we use isn't really the right thing for the specialization pass, and how we really ought to use a notion of what is or is not a "value." This change doesn't include a test because the only way to trigger the issue is using the DXR 1.1 `RayQuery` type, and that type is not supported in current release versions of DXC. 07 April 2020, 20:56:01 UTC
c5db04b Fix WaveGetLaneIndex for glsl (#1306) * Fix typo in stdlib around WaveGetLaneIndex and WaveGetLaneCount * Reorder emit so #extensions come before layout * Added wave-get-lane-index.slang test. 02 April 2020, 23:52:12 UTC
00e1dba Optimize creation of memberDictionary (#1305) * Improve performance of building members dictionary by adding when needed. * Fix unbounded-array-of-array-syntax.slang, that DISABLE_TEST now uses up an index. Use IGNORE_TEST. * Improve variable name. Small improvements. Co-authored-by: Tim Foley <tfoleyNV@users.noreply.github.com> 02 April 2020, 21:06:16 UTC
487d4a4 Add basic support for namespaces (#1304) This change adds logic for parsing `namespace` declarations, referencing them, and looking up their members. * The parser changes are a bit subtle, because that is where we deal with the issue of "re-opening" a namespace. We kludge things a bit by re-using an existing `NamespaceDecl` in the same parent if one is available, and thereby ensure that all the members in the same namespace can see one another. * In order to allow namespaces to be referenced by name they need to have a type so that a `DeclRefExpr` to them can be formed. For this purpose we introduce `NamespaceType` which is the (singleton) type of a reference to a given namespace. * The new `NamespaceType` case is detected in the `MemberExpr` checking logic and routed to the same logic that `StaticMemberExpr` uses, and the static lookup logic was extended with support for looking up in a namespace (a thin wrapper around one of the existing worker routines in `slang-lookup.cpp`). * I made `NamespaceDecl` have a shared base class with `ModuleDecl` in the hopes that this would let us allow references to modules by name in the future. That hasn't been tested as part of this change. * I cleaned up a bunch of logic around `ModuleDecl` holding a `Scope` pointer that was being used for some of the more ad hoc lookup routines in the public API. Those have been switched over to something that is a bit more sensible given the language rules and that doesn't rely on keeping state sitting around on the `ModuleDecl`. * I added a test case to make sure the new functionality works, which includes re-opening a namespace, and it also tests both `.` and `::` operations for lookup in a namespace. * The main missing feature here is the ability to do something like C++ `using`. It would probably be cleanest if we used `import` for this, since we already have that syntax (and having both `import` and `using` seems like a recipe for confusion). 
Most of the infrastructure is present to support `import`ing one namespace into another (in a way that wouldn't automatically pollute the namespace for clients), but some careful thought needs to be put into how import of namespaces vs. modules should work. 02 April 2020, 15:52:42 UTC
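The "re-opening" behavior described in this commit can be sketched as a small model: when the parser sees `namespace N { ... }`, it first looks for an existing namespace declaration of the same name in the same parent and re-uses it. This is purely an illustrative Python sketch; the name `begin_namespace` and the dict-based declarations are hypothetical, not the compiler's actual API.

```python
def begin_namespace(parent, name):
    """Model of the parser kludge: on 'namespace N { ... }', re-use an
    existing namespace decl with the same name in the same parent
    ('re-opening'), so members declared in separate blocks can see
    one another."""
    for member in parent["members"]:
        if member.get("kind") == "namespace" and member["name"] == name:
            return member  # re-open the existing namespace decl
    decl = {"kind": "namespace", "name": name, "members": []}
    parent["members"].append(decl)
    return decl

# Two 'namespace A { ... }' blocks in one module share a single decl:
module = {"kind": "module", "members": []}
a1 = begin_namespace(module, "A")
a1["members"].append({"kind": "func", "name": "f"})
a2 = begin_namespace(module, "A")
a2["members"].append({"kind": "func", "name": "g"})
```

With this re-use in place, `f` and `g` end up as members of one `NamespaceDecl`, which is what makes lookup across re-opened blocks work.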
5e73e98 Improve diagnostic parsing from GCC. (#1303) Enable x86_64 CPU tests on TC. 31 March 2020, 18:06:34 UTC
ea76905 CUDA version handling (#1301) * render feature for CUDA compute model. * Use SemanticVersion type. * Enable CUDA wave tests that require CUDA SM 7.0. Provide mechanism for DownstreamCompiler to specify version numbers. * Enabled wave-equality.slang * Make CUDA SM version major version not just a single digit. * Fix assert. * DownstreamCompiler::Version -> CapabilityVersion 30 March 2020, 23:23:09 UTC
ad5b60c Add a test for static const declarations in structure types (#1300) The functionality already appears to work, and this test is just to make sure we don't regress on it. The most interesting thing here is that I'm using this change to pitch a new organization for tests around what part of the language they cover (rather than the kind of test they are), since the `tests/compute/` directory is getting overly full and is hard to navigate. We can consider moving individual tests into more of a hierarchy at some later point. 30 March 2020, 19:47:43 UTC
6f43b26 WaveBroadcastAt/WaveShuffle (#1299) * Support for WaveReadLaneAt with dynamic (but uniform across Wave) on Vk by enabling VK1.4. Fixed wave-lane-at.slang test to test with laneId that is uniform across the Wave. * Added WaveShuffle intrinsic. Test for WaveShuffle intrinsic. * Added some documentation on WaveShuffle * Fix that the version required for subgroupBroadcast to be non-constexpr is actually 1.5 * Added WaveBroadcastLaneAt Documented WaveShuffle/BroadcastLaneAt/ReadLaneAt * Update docs around WaveBroadcast/Read/Shuffle. Use `_waveShuffle` as name in CUDA prelude to better describe its more flexible behavior. 27 March 2020, 22:35:06 UTC
e267ce2 Adds WaveShuffle intrinsic (#1298) * Support for WaveReadLaneAt with dynamic (but uniform across Wave) on Vk by enabling VK1.4. Fixed wave-lane-at.slang test to test with laneId that is uniform across the Wave. * Added WaveShuffle intrinsic. Test for WaveShuffle intrinsic. * Added some documentation on WaveShuffle * Fix that version required for subgroupBroadcast to be non constexpr is actually 1.5 27 March 2020, 20:16:27 UTC
5b0b843 Support for WaveReadLaneAt with dynamic (but uniform across Wave) on Vk by enabling VK1.4. (#1297) Fixed wave-lane-at.slang test to test with laneId that is uniform across the Wave. 27 March 2020, 18:49:41 UTC
cc753f3 Disable CPU tests on TC. (#1295) Co-authored-by: Tim Foley <tfoleyNV@users.noreply.github.com> 26 March 2020, 13:35:35 UTC
423b558 Fix a bug in exiting SSA form for loops (#1293) The Slang compiler was bit by a known issue when translating from SSA form back to straight-line code. Given code like the following: int x = 0; int y = 1; while(...) { ... int t = x; x = y; y = t; } ... The SSA construction pass will eliminate the temporary `t` and yield code something like: br(b, 0, 1); block b(param x : Int, param y : Int): ... br(b, y, x); The loop-dependent variables have become parameters of the loop block, and the branches to the top of the loop pass the appropriate values for the next iteration (e.g., the jump that starts the loop sends in `0` and `1`). The problem comes up when translating the back-edge that continues the loop out of SSA form. Our generated code will re-introduce temporaries for `x` and `y`: int x; int y; // jump into loop becomes: x = 0; y = 1; for(;;) { ... // back-edge becomes x = y; y = x; continue; } The problem there is that we've naively translated a branch like `br(b, <a>, <b>)` into `x = <a>; y = <b>;` but that doesn't work correctly in the case where `<b>` is `x`, because we will have already clobbered the value of `x` with `<a>`. The simplest fix is to introduce a temporary (just like the input code had), and generate: // back-edge becomes int t = x; x = y; y = t; This change modifies the `emitPhiVarAssignments()` function so that it detects bad cases like the above and emits temporaries to work around the problem. A new test case is included that produced incorrect output before the change, and now produces the expected results. A secondary change is folded in here that tries to guard against a more subtle version of the problem: for(...) { ... int x1 = x + 1; int y1 = y + 1; x = y1; y = x1; } In this more complicated case, each of `x` and `y` is being assigned a value derived from the other, but neither is being set using a block parameter directly, so the changes to `emitPhiVarAssignments()` do not apply. 
The problem in this case would be if the `shouldFoldInstIntoUseSites()` logic decided to fold the computation of `x1` or `y1` into the branch instruction, resulting in: x = y + 1; y = x + 1; which would again violate the semantics of the original code, because now there is an assignment to `x` before the computation of `x + 1`. Right now it seems impossible to force this case to arise in practice, due to implementation details in how we generate IR code for loops. In particular, the block that computes the `x+1` and `y+1` values is currently always distinct from the block that branches back to the top of the loop, and we do not allow "folding" of sub-expressions from different blocks. It is possible, however, that future changes to the compiler could change the form of the IR we generate and make it possible for this problem to arise. The right fix for this issue would be to say that we should introduce a temporary for any branch argument that "involves" a block parameter (whether directly using it or using it as a sub-expression). Unfortunately, the ad hoc approach we use for folding sub-expressions today means that testing if an operand "involves" something would be both expensive and unwieldy. A more expedient fix is to disallow *all* folding of sub-expressions into unconditional branch instructions (the ones that can pass arguments to the target block), which is what I ended up implementing in this change. Making that defensive change alters the GLSL we output for some of our cross-compilation tests, in a way that required me to update the baseline/gold GLSL. A better long-term fix for this whole space of issues would be to have the "de-SSA" operation be something we do explicitly on the IR. Such an IR pass would still need to be careful about the first issue addressed in this change, but the second one should (in principle) be a non-issue given that our emit/folding logic already handles code with explicit mutable local variables correctly. 
25 March 2020, 23:13:26 UTC
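The core of the fix above is breaking the copy cycle with a temporary when an earlier phi-copy clobbers a value a later copy still needs. A minimal Python sketch of the idea (the function name and temporary naming are illustrative, not the compiler's actual code):

```python
def emit_phi_assignments(params, args):
    """Emit copies 'param = arg' for a back-edge br(b, *args).

    If an argument reads a parameter that an earlier copy has already
    overwritten, route it through a temporary captured before any copy
    runs (mirroring the fix described for emitPhiVarAssignments())."""
    stmts = []
    clobbered = set()
    temps = {}
    for param, arg in zip(params, args):
        if arg in clobbered:
            # 'arg' was already overwritten; read its pre-copy value
            # via a temporary instead.
            if arg not in temps:
                temps[arg] = "_t_" + arg
            arg = temps[arg]
        stmts.append((param, arg))
        clobbered.add(param)
    # Temporaries capture original values before any copy executes.
    prelude = [(temp, src) for src, temp in temps.items()]
    return prelude + stmts

# The swap case from the commit message: br(b, y, x)
copies = emit_phi_assignments(["x", "y"], ["y", "x"])
env = {"x": 0, "y": 1}
for dst, src in copies:
    env[dst] = env[src]
```

Running the emitted copies against `{x: 0, y: 1}` performs the intended swap, rather than setting both variables to the same value as the naive translation did.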
b3e6f1b Unroll target improvements (#1291) * Add unroll support for CUDA, and preliminary for C++. Document [unroll] support. * Fix loop-unroll to run on CPU, and test on CPU and elsewhere. Fix bug in emitting loop unroll condition. * Improved comment. * Added support for vk/glsl loop unrolling. 25 March 2020, 20:45:56 UTC
28a0ca9 Better diagnostics on failure on CUDA. (#1288) * Better diagnostics on failure on CUDA. * Catch exceptions in render-test * * Added ability to disable reporting on CUDA failures * Stopped using exception for reporting (just write to StdWriter::out()) * Removed CUDAResult type * Don't set arch type on nvrtc to see if fixes CI issues. * Try compute_30 on CUDA. * Added ability to IGNORE_ a test Disabled rw-texture-simple and texture-get-dimensions * Disable tests that require CUDA SM7.0 Use DISABLE_ prefix to disable tests. * Disable signalUnexpectedError doing printf. 25 March 2020, 18:08:21 UTC
889132e Parser changes to improve handling of static method calls (#1290) Static Method Calls ------------------- The main fix here is for parsing of calls to static methods. Given a type like: struct S { void doThing(); } the parser currently gets tripped up on a statement like: S::doThing(); The problem here is that the `Parser::ParseStatement` routine was using the first token of lookahead to decide what to do, and in the case where it saw a type name it assumed that must mean the statement would be a declaration. It turns out that `Parser::ParseStatement` already had a more intelligent bit of disambiguation later on when handling the general case of an identifier (for which it couldn't determine the type-vs-value status at parse time), and simply commenting out the special-case handling of a type name and relying on the more general identifier case fixes the issue. That catch-all case still has some issues of its own, and this change expands on the comments to make some of those issues clear so we can try to address them later. Empty Declarators ----------------- One reason why the static method call problem was hard to discover was that it was masked by the parser allowing for empty declarators. That is, given input like: S::doThing(); This can be parsed as a variable declaration with a parenthesized empty declarator `()`. Practically, there is no reason to support empty declarators anywhere except on parameters, and allowing them in other contexts could make parser errors harder to understand. This change makes the choice of whether or not empty declarators are allowed something that can be decided at each point where we parse a declarator, and makes it so that only parsing of parameters opts in to allowing them. By disabling support for empty declarators in contexts where they don't make sense, we make code like the above a parse error when it appears at global scope, rather than a weird semantic error. 
A more complete future version of this change might *also* make support for parenthesized declarators an optional feature, or remove that support entirely. Slang doesn't actually support pointers (yet) so there is no real reason to allow parenthesized declarators right now. One note for future generations is that using an empty declarator on a parameter of array type can actually create an ambiguity. If the user writes: void f(int[2][3]); did they mean for it to be interpreted as: void f(int a[2][3]); or as: void f(int[2][3] a); or even as: void f(int[2] a[3]); The first case there yields a different type for `a` than the other two, but is also what we pretty much *have* to support for backwards compatibility with HLSL. Requiring all function declarations to include parameter names would eliminate this potentially confusing case. Layout Modifiers ---------------- One of the above two syntax changes led to a regression in the output for a diagnostic test for `layout` modifiers (which are a deprecated but still functional feature from back when `slangc` supported GLSL input). The original output of the test case seemed odd, and when I looked at the parsing logic I saw that an early-exit error case was leading to spurious error messages because it failed to consume all the tokens inside the `layout(...)`. Fixing the logic to not use an early-exit (and instead rely on the built-in recovery behavior of `Parser`) produced more desirable diagnostic output. I changed the input file to put the `binding` and `set` specifiers on different lines so that the error output could show that the compiler properly tags both of the syntax errors. 24 March 2020, 22:24:44 UTC
e71e75a Fix some bad behavior around static methods (#1289) These are steps towards a fix for the problem of not being able to call a static method as follows: SomeType::someMethod(); One problem in the above is that the parser gets confused and parses an (anonymous!) function declaration. This change doesn't address that problem, but *does* fix the problem that when checking fails to coerce `SomeType::someMethod` into a type it was triggering an unimplemented-feature exception rather than a real error message. Another problem was that if the above is re-written to try to avoid the parser bug: (SomeType::someMethod)(); we end up with a call where the base expression (the callee) is a `ParenExpr` and the code for handling calls wasn't expecting that. Instead, it sent the overload resolution logic into an unimplemented case that was bailing by throwing an arbitrary C++ exception instead of emitting a diagnostic. This latter issue was fixed in two ways. First, the code path that failed to emit a diagnostic now emits a reasonable one for the unimplemented feature (this still ends up being a fatal compiler error). Second, we properly handle the case of trying to call a `ParenExpr` by unwrapping it and using the base expression instead, so that `(<func>)(<args>)` is always treated the same as `<func>(<args>)`. 24 March 2020, 21:04:30 UTC
7b4e0e1 First pass at a Target Compatibility document (#1287) * WIP compatibility docs. * Test transpose in matrix-float. * Small improvement to CUDA docs. * Added some discussion around tessellation. * Small improvements to target-compatibility.md * Improve compatibility documentation. Co-authored-by: Tim Foley <tfoleyNV@users.noreply.github.com> 23 March 2020, 21:55:49 UTC
05c9a5c CPU Texture GetDimensions support (#1283) * Added CPU support for GetDimensions on C++/CPU target. Added texture-get-dimension.slang test * Fix some typos. * Update CUDA docs. * Fix output of GetDimensions on glsl when has an array. Disabled VK - because VK renderer doesn't support createTextureView * Fix typo. * Fix typo. * Fix bad-operator-call diagnostics output. 21 March 2020, 10:38:47 UTC
884a9bc Handling of switch with empty body (#1284) * Added handling for empty switch body. Added test for empty switch. * Fix testing for case in switch. 20 March 2020, 20:59:15 UTC
cc4c5b8 Remove RWTextureCube/Array from stdlib (#1285) * Remove RWTextureCube and RWTextureCubeArray - as not supported. Put multisample code in a block to make nesting more readable. * Replace a tab. * Update bad-operator-call.slang.expected 20 March 2020, 18:58:22 UTC
3435a50 Diagnostic compare special case for stdlib line number changes (#1286) * Added diagnostic compare which ignores line number changes in std library. * Change stdlib just to make sure it's all working. 20 March 2020, 17:55:08 UTC
a8a23a6 First pass at a language reference (#1279) * First pass at a language reference We already had the `language-guide.md` document under `docs/`, but this is an attempt to introduce a more full-featured reference to the Slang language and its features. Right now it is mostly focused on the syntax and what the language allows to be declared, and it is a little light on semantic details throughout (mostly relying on familiarity with C to explain the things that are left unsaid). Even so, this hopefully provides a starting point to continue adding more detail. * typos and other small fixes 18 March 2020, 18:20:20 UTC
21bbd3c Added --cuda-sdk-path option to premake5.lua. (#1278) 17 March 2020, 18:00:40 UTC
315888e Improve CUDA Wave intrinsics documentation. (#1276) * Improve CUDA Wave intrinsics documentation. Remove inappropriate comment. * Small CUDA doc improvement. 17 March 2020, 13:59:25 UTC
76b9ff6 CUDA support of MultiPrefix Wave intrinsics. (#1275) Support for cs_6_5 and cs_6_4 in profile Added wave-multi-prefix.slang test 16 March 2020, 19:01:21 UTC
256a20a Define compound intrinsic ops in the standard library (#1273) * Define compound intrinsic ops in the standard library The current stdlib code has a notion of "compound" intrinsic ops, which use the `__intrinsic_op` modifier but don't actually map to a single IR instruction. Instead, most* of these map to multiple IR instructions using hard-coded logic in `slang-ir-lower.cpp`. (* One special case is `kCompoundOp_Pos` that is used for unary `operator+` and that maps to *zero* IR instructions) All of the opcodes that used to use the `kCompoundOp_` enumeration values now have definitions directly in the stdlib and use the new `[__unsafeForceInlineEarly]` attribute to ensure that they get inlined into their use sites so that the output code is as close as possible to the original. For the most part, generating the stdlib definitions for the compound ops is straightforward, but here are some notes: * The unary `operator+` I just defined directly in Slang code, since it doesn't share much structure with other cases * The unary increment/decrement ops are generated as a cross product of increment/decrement and prefix/postfix. The logic is a bit messy but given that we have scalar, vector, and matrix versions to deal with it still saves code overall * Because all the compound/assignment cases got moved out, the existing code for generating unary/binary ops can be simplified a bit * All the no-op bit-cast operations like `asfloat(float)` are now inline identity functions * A few other small cleanups are made by not having to worry about the compound ops (which used to be called "pseudo ops") sometimes being encoded into the same type of value as a real IR opcode. The one big detail here is a fix for how IR lowering works for `let` declarations: they were previously being `materialize()`d which only guarantees that they've been placed in a contiguous and addressable location, but doesn't actually convert them to an r-value. 
As a result a `let` declaration could accidentally capture a mutable location by reference, which is definitely *not* what we wanted it to do. Fixing this was needed to make the new postfix `++` definition work (several existing tests end up covering this). One important forward-looking note: One subtle (but significant) choice in this change is that we actually reduce the number of declarations in the stdlib, because instead of having the compound operators include both per-type and generic overloads (just listing scalar cases for now): float operator+=(in out float left, float right) { ... } int operator+=(in out int left, int right) { ... } ... T operator+= <T:__BuiltinBlahBlah>(in out T left, T right) { ... } We now have *only* the single generic version: T operator+= <T:__BuiltinBlahBlah>(in out T left, T right) { ... } In running our current tests, this change didn't lead to any regressions (perhaps surprisingly). Given that we were able to reduce the number of overloads for `operator+=` by a factor of N (where N is the number of built-in types), it seems worth considering whether we could also reduce the number of overloads of `operator+` by the same factor by only having generic rather than per-type versions. One concern that this forward-looking question raises is whether the quality of diagnostic messages around bad calls to the operators might suffer when there are only generic overloads instead of per-type overloads. In order to feel out this problem I added a test case that includes some bad operator calls both to `+=` (which is now only generic with this change) and `+` (which still has per-type overloads). Overall, I found the quality of the error messages (in terms of the candidates that get listed) isn't perfect for either, but personally I prefer the output in the generic case. 
As part of adding that test, I also added some fixups to how overload resolution messages get printed, to make sure the function name is printed in more cases, and also that the candidates print more consistently. These changes affected the expected output for one other diagnostic test. * fixup: disable bad operator test on non-Windows targets 16 March 2020, 16:03:19 UTC
c1743a5 Vector & Matrix Prefix Sum & Product (#1272) * Implement matrix and vector versions of prefixSum and prefix product. * Comment around how code is organized - where it seems it could be more performant. 12 March 2020, 19:47:44 UTC
69f7d28 Add a basic inlining facility for use in the stdlib (#1271) The main feature visible to the stdlib here is the `[__unsafeForceInlineEarly]` attribute, which can be attached to a function definition and forces calls to that function to be inlined immediately after initial IR lowering. The pass is implemented in `slang-ir-inline.{h,cpp}` and currently only handles the completely trivial case of a function with no control flow that ends with a single `return`. The lack of support for any other cases motivates the `__unsafe` prefix on the attribute. In order to test that the pass works, I modified the "comma operator" in the standard library to be defined directly (rather than relying on special-case handling in IR lowering), and then added a test that uses that operator to make sure it generates code as expected. The compute version of the test confirms that we generate semantically correct code for the operator, while the SPIR-V cross-compilation test confirms that our output matches GLSL where the comma operator has been inlined, rather than turned into a subroutine. Notes for the future: * With this change it should be possible (in principle) to redefine all the compound operators in the stdlib to instead be ordinary functions with the new attribute, removing the need for `slang-compound-intrinsics.h`. * Once the compound intrinsics are defined in the stdlib, it should be easier/possible to start making built-in operators like `+` be ordinary functions from the standpoint of the IR * The attribute and pass here could be extended to include an alternative inlining attribute that happens later in compilation (after linking) but otherwise works the same. This could in theory be used for functions where we don't want to inline the definition into generated IR, but still want to inline things before generating final HLSL/GLSL/whatever. 
* The inlining pass itself could be generalized to work for less trivial functions pretty easily; for the most part it would just mean "splitting" the block with the call site and then inserting clones of the blocks in the callee. Any `return` instructions in the clone would become unconditional branches (with arguments) to the block after the call (which would get a parameter to represent the returned value). * The "hard" part for such an inlining pass would be handling cases where the control flow that results from inlining can't be handled by our later restructuring passes. The long-term fix there is to implement something like the "relooper" algorithm to restructure control flow as required for specific targets. 11 March 2020, 19:53:09 UTC
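The trivial case this pass handles, a callee whose body is effectively a single returned expression with no control flow, can be modeled on a toy expression IR. This is an illustrative Python sketch only; the real pass in `slang-ir-inline.{h,cpp}` operates on Slang IR blocks, and the tuple encoding here is invented for the example.

```python
def substitute(expr, args):
    """Replace ("param", i) nodes with the corresponding argument."""
    kind = expr[0]
    if kind == "param":
        return args[expr[1]]
    if kind == "const":
        return expr
    return (kind,) + tuple(substitute(e, args) for e in expr[1:])

def inline_trivial_calls(expr, funcs, force_inline):
    """Inline calls to functions in 'force_inline' whose body is a
    single returned expression (the trivial case the pass supports).
    'funcs' maps a name to its body expression."""
    kind = expr[0]
    if kind in ("param", "const"):
        return expr
    if kind == "call":
        name = expr[1]
        args = [inline_trivial_calls(a, funcs, force_inline) for a in expr[2:]]
        if name in force_inline:
            body = substitute(funcs[name], args)
            return inline_trivial_calls(body, funcs, force_inline)
        return ("call", name) + tuple(args)
    return (kind,) + tuple(inline_trivial_calls(e, funcs, force_inline)
                           for e in expr[1:])

# The comma operator returns its second operand; inlining a call to it
# leaves just that operand behind, with no subroutine call in the output.
funcs = {"comma": ("param", 1)}
result = inline_trivial_calls(
    ("call", "comma", ("const", 1), ("const", 2)), funcs, {"comma"})
```

Without `"comma"` in the force-inline set, the call node is left intact, which mirrors how only functions carrying the attribute are inlined early.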
935768c Clean-ups related to expanded standard library coverage (#1269) This change continues the work already started in moving the definitions of many built-in functions to the standard library. The main focus in this change was reducing the number of operations that had to be special-cased on the CPU and CUDA targets by making sure that the scalar cases of built-in functions map to the proper names in the prelude (e.g., `F32_sin()`) via the ordinary `__target_intrinsic` mechanism. In some cases this cleanup meant that special-case logic that was constructing definitions for those functions using C++ code could be scrapped. Additional changes made along the way: * A few scalar functions that were missing in the CPU/CUDA preludes got added: `round`, hyperbolic trigonometric functions, `frexp`, `modf`, and `fma` * The floating-point `min()` and `max()` definitions in the preludes were changed to use intrinsic operations on the target (which are likely to follow IEEE semantics, while our definitions did not) * For the CUDA target, many of the functions had their names translated during code emit from, e.g., `sin` to `sinf`. This change makes the CUDA target more closely match the C++/CPU target in using names like `F32_sin` consistently. * For the CUDA target, a few additional functions have intrinsics that don't exist (portably) on CPU: `sincos()` and `rsqrt()`. * For the Slang stdlib definitions to work, a new `$P` replacement was defined for `__target_intrinsic` that expands to a type based on the first operand of the function (e.g., `F32` for `float`). * I removed the dedicated opcodes for matrix-matrix, matrix-vector, and vector-matrix multiplication, and instead turned them into ordinary functions with definitions and `__target_intrinsic` modifiers to map them appropriately for HLSL and GLSL. This is realistically how we would have implemented these if we'd had `__target_intrinsic` from the start. 
Notes about possible follow-on work: * The `ldexp` function is still left in the Slang stdlib because it has to account for a floating-point exponent and the `math.h` version only handles integers for the exponent. It is possible that we can/should define another overload for `ldexp` (and `frexp`) that uses an integer for exponent, and then have that one be a built-in on CPU/CUDA, with the HLSL `frexp` being defined in the stdlib to delegate to the correct `frexp` for those targets. * The `firstbithigh` and related functions are missing for our CPU and CUDA targets, and will need to be added. It is worth noting that `firstbithigh` apparently has some very odd functionality around signed integer arguments (which are supported, despite MSDN being unclear on that point). General cleanup will be required for those functions. * Making the various matrix and vector products no longer be intrinsic ops might affect how we emit code for them as sub-expressions (both whether we fold them into use sites and how we parenthesize them). This doesn't seem to affect any of our existing tests, but we could consider marking these functions with `[__readNone]` to ensure they can be folded, and then also adding whatever modifier(s) we might invent to control precedence and parentheses insertion during emit. 11 March 2020, 15:50:38 UTC
b380b1a Wave Prefix Product (#1270) * Fix some typos. * Add wave-prefix-sum.slang test * First pass at implementing prefixSum. * Small improvements to prefixSum CUDA. * Small improvement to prefix sum. * Enable prefix sum in stdlib. * Wave prefix product without using a divide. * Split out SM6.5 Wave intrinsics. Template mechanism for doing prefix calculations. 10 March 2020, 20:43:41 UTC
a10d9cd WIP Prefix Sum for CUDA (#1268) * Fix some typos. * Add wave-prefix-sum.slang test * First pass at implementing prefixSum. * Small improvements to prefixSum CUDA. * Small improvement to prefix sum. * Enable prefix sum in stdlib. 10 March 2020, 16:31:25 UTC
721d2e8 CUDA Wave intrinsic vector/matrix support (#1267) * Distinguish between __activeMask and _getConvergedMask(). Remove need to pass in mask to CUDA wave impls. * Add support for vector/matrix Wave intrinsics for CUDA. Fix issue with CUDA parsing of errors. * Fix typo. Make WaveReadLaneAt and WaveReadFirst work for vector/matrix types. * Fix typo. * Added equality wave intrinsic test. * Fix some typos * Added wave-lane-at.slang 10 March 2020, 00:03:42 UTC
7e0aa93 CUDA support for vector/matrix Wave intrinsics (#1266) * Distinguish between __activeMask and _getConvergedMask(). Remove need to pass in mask to CUDA wave impls. * Add support for vector/matrix Wave intrinsics for CUDA. Fix issue with CUDA parsing of errors. * Fix typo. 09 March 2020, 16:40:04 UTC
b1317cd Yet more definitions moved into the stdlib (#1263) The only big catch that I ran into with this batch was that I found the `float.getPi()` function was being emitted to the output GLSL even when that function wasn't being used. This seems to have been a latent problem in the earlier PR, but was only surfaced in the tests once a Slang->GLSL test started using another intrinsic that led to the `float : __BuiltinFloatingPointType` witness table being live in the IR. The fix for the gotcha here was to add a late IR pass that basically empties out all witness tables in the IR, so that functions that are only referenced by witness tables can then be removed as dead code. This pass is something we should *not* apply if/when we start supporting real dynamic dispatch through witness tables, but that is a problem to be solved on another day. The remaining tricky pieces of this change were: * Needed to remember to mark functions as target intrinsics on HLSL and/or GLSL as appropriate (hopefully I caught all the cases) so they don't get emitted as source there. * The `msad4` function in HLSL is very poorly documented, so filling in its definition was tricky. I made my best effort based on how it is described on MSDN, but it is likely that if anybody wants to rely on this function they will need us to vet our results with some tests. 09 March 2020, 16:02:36 UTC
4760829 Use templates to implement Wave Intrinsic reduce on CUDA (#1265) * Update slang-binaries to version with SPIR-V version support. * Support vec and matrix Wave intrinsics on vk. Added wave-vector.slang test Add wave-diverge.slang test Add support for more wave intrinsics to vk. * Test out Wave intrinsic support for matrices. * Remove matrix glsl intrinsics -> not available. Fix some typos. * Remove generated Slang headers. * Use template to generate Wave reduce functions. 06 March 2020, 23:26:17 UTC
cdee134 Remove generated header files (#1264) * Update slang-binaries to version with SPIR-V version support. * Support vec and matrix Wave intrinsics on vk. Added wave-vector.slang test Add wave-diverge.slang test Add support for more wave intrinsics to vk. * Test out Wave intrinsic support for matrices. * Remove matrix glsl intrinsics -> not available. Fix some typos. * Remove generated Slang headers. 06 March 2020, 22:15:58 UTC
b94a12b Wave intrinsics for Vector and Matrix types (#1262) * Update slang-binaries to version with SPIR-V version support. * Support vec and matrix Wave intrinsics on vk. Added wave-vector.slang test Add wave-diverge.slang test Add support for more wave intrinsics to vk. * Test out Wave intrinsic support for matrices. * Remove matrix glsl intrinsics -> not available. Fix some typos. 06 March 2020, 20:46:35 UTC
18be2d8 Expand range of definitions that can be moved into stdlib (#1259) The actual definitions that got moved into the stdlib here are pretty few: * `clip()` * `cross()` * `ddx()`, `ddy()` etc. * `degrees()` * `distance()` * `dot()` * `faceforward()` The meat of the change is infrastructure changes required to support these new declarations * Generic versions of the standard operators (e.g., `operator+`) were added that are generic for a type `T` that implements the matching `__Builtin`-prefixed interface. An open question is whether we can now drop the non-generic versions in favor of just having these generic operators. * A `__BuiltinLogicalType` interface was added to capture the commonality between integers and `bool` * `__BuiltinArithmeticType` was extended so that implementations must support initialization from an `int` * `__BuiltinFloatingPointType` was extended to require an accessor that returns the value of pi for the given type, and the concrete floating-point types were extended to provide definitions of this value. * It turns out that our logic for checking if two functions have the same signature (and should thus count as redeclarations/redefinitions) wasn't taking generic constraints into account at all. That was fixed with a stopgap solution that checks if the generic constraints are pairwise identical, but I didn't implement the more "correct" fix that would require canonicalizing the constraints. * When doing overload resolution and considering potential callees, logic was added so that a non-generic candidate should always be selected over a generic one (generally the Right Thing to do), and also so that a generic candidate with fewer parameters will be selected over one with more (an approximation of the much more complicated rule we'd ideally have). 
* The formatting of declarations/overloads for "ambiguous overload" errors was fleshed out a bit to include more context (the "kind" of declaration where appropriate, the return type for function declarations) and to properly space things when outputting specialization of operator overloads that end with `<` (so that we print `func < <int>(int, int)` instead of just `func <<int,int>(int,int)`). * The core lookup routines were heavily refactored and reorganized to try to make them bottleneck more effectively so that all paths handle all the nuances of inheritance, extensions, etc. * Because of the refactoring to lookup logic, the semantic checking logic related to checking if a type conforms to an interface was updated to be driven based on the `Type` that is supposed to be conforming, rather than a `DeclRef` to the type's declaration. This allows it to use the type-based lookup entry point and eliminates one special-case entry point for lookup. In addition to the various core changes, this change also refactors some of the existing stdlib code to favor writing more things in actual Slang syntax, and less in C++ code that uses `StringBuilder` to construct the Slang syntax. There is a lot more that could be done along those lines, but even pushing this far is showing that the current approach that `slang-generate` takes for how to separate meta-level C++ and Slang code isn't really ideal, so a revamp of the generator code is probably needed before I continue pushing. One surprising casualty of the refactoring of lookup is that we no longer have the `lookedUpDecls` field in `LookupResult`. That field probably didn't belong there anyway, but the role it served was important. The idea of `lookedUpDecls` was to avoid looking up in the same interface more than once in cases where a type might have a "diamond" inheritance pattern. 
Removing that field doesn't appear to affect correctness of any of our existing tests, but by adding a specific test for "diamond" inheritance I could see that the refactoring introduced a regression and made looking up a member inherited along multiple paths ambiguous. Rather than add back `lookedUpDecls` I went for a simpler (but arguably even hackier) solution where when ranking candidates from a `LookupResult` we check for identical `DeclRef`s and arbitrarily favor one over the other. One complication that arises here is that when comparing `DeclRef`s inherited along different paths they might have a `ThisTypeSubstitution` for the same type, but with different subtype witnesses (because different inheritance paths could lead to different transitive subtype witnesses: e.g., `A : B : D` and `A : C : D`). 06 March 2020, 19:37:36 UTC
0b91ea7 Update slang-binaries to version with SPIR-V version support. (#1261) 06 March 2020, 16:41:35 UTC
3c7c38d Safer binary compatibility between 1.0 and 1.1 versions, without using struct embedding. (#1257) 05 March 2020, 17:14:59 UTC
6684d32 Feature/glslang spirv version (#1256)
* WIP add support for __spirv_version.
* Added IRRequireSPIRVVersionDecoration
* SPIR-V version passed to glslang. Enable VK wave tests. Split ExtensionTracker out, so it can be cast and used externally to emit. Added SourceResult.
* Fix warning on Clang.
* Missing hlsl.meta.h
* Refactor communication/parsing of __spirv_version with glslang.
* Fix some debug typos. Be more precise in substring handling.
* Make glslang forwards and backwards binary compatible.
* Small comment improvements.
* Added slang-spirv-target-info.h/cpp
* Fix for major/minor on gcc.
* Another fix for gcc/clang.
* VS projects include slang-spirv-target-info.h/cpp
* Removed SPIRVTargetInfo. Added SemanticVersion. Don't bother with passing a target to glslang; that should be separate from 'version'.
* Renamed slang-emit-glsl-extension-tracker.cpp/.h -> slang-glsl-extension-tracker.cpp/.h. Fixed some VS project issues.
* Fix a comment.
* Added slang-semantic-version.cpp/.h
* Added slang-glsl-extension-tracker.cpp/.h
* Added split that can check whether the input has all been parsed.
* Fix problem on x86 win build. 05 March 2020, 15:59:54 UTC
5951d2a __spirv_version Decoration (#1255) * WIP add support for __spirv_version. * Added IRRequireSPIRVVersionDecoration 03 March 2020, 23:41:07 UTC
0f1f4a4 Move definitions of simple vector/matrix builtins to stdlib. (#1247)
Some of the functions declared in the Slang standard library are built in on some targets (almost always the case for HLSL) but aren't available on others (often the case for GLSL, CUDA, and CPU). To date, the CUDA and CPU targets have worked around this by synthesizing definitions of the missing functions on the fly during output code generation, at the cost of some amount of complexity in the emit pass.
This change adds definitions inside the stdlib itself for a large number of built-in HLSL functions that act element-wise over both vectors and matrices (e.g., `sin()`, `sqrt()`, etc.), and changes the CPU/CUDA codegen path to *not* synthesize C++ code for those functions, instead relying on code generated from the Slang definitions.
The element-wise vector/matrix function bodies are defined using macros in the stdlib, so that we can more easily swap out the definitions en masse if we find an implementation strategy we like better. That could involve defining special-case syntax just for vector/matrix "map" operations that lower directly to the IR and theoretically generate cleaner code after specialization is complete.
As a byproduct of this change, the matrix versions of these functions should in principle now be available to GLSL (GLSL only defines vector versions of functions like `sin()`, and leaves out the matrix ones). No testing has been done to confirm this fix.
In some cases builtins were declared multiple times to split out the HLSL and GLSL cases; this change unifies those as much as possible into single declarations to keep the stdlib as small as possible.
Two functions -- `sincos()` and `saturate()` -- were simple enough that their full definitions could be given in the stdlib, so that even the scalar cases don't need to be synthesized; the corresponding enumerants were removed from `slang-hlsl-intrinsic-set.h`. In the case of `saturate()`, the pre-existing definition used for GLSL codegen could have been used for CPU/CUDA all along.
In some cases, functions that can and should be defined in the future have had commented-out bodies added as an outline of what to insert later. Most of these functions cannot be implemented directly in the stdlib today because basic operations like `operator+` are currently not defined for `T : __BuiltinArithmeticType`, etc. Adding such declarations should be straightforward, but brings a risk of unexpected breakage, so it seemed best to leave that for a future change.
This change does not try to address making vector or matrix versions of builtin functions that map to single `IROp`s, because the existing mechanisms for target-based specialization, etc., do not apply in those cases. In the future we will either have to make those operations into ordinary functions (eliminating many `IROp`s) so that stdlib definitions can apply, or add an explicit IR pass to legalize vector/matrix ops for targets that don't support them natively. The right path here is not yet clear, so this change doesn't wade into it.
This change also does not touch the `Wave*` functions added in Shader Model 6, despite many of them having vector/matrix versions that could benefit from the same default mapping. GLSL/Vulkan translation for these functions is expected soon, and it probably makes sense to know which cases Vulkan supports directly before adding hand-written definitions.
Because of the limitations on what could be ported into the stdlib, it is not yet possible to remove any of the infrastructure for synthesizing builtin function definitions in the CPU and CUDA back-ends. 03 March 2020, 19:49:40 UTC