Visual Studio has an option to search all files in a solution, so I searched there for "default:" to find any missing break statements.
I've left out default cases that return or throw something, even if indirectly via ThrowInvalidType: UNREACHABLE ultimately leads to a throw, and the R_THROW macro leads to a return.
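For illustration only (this is not code from the repository), the kind of default case this search was hunting for is one that neither returns nor throws and therefore needs an explicit break:

```cpp
#include <cstdio>

enum class Mode { A, B };

// Hypothetical example: a default case that neither returns nor throws
// should end with an explicit break.
void Handle(Mode mode) {
    switch (mode) {
    case Mode::A:
        std::puts("mode A");
        break;
    default:
        std::puts("unknown mode");
        break; // previously easy to leave out
    }
}

int main() {
    Handle(Mode::B);
    return 0;
}
```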
Given the issues with GPU-accelerated ASTC decoding on NVIDIA's latest drivers, parallelize ASTC decoding on the CPU. This uses half of the available threads in the system for ASTC decoding.
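A minimal sketch of the scheme, assuming a DecodeBlockRange helper that decodes the ASTC blocks in [begin, end); all names here are illustrative, not the actual yuzu implementation:

```cpp
#include <algorithm>
#include <cstddef>
#include <thread>
#include <vector>

// Placeholder for the per-range ASTC block decode loop.
void DecodeBlockRange(std::size_t begin, std::size_t end) {
    (void)begin;
    (void)end;
}

void ParallelDecode(std::size_t num_blocks) {
    // Use half of the available hardware threads, but at least one.
    const std::size_t num_threads =
        std::max<std::size_t>(1, std::thread::hardware_concurrency() / 2);
    const std::size_t per_thread = (num_blocks + num_threads - 1) / num_threads;

    std::vector<std::thread> workers;
    for (std::size_t i = 0; i < num_threads; ++i) {
        const std::size_t begin = i * per_thread;
        const std::size_t end = std::min(begin + per_thread, num_blocks);
        if (begin >= end) {
            break;
        }
        workers.emplace_back(DecodeBlockRange, begin, end);
    }
    for (auto& worker : workers) {
        worker.join();
    }
}
```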
This formats all copyright comments according to SPDX formatting guidelines.
Additionally, this resolves the remaining GPL-2.0-only licensed files by relicensing them to GPL-2.0-or-later.
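For reference, the SPDX style places a machine-readable copyright line and license identifier at the top of each file; the copyright holder and year below are placeholders, not necessarily what the commit uses:

```cpp
// SPDX-FileCopyrightText: Copyright 2021 yuzu Emulator Project
// SPDX-License-Identifier: GPL-2.0-or-later
```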
This buffer was a list of EncodingData structures sorted by their bit length, with some duplication carried over from the CPU decoder implementation.
We can take advantage of its sorted property to optimize its usage in the shader.
Thanks to wwylele for the optimization idea.
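As an illustrative sketch only: with the table sorted by bit length, a lookup can binary-search instead of scanning linearly. The field names below are assumed, not copied from the real EncodingData layout, and the shader-side version would follow the same shape.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

struct EncodingData {
    std::uint32_t num_bits; // sort key: bit length of the encoding
    std::uint32_t encoding;
};

// Binary search for the first entry with the requested bit length.
const EncodingData* FindFirstWithBits(const std::vector<EncodingData>& table,
                                      std::uint32_t num_bits) {
    const auto it = std::lower_bound(
        table.begin(), table.end(), num_bits,
        [](const EncodingData& e, std::uint32_t bits) { return e.num_bits < bits; });
    if (it == table.end() || it->num_bits != num_bits) {
        return nullptr;
    }
    return &*it;
}
```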
These changes should help reduce the crashes and driver panics that may occur due to synchronization issues between shader completion and later access of the decoded texture.
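A hedged sketch of the synchronization idea, not the exact yuzu code (the glad include and function names are assumptions): after dispatching the decode shader, a memory barrier makes the compute shader's image stores visible to later texture reads.

```cpp
#include <glad/glad.h>

// Dispatch the ASTC decode shader, then synchronize before the decoded
// texture is sampled or copied.
void DecodeAndSynchronize(GLuint program, GLuint num_groups_x, GLuint num_groups_y) {
    glUseProgram(program);
    glDispatchCompute(num_groups_x, num_groups_y, 1);
    // Make image stores visible to subsequent texture fetches and image accesses.
    glMemoryBarrier(GL_TEXTURE_FETCH_BARRIER_BIT | GL_SHADER_IMAGE_ACCESS_BARRIER_BIT);
}
```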
Per the spec, L1 is clamped to the value 0xff if it is greater than 0xff. An oversight caused us to take the maximum of L1 and 0xff, rather than the minimum.
Huge thanks to wwylele for finding this.
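In code terms, the fix amounts to the following (a minimal sketch; the function and variable names are illustrative):

```cpp
#include <algorithm>
#include <cstdint>

std::uint32_t ClampL1(std::uint32_t l1) {
    // Buggy version: std::max(l1, 0xffu) never reduces values above 0xff.
    // Correct version per the spec: clamp to 0xff by taking the minimum.
    return std::min<std::uint32_t>(l1, 0xff);
}
```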
Co-Authored-By: Weiyi Wang <wwylele@gmail.com>
Users may want to fall back to the CPU ASTC texture decoder due to hangs and crashes that may be caused by keeping the GPU under compute-heavy loads for extended periods of time. This is especially the case in games such as Astral Chain, which make extensive use of ASTC textures.
ASTC texture decoding is currently handled by a CPU decoder for GPUs without native ASTC decoding support (most desktop GPUs). This causes noticeable performance degradation in titles that use the format extensively.
This commit adds support for accelerating ASTC decoding with an OpenGL compute shader on GPUs without native support.
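As a rough sketch with assumed names: an ASTC image is decoded roughly one block per shader work group, so the dispatch size follows from the texture size and the block footprint (e.g. 8x8 texels per block).

```cpp
#include <cstdint>

struct DispatchSize {
    std::uint32_t x, y;
};

// Compute how many work groups to dispatch for a given texture; round up
// so partially covered edge blocks are still decoded.
DispatchSize WorkGroupsForTexture(std::uint32_t width, std::uint32_t height,
                                  std::uint32_t block_w, std::uint32_t block_h) {
    return DispatchSize{(width + block_w - 1) / block_w,
                        (height + block_h - 1) / block_h};
}
```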