1. 12 Feb, 2021 2 commits
    • Support querying for Vulkan11 properties · 539ef8e9
      Sean Risser authored
      Vulkan 1.2 added VkPhysicalDeviceVulkan11Properties, which lets users
      query several device properties together in a single call. We can use
      templated static functions to ensure these properties are only ever
      set in one place, similar to what we do for device features.
      
      The only struct this doesn't work for is
      VkPhysicalDeviceSubgroupProperties, because the names in that struct
      and the Vulkan11 struct differ. So the Vulkan11 struct manually copies
      the data from the getProperties(*) function for the subgroup properties.
      
      Bug: b/176248217
      Change-Id: I30e9e05ecbdb9a40fc3a59df6bd9b8ab9022c9fc
      Reviewed-on: https://swiftshader-review.googlesource.com/c/SwiftShader/+/51388
      Tested-by: Sean Risser <srisser@google.com>
      Reviewed-by: Alexis Hétu <sugoi@google.com>
      Reviewed-by: Nicolas Capens <nicolascapens@google.com>
      Commit-Queue: Sean Risser <srisser@google.com>
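The "set in one place via a templated static function" pattern described above can be sketched as follows. This is a minimal, hypothetical stand-in (struct and function names simplified, property values illustrative), not SwiftShader's actual code:

```cpp
#include <cassert>
#include <cstdint>

// Simplified stand-ins for the Vulkan structs involved; the real structs
// live in vulkan_core.h and SwiftShader's PhysicalDevice implementation.
struct PhysicalDeviceMultiviewProperties
{
	uint32_t maxMultiviewViewCount;
	uint32_t maxMultiviewInstanceIndex;
};

struct PhysicalDeviceVulkan11Properties
{
	uint32_t maxMultiviewViewCount;
	uint32_t maxMultiviewInstanceIndex;
	// ... the Vulkan11 struct aggregates several per-feature structs
};

// A single templated static function writes the values, so the individual
// struct and the aggregated Vulkan11 struct can never diverge.
template<typename T>
static void getMultiviewProperties(T *properties)
{
	properties->maxMultiviewViewCount = 6;          // illustrative values
	properties->maxMultiviewInstanceIndex = 134217727;
}
```

Because both structs name their members identically, the same template instantiates for either one; this is exactly why the differently-named VkPhysicalDeviceSubgroupProperties struct needs the manual copy mentioned above.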
    • Implement GLSLstd450Interpolate* functions · 0bcb71f9
      Alexis Hetu authored
      This CL adds an implementation for:
      - GLSLstd450InterpolateAtCentroid
      - GLSLstd450InterpolateAtSample
      - GLSLstd450InterpolateAtOffset
      
      These functions essentially replicate the behavior of
      regular interpolants in the fragment shader processing.
      
      A specific extra difficulty encountered here is detecting
      which kind of pointer offset we are dealing with. Pointer
      offsets might be caused by [] operators being used on a
      vector or on an array (possibly an array of vectors). This
      distinction is important as it impacts what interpolant
      offsets point to. Note that there's missing coverage in
      dEQP-VK for interpolant arrays and this was caught with
      SwANGLE tests (a dEQP-VK issue will be logged shortly).
      
      Another issue was dealing with dynamic interpolant offsets,
      which was solved by looping over all of them and combining
      all plane equations into one before performing the
      interpolation.
      
      Bug: b/171415086
      Change-Id: Id7c4c931918ba172d00da84655051445b110d3a9
      Reviewed-on: https://swiftshader-review.googlesource.com/c/SwiftShader/+/51737
      Presubmit-Ready: Alexis Hétu <sugoi@google.com>
      Kokoro-Result: kokoro <noreply+kokoro@google.com>
      Tested-by: Alexis Hétu <sugoi@google.com>
      Commit-Queue: Alexis Hétu <sugoi@google.com>
      Reviewed-by: Nicolas Capens <nicolascapens@google.com>
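The "combine all plane equations into one" approach for dynamic interpolant offsets can be illustrated with a scalar sketch. The struct and function names here are hypothetical and the per-lane masking is reduced to a scalar select; a real SIMD implementation would use lane masks:

```cpp
#include <cassert>
#include <cstddef>

// A per-interpolant plane equation: value(x, y) = A*x + B*y + C.
struct PlaneEquation
{
	float A, B, C;
};

// Evaluate an interpolant at a fragment coordinate.
static float interpolate(const PlaneEquation &p, float x, float y)
{
	return p.A * x + p.B * y + p.C;
}

// For a dynamic interpolant index, loop over all candidate plane equations
// and blend them into a single one (mask is 1 for the selected plane and 0
// otherwise), then perform the interpolation once on the combined plane.
static PlaneEquation selectPlane(const PlaneEquation *planes, std::size_t count,
                                 std::size_t dynamicIndex)
{
	PlaneEquation combined = {0.0f, 0.0f, 0.0f};
	for(std::size_t i = 0; i < count; i++)
	{
		float mask = (i == dynamicIndex) ? 1.0f : 0.0f;
		combined.A += mask * planes[i].A;
		combined.B += mask * planes[i].B;
		combined.C += mask * planes[i].C;
	}
	return combined;
}
```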
  2. 10 Feb, 2021 6 commits
    • Unify load/store operand accessors · 673a7fe5
      Nicolas Capens authored
      Load and store instructions, as well as intrinsics which access memory,
      can now share the same methods for accessing the memory address and
      data operands.
      
      Note that while this change introduces the potential for non-load/store
      instructions to have their operands accessed through getLoadAddress(),
      getStoreAddress(), or getData(), that risk isn't any greater than using
      the wrong getSrc() index, and would stick out as a mistake much more
      clearly. The advantage this change brings is that we no longer have to
      remember where the address and data operands are stored in sub-vector
      load/store intrinsics. In addition, there are no more overly verbose
      casts, and their cost is eliminated.
      
      Bug: b/179497998
      Change-Id: I0d9208555e00b0d3053f7d3baca241fef2b8cbeb
      Reviewed-on: https://swiftshader-review.googlesource.com/c/SwiftShader/+/52531
      Presubmit-Ready: Nicolas Capens <nicolascapens@google.com>
      Tested-by: Nicolas Capens <nicolascapens@google.com>
      Reviewed-by: Antonio Maiorano <amaiorano@google.com>
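The shared-accessor idea can be sketched as below. This is a heavily simplified model, not Subzero's actual class hierarchy; the operand layout (address at index 0 for loads, data at 0 and address at 1 for stores) is assumed from the commit description:

```cpp
#include <cassert>
#include <utility>
#include <vector>

struct Operand
{
	int id;
};

class Inst
{
public:
	explicit Inst(std::vector<Operand *> operands) : srcs(std::move(operands)) {}

	// Named accessors encode the operand layout once, instead of every
	// caller remembering the right getSrc() index.
	Operand *getLoadAddress() const { return srcs[0]; }
	Operand *getData() const { return srcs[0]; }
	Operand *getStoreAddress() const { return srcs[1]; }

private:
	std::vector<Operand *> srcs;
};
```

As the commit notes, nothing stops these accessors being called on the wrong instruction kind, but that misuse is as visible as a wrong getSrc() index would be.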
    • Discern between load and store addresses · 8d50b556
      Nicolas Capens authored
      There were InstLoad::getSourceAddress() and InstStore::getAddr()
      methods, which weren't clearly or consistently named. This change
      replaces them with getLoadAddress() and getStoreAddress(), respectively.
      
      This will also enable moving these methods to the Inst class to make
      them available for SubVectorLoad and SubVectorStore intrinsics. While
      these methods don't make sense for other instructions, note that
      Inst::getSrc() already provides access to all operands and has to be
      used with knowledge of the operand meaning and layout. So this only
      provides a name for these operands, and incorrect use would stick out
      like a sore thumb.
      
      Bug: b/179497998
      Change-Id: I86b1201b8a1c611682f4f91541bdb49e17ef71a8
      Reviewed-on: https://swiftshader-review.googlesource.com/c/SwiftShader/+/52530
      Presubmit-Ready: Nicolas Capens <nicolascapens@google.com>
      Tested-by: Nicolas Capens <nicolascapens@google.com>
      Kokoro-Result: kokoro <noreply+kokoro@google.com>
      Reviewed-by: Antonio Maiorano <amaiorano@google.com>
    • Rename InstIntrinsicCall to InstIntrinsic · 33a77f7f
      Nicolas Capens authored
      It is no longer derived from InstCall, and doesn't take a Target
      parameter which could be a symbol to a function which implements
      the intrinsic.
      
      Note one can still emit actual Call instructions if a function call
      is needed. Since the previous change which removed the Target parameter
      we can no longer decide between implementing an intrinsic as inline
      instructions or a function call at the Subzero level, but we can still
      do that at the Reactor level which has its own concept of intrinsics.
      
      This change also removes mentions of intrinsics representing function
      calls. It also removes code related to PNaCl-specific LLVM intrinsics,
      including the ability to look up intrinsics by name. The addArg(),
      getArg(), and getNumArgs() methods, adopted from InstCall (but no longer
      inherited from it), are kept for now due to the risk of replacing the
      ones
      for InstCall objects, while the confusion caused by keeping the
      function-related "arg" term is deemed low.
      
      Bug: b/179497998
      Change-Id: I293f039853abff6f5bebda1b714774205bdec846
      Reviewed-on: https://swiftshader-review.googlesource.com/c/SwiftShader/+/52608
      Presubmit-Ready: Nicolas Capens <nicolascapens@google.com>
      Tested-by: Nicolas Capens <nicolascapens@google.com>
      Kokoro-Result: kokoro <noreply+kokoro@google.com>
      Reviewed-by: Antonio Maiorano <amaiorano@google.com>
    • Eliminate the InstIntrinsicCall Target parameter · 99bbb14b
      Nicolas Capens authored
      It is no longer used now that profiling support at the Subzero level
      has been eliminated.
      
      This change adjusts all of the uses of getSrc() on intrinsics to obtain
      the correct operand, but does not yet make simplifications based on
      having them align with load/store instructions.
      
      Bug: b/179497998
      Change-Id: I93705eaa1b7626184f612ab3a9755048004e531f
      Reviewed-on: https://swiftshader-review.googlesource.com/c/SwiftShader/+/52529
      Presubmit-Ready: Nicolas Capens <nicolascapens@google.com>
      Kokoro-Result: kokoro <noreply+kokoro@google.com>
      Tested-by: Nicolas Capens <nicolascapens@google.com>
      Reviewed-by: Antonio Maiorano <amaiorano@google.com>
    • Eliminate Subzero profiling support · d4f27d7a
      Nicolas Capens authored
      We've never used this functionality, and shouldn't have a need for it.
      Profiling information can be collected at the Reactor level or using
      a profiler like VTune.
      
      This functionality was the only thing using the `Target` parameter of
      `InstIntrinsicCall`, which got in the way of aligning the parameters of
      load- and store-like intrinsics with regular `InstLoad` and `InstStore`.
      
      Bug: b/179497998
      Change-Id: I5a0ad5ee8e0101f0879a97a1ea01e3efc5bebbe4
      Reviewed-on: https://swiftshader-review.googlesource.com/c/SwiftShader/+/52528
      Presubmit-Ready: Nicolas Capens <nicolascapens@google.com>
      Kokoro-Result: kokoro <noreply+kokoro@google.com>
      Tested-by: Nicolas Capens <nicolascapens@google.com>
      Reviewed-by: Antonio Maiorano <amaiorano@google.com>
    • Regres: Remove GLES tests from CI test runs · 9677c6d2
      Nicolas Capens authored
      The legacy OpenGL ES implementation is deprecated in favor of SwANGLE,
      and we haven't touched the GLES code in many months. These CI tests
      consume valuable time, and tend to be flaky due to exhausting the
      available X handles. We still have our 'daily' test runs to provide
      detection of regressions, so they can be safely removed from the CI
      test runs.
      
      Note that this change can also be (temporarily) reverted as part of a
      new change which could use CI testing of dEQP-GLES.
      
      Bug: b/153322216
      Change-Id: I52cd2b89c04c95de486d85118edef6460ba82925
      Reviewed-on: https://swiftshader-review.googlesource.com/c/SwiftShader/+/52532
      Presubmit-Ready: Nicolas Capens <nicolascapens@google.com>
      Tested-by: Nicolas Capens <nicolascapens@google.com>
      Reviewed-by: Antonio Maiorano <amaiorano@google.com>
  3. 09 Feb, 2021 4 commits
  4. 08 Feb, 2021 1 commit
    • Make vk-unittests use VulkanWrapper · 9d35d544
      Antonio Maiorano authored
      This change moves the VulkanBenchmark and DrawBenchmark classes to
      VulkanWrapper so that they can be used from other unit tests -- namely,
      vk-unittests. In doing so, it became clear that using these as base
      classes wasn't great for writing googletests, as test fixtures are
      classes themselves, and this resulted in messy multiple inheritance. So
      I modified the two classes to use callback registration instead of
      virtual functions.
      
      Apart from reworking existing tests (e.g. see TriangleBenchmark.cpp), I
      also added a new DrawTests.cpp to vk-unittests with a unit test to make
      sure we don't crash when leaving out "gl_Position", a bug that sugoi@
      fixed in swiftshader-cl/51808. This is a good example of how easy it can
      be to write such unit tests now.
      
      List of changes:
      
      * Moved VulkanBenchmark and DrawBenchmark to VulkanWrapper, and renamed
        them VulkanTester and DrawTester, respectively.
      * ClearImageBenchmark refactored to aggregate a VulkanTester. This is an
        example where using a class is fine as we can still use the testers
        via aggregation.
      * TriangleBenchmark tests refactored to use DrawTester and register
        callbacks.
      * Moved compute tests to a ComputeTests.cpp.
      * Moved the other tests to BasicTests.cpp.
      * Added DrawTests.cpp with new DrawTests.VertexShaderNoPositionOutput
        test.
      * CMake: add VulkanWrapper target for unittests as well as benchmarks.
      * CMake: change FOLDER to better organize the tests and benchmarks for
        VS.
      
      Bug: b/176981107
      Change-Id: Ib1a0b85b3df787d2e39da08930414f9a14954a73
      Reviewed-on: https://swiftshader-review.googlesource.com/c/SwiftShader/+/52348
      Kokoro-Result: kokoro <noreply+kokoro@google.com>
      Reviewed-by: Alexis Hétu <sugoi@google.com>
      Tested-by: Antonio Maiorano <amaiorano@google.com>
      Commit-Queue: Antonio Maiorano <amaiorano@google.com>
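The callback-registration style described above can be sketched as follows. The class, hook, and log strings here are hypothetical illustrations of the pattern, not DrawTester's real interface:

```cpp
#include <cassert>
#include <functional>
#include <string>
#include <utility>

// A test registers std::function hooks on a tester object instead of
// subclassing it and overriding virtual functions, which composes cleanly
// with googletest fixtures (themselves classes).
class DrawTester
{
public:
	using Callback = std::function<void(std::string &)>;

	void onCreateVertexBuffers(Callback cb) { createVertexBuffers = std::move(cb); }

	// A stand-in for the draw loop; appends to a log so the hook order is
	// visible to the caller.
	void run(std::string &log)
	{
		log += "setup;";
		if(createVertexBuffers)
		{
			createVertexBuffers(log);
		}
		log += "draw;";
	}

private:
	Callback createVertexBuffers;
};
```

A googletest fixture can then own a DrawTester by value (aggregation), sidestepping the multiple-inheritance mess the commit describes.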
  5. 05 Feb, 2021 4 commits
    • Fix crash during llvm_shutdown due to init order fiasco · 266614a3
      Antonio Maiorano authored
      Use the constexpr constructor for _MSC_VER >= 1925, otherwise
      ManagedStatic will have a dynamic initializer which, depending on init
      order, results in the Ptr field being overwritten with 0. This
      eventually leads to the same ManagedStatic instance appearing multiple
      times in the StaticList, and to asserts from double destruction during
      llvm_shutdown.
      
      I reported this bug [here](https://bugs.llvm.org/show_bug.cgi?id=49027),
      and learned that this bug had already been fixed in upstream LLVM.
      
      Note that llvm_subzero already has a similar change, though for _MSC_VER
      >= 1920. According to the LLVM comment, the VC++ compiler up until 1925
      may still emit a dynamic initializer for a constexpr constructor, but I
      suppose we never ran into that for Subzero, so I'll leave this as is.
      
      Bug: b/175782868
      Change-Id: Ice3944f67e496aa94f1a7ed7502b49e763d702b4
      Reviewed-on: https://swiftshader-review.googlesource.com/c/SwiftShader/+/52508
      Kokoro-Result: kokoro <noreply+kokoro@google.com>
      Reviewed-by: Alexis Hétu <sugoi@google.com>
      Tested-by: Antonio Maiorano <amaiorano@google.com>
      Commit-Queue: Antonio Maiorano <amaiorano@google.com>
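The reason the constexpr constructor matters can be sketched with a stand-in type (ManagedStaticLike is hypothetical, modeled loosely on llvm::ManagedStaticBase): a namespace-scope object with a constexpr constructor is constant-initialized before any dynamic initialization runs, so no other translation unit's dynamic initializer can observe or overwrite it in a half-initialized state.

```cpp
#include <cassert>

struct ManagedStaticLike
{
	// constexpr constructor => constant initialization, no dynamic
	// initializer that could race with other translation units.
	constexpr ManagedStaticLike() : ptr(nullptr), next(nullptr) {}

	void *ptr;                // mirrors the Ptr field mentioned above
	ManagedStaticLike *next;  // intrusive link into a StaticList
};

// Constant-initialized at compile time. Per the LLVM comment cited above,
// MSVC guarantees no dynamic initializer here only from _MSC_VER >= 1925.
static ManagedStaticLike gManagedStatic;
```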
    • Reactor: fix using -x86-asm-syntax only on x86 compilations · a8da847d
      Nicolas Capens authored
      The LLVM JIT fails loudly when attempting to parse this command line
      option when targeting non-x86 CPUs.
      
      Bug: b/157555596
      Change-Id: Ic5ddccbdbc86c2f03ded5f4004369ece0100c031
      Reviewed-on: https://swiftshader-review.googlesource.com/c/SwiftShader/+/52488
      Presubmit-Ready: Nicolas Capens <nicolascapens@google.com>
      Tested-by: Nicolas Capens <nicolascapens@google.com>
      Reviewed-by: Antonio Maiorano <amaiorano@google.com>
    • Limit Subzero routine stack size to 512 KiB · ff010f9f
      Nicolas Capens authored
      Fuzzing tests generate shaders with large arrays or very high numbers of
      local variables, which can cause stack overflow. We need to limit the
      allowable stack memory usage of generated routines.
      
      Note this change does not yet gracefully deal with routines which exceed
      this limit. They will cause a null pointer dereference instead of a
      stack overflow.
      
      The default stack size limit of 1 MiB at the Subzero level is to ensure
      we catch cases of excessive stack sizes even in the case no explicit
      limit was set. At the Reactor level we reduce it to 512 KiB to prevent
      actual stack overflow for a 1 MiB stack, assuming some earlier calls
      might want to use the stack. Also, our legacy 'ASM' compiler for GLSL
      allocates 4096 'registers' of 4 components for 128-bit SIMD, which
      already requires 256 KiB.
      
      Bug: b/157555596
      Change-Id: I474285eecc786496edffbaef29719ca0cdf03f7d
      Reviewed-on: https://swiftshader-review.googlesource.com/c/SwiftShader/+/52329
      Presubmit-Ready: Nicolas Capens <nicolascapens@google.com>
      Kokoro-Result: kokoro <noreply+kokoro@google.com>
      Reviewed-by: Antonio Maiorano <amaiorano@google.com>
      Tested-by: Nicolas Capens <nicolascapens@google.com>
      Commit-Queue: Nicolas Capens <nicolascapens@google.com>
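The arithmetic behind the 256 KiB figure quoted above (4096 'registers' of 4 components, each component a 128-bit, i.e. 16-byte, SIMD value) works out as follows; the constant names are illustrative:

```cpp
#include <cstddef>

constexpr std::size_t registerCount = 4096;
constexpr std::size_t componentsPerRegister = 4;
constexpr std::size_t bytesPerComponent = 128 / 8;  // 128-bit SIMD
constexpr std::size_t asmRegisterFileBytes =
    registerCount * componentsPerRegister * bytesPerComponent;

// 4096 * 4 * 16 = 262144 bytes = 256 KiB.
static_assert(asmRegisterFileBytes == 256 * 1024,
              "the legacy ASM register file alone needs 256 KiB");

// A 512 KiB routine limit leaves the other half of a default 1 MiB thread
// stack for whatever the calling code has already used.
constexpr std::size_t routineStackLimit = 512 * 1024;
static_assert(routineStackLimit * 2 == 1024 * 1024, "half of a 1 MiB stack");
```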
    • Limit LLVM routine stack size to 512 KiB · 25f0f858
      Nicolas Capens authored
      Fuzzing tests generate shaders with large arrays or very high numbers of
      local variables, which can cause stack overflow. We need to limit the
      allowable stack memory usage of generated routines.
      
      Note this change does not yet gracefully deal with routines which exceed
      this limit. They will cause a null pointer dereference instead of a
      stack overflow.
      
      The 512 KiB stack size limit is chosen to prevent actual stack overflow
      for a 1 MiB stack, assuming some earlier calls might want to use the
      stack. Also, our legacy 'ASM' compiler for GLSL allocates 4096
      'registers' of 4 components for 128-bit SIMD, which already requires
      256 KiB.
      
      Bug: b/157555596
      Change-Id: I25c57420f6d2af323ce98faf515feca0aa834a4a
      Reviewed-on: https://swiftshader-review.googlesource.com/c/SwiftShader/+/51548
      Presubmit-Ready: Nicolas Capens <nicolascapens@google.com>
      Kokoro-Result: kokoro <noreply+kokoro@google.com>
      Reviewed-by: Antonio Maiorano <amaiorano@google.com>
      Tested-by: Nicolas Capens <nicolascapens@google.com>
      Commit-Queue: Nicolas Capens <nicolascapens@google.com>
  6. 03 Feb, 2021 1 commit
  7. 01 Feb, 2021 3 commits
  8. 31 Jan, 2021 1 commit
    • Allow sampling usage when querying linear image format properties. · df5dee64
      Yilong Li authored
      In our current implementation, VkImages with VK_IMAGE_TILING_OPTIMAL
      tiling actually also have a linear texel layout, so they can be sampled
      or used as a blit source. We would therefore like to report sampling
      features for linear images when
      vkGetPhysicalDeviceImageFormatProperties() is called to query image
      format properties.
      
      Exceptions include images with compressed formats and images
      created to be used as cube maps.
      
      This will unblock SwiftShader users like AEMU/FEMU from sampling
      host-visible memory shared between guest processes.
      
      Bug: b/171299814
      Bug: fuchsia:54153
      Bug: fuchsia:68365
      Test: dEQP-VK.*
      
      Change-Id: Id9019fc9d9239fc85d0d2b086d4efd468844d254
      Reviewed-on: https://swiftshader-review.googlesource.com/c/SwiftShader/+/49108
      Kokoro-Result: kokoro <noreply+kokoro@google.com>
      Reviewed-by: Nicolas Capens <nicolascapens@google.com>
      Commit-Queue: Yilong Li <liyl@google.com>
      Tested-by: Yilong Li <liyl@google.com>
  9. 30 Jan, 2021 2 commits
  10. 29 Jan, 2021 2 commits
  11. 28 Jan, 2021 6 commits
  12. 27 Jan, 2021 1 commit
  13. 26 Jan, 2021 5 commits
  14. 25 Jan, 2021 2 commits