[pytorch upstream] [feature request] Support tensor descriptor for general triton kernel #6003

@jianyizh

Description

Inductor now supports tensor descriptors for XPU when config.triton.use_tensor_descriptor = True is set. With this flag enabled, tensor descriptors are used for all Triton kernels where possible (template kernels such as GEMM and flex attention are excluded; they go through a separate code path). Triton should support this feature. It should also help Helion, which tries to use tensor descriptors during tuning.
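A minimal sketch of enabling the flag described above, assuming the config path `torch._inductor.config.triton.use_tensor_descriptor` named in this issue (the compiled function here is illustrative, not from the issue):

```python
import torch
import torch._inductor.config as inductor_config

# Assumption: flag name taken from this issue; availability depends on the
# PyTorch build. When enabled, Inductor lowers eligible loads/stores in
# generated Triton kernels to tensor descriptors.
inductor_config.triton.use_tensor_descriptor = True

@torch.compile
def fused_add_mul(a, b):
    # Illustrative pointwise op; Inductor generates a Triton kernel for it.
    return (a + b) * 2.0
```

This is a config fragment; whether descriptors are actually emitted depends on the backend (here, XPU) and on each kernel meeting the lowering's constraints.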

  1. Functionality: [inductor] Fail to compile when using tensor descriptor #5947
  2. There should be no performance regression compared to tl.load.
  3. Block 2D loads should be supported on tensors whose rank is not 2.
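To make requirement 2 concrete, here is a hedged sketch of the same tile copy written both ways: once with plain tl.load and masks, and once with a tensor descriptor. It assumes a recent Triton exposing tl.make_tensor_descriptor; kernel names and shapes are illustrative, and running it requires a supported GPU/XPU device.

```python
import triton
import triton.language as tl

@triton.jit
def copy_tl_load(src_ptr, dst_ptr, M, N, stride_m,
                 BLOCK_M: tl.constexpr, BLOCK_N: tl.constexpr):
    # Baseline: manual pointer arithmetic plus explicit boundary masks.
    pid_m = tl.program_id(0)
    pid_n = tl.program_id(1)
    offs_m = pid_m * BLOCK_M + tl.arange(0, BLOCK_M)
    offs_n = pid_n * BLOCK_N + tl.arange(0, BLOCK_N)
    mask = (offs_m[:, None] < M) & (offs_n[None, :] < N)
    ptrs = offs_m[:, None] * stride_m + offs_n[None, :]
    tile = tl.load(src_ptr + ptrs, mask=mask)
    tl.store(dst_ptr + ptrs, tile, mask=mask)

@triton.jit
def copy_descriptor(src_ptr, dst_ptr, M, N, stride_m,
                    BLOCK_M: tl.constexpr, BLOCK_N: tl.constexpr):
    # Same copy via tensor descriptors: the descriptor carries shape and
    # strides, so out-of-bounds handling needs no explicit mask arithmetic.
    pid_m = tl.program_id(0)
    pid_n = tl.program_id(1)
    src = tl.make_tensor_descriptor(src_ptr, shape=[M, N],
                                    strides=[stride_m, 1],
                                    block_shape=[BLOCK_M, BLOCK_N])
    dst = tl.make_tensor_descriptor(dst_ptr, shape=[M, N],
                                    strides=[stride_m, 1],
                                    block_shape=[BLOCK_M, BLOCK_N])
    tile = src.load([pid_m * BLOCK_M, pid_n * BLOCK_N])
    dst.store([pid_m * BLOCK_M, pid_n * BLOCK_N], tile)
```

The two kernels should produce identical results; requirement 2 asks that the descriptor form also be at least as fast as the tl.load form.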

We expect this feature to land in PyTorch 2.12 or 2.13.
