PR #2791 removed the inliner pass from the version converter and broke the benchmark.
Benchmarking revealed some un-traced functions:
```python
@torch_op(("aten::split", "aten::split.Tensor"))
def aten_split(self: TTensor, split_size: INT64, dim: int = 0) -> TTensor:
    """split.Tensor(Tensor(a -> *) self, SymInt split_size, int dim=0) -> Tensor(a)[]"""

    return op.SplitToSequence(self, split_size, axis=dim)


def aten_split_copy(self: TensorType, split_size: INT64, dim: int = 0) -> TensorType:
    """split_copy.Tensor(Tensor self, SymInt split_size, int dim=0) -> Tensor[]"""

    raise NotImplementedError()


@torch_op("aten::split_with_sizes")
def aten_split_with_sizes(self: TTensor, split_sizes: INT64, dim: int = 0) -> TTensor:
    """split_with_sizes(Tensor(a -> *) self, SymInt[] split_sizes, int dim=0) -> Tensor(a)[]"""

    return op.SplitToSequence(self, split_sizes, axis=dim)
```
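To illustrate why this breaks without the inliner: a version converter rewrites the op nodes it can see in the graph, but an un-traced function stays packaged as a single call node, so the ops inside its body are never visited. The following is a minimal, self-contained sketch of that interaction; the `Node`/`Function` types, `inline`, and `convert_version` are hypothetical stand-ins, not the onnxscript implementation.

```python
from dataclasses import dataclass


@dataclass
class Node:
    op: str      # either a standard op ("SplitToSequence") or a function call
    opset: int


@dataclass
class Function:
    name: str
    body: list   # the nodes hidden inside the function


def inline(nodes, functions):
    """Expand function-call nodes into their body nodes (recursively)."""
    out = []
    for n in nodes:
        if n.op in functions:
            out.extend(inline(functions[n.op].body, functions))
        else:
            out.append(n)
    return out


def convert_version(nodes, target_opset):
    """Toy converter: upgrades only the nodes visible in the top-level graph."""
    return [Node(n.op, target_opset) for n in nodes]


functions = {"aten_split": Function("aten_split", [Node("SplitToSequence", 18)])}
model = [Node("aten_split", 18), Node("Relu", 18)]

# Without inlining, the SplitToSequence inside aten_split is never visited:
converted = convert_version(model, 21)

# Inlining first makes every op visible, so all of them get upgraded:
converted_inlined = convert_version(inline(model, functions), 21)
```

After `convert_version(model, 21)` the body of `aten_split` is still at the old opset, which is the kind of inconsistency the inliner pass papers over.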
Although I will add the inliner back into the version converter, should we trace all our functions in torchlib ahead of time? Or is there something blocking us from doing so?
cc @justinchuby @xadupre @gramalingam
(The snippet above is from onnxscript/onnxscript/function_libs/torch_lib/ops/core.py, lines 9071 to 9088 at e6f79e1.)