
Support batching variable size tensors using nested tensors #219

@mosheraboh

Description

Is your feature request related to a problem? Please describe.
Support batching variable-size tensors using nested tensors (https://pytorch.org/tutorials/prototype/nestedtensor.html) to avoid padding and improve running time.
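For reference, a minimal sketch of batching without padding using the prototype nested-tensor API (`torch.nested.nested_tensor`); the sample shapes are illustrative:

```python
import torch

# Two samples whose first dimension differs (e.g. variable sequence length).
a = torch.randn(3, 8)
b = torch.randn(5, 8)

# Prototype API: batch them into a single nested tensor without padding.
nt = torch.nested.nested_tensor([a, b])
print(nt.is_nested)  # True

# If a downstream op still needs a dense batch, padding can be applied lazily.
dense = torch.nested.to_padded_tensor(nt, 0.0)
print(dense.shape)  # torch.Size([2, 5, 8])
```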

Describe the solution you'd like
Add such an option to CollateDefault as an alternative to CollateDefault.pad_all_tensors_to_same_size.
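A possible shape for that option, as a hedged sketch: a per-key collate callable that returns a nested tensor instead of a padded one. The function name and the idea of plugging it in through a per-key handler mapping on CollateDefault are assumptions, not the existing API.

```python
from typing import List

import torch


def collate_to_nested_tensor(values: List[torch.Tensor]) -> torch.Tensor:
    """Hypothetical alternative to CollateDefault.pad_all_tensors_to_same_size:
    group variable-size tensors into a nested tensor, skipping padding entirely."""
    return torch.nested.nested_tensor(list(values))


# Illustrative usage only - assumes CollateDefault lets callers override the
# collate behavior per sample key (key name below is made up):
# CollateDefault(special_handlers_keys={"data.input.tokens": collate_to_nested_tensor})
```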

Describe alternatives you've considered
N/A

Additional context
N/A

Labels: enhancement (New feature or request)
