
Use temp allocator for kernel registry #13012


Merged

JakeStevens merged 1 commit into pytorch:main from JakeStevens:export-D79285675 on Aug 6, 2025

Conversation

@JakeStevens (Contributor) commented Jul 30, 2025

Summary: When indexing into the registry to get the op, memory is allocated from the method allocator to instantiate some TensorMeta objects and the objects they contain. This memory is only used for that purpose and is not needed for the entire lifetime of the Method, so we can instead use the temp allocator, which can later be reset to free up the memory as needed.

Differential Revision: D79285675

cc @larryliu0820 @JacobSzwejbka @lucylq
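
For readers unfamiliar with the two allocators involved, here is a minimal, self-contained C++ sketch of the idea in the summary above. The `BumpAllocator` and `TensorMeta` types below are simplified stand-ins invented for illustration, not ExecuTorch's actual `MemoryAllocator` or `TensorMeta` APIs; the point is only that metadata needed solely during kernel-registry lookup can come from a resettable temp arena rather than the arena that lives as long as the Method.

```cpp
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <new>
#include <vector>

// Hypothetical bump allocator standing in for an ExecuTorch-style
// memory allocator; names and behavior here are illustrative only.
class BumpAllocator {
 public:
  explicit BumpAllocator(std::size_t capacity) : buffer_(capacity), offset_(0) {}

  // Returns `size` bytes aligned to `alignment` (a power of two),
  // or nullptr if the arena is exhausted.
  void* allocate(std::size_t size,
                 std::size_t alignment = alignof(std::max_align_t)) {
    std::size_t aligned = (offset_ + alignment - 1) & ~(alignment - 1);
    if (aligned + size > buffer_.size()) {
      return nullptr;
    }
    offset_ = aligned + size;
    return buffer_.data() + aligned;
  }

  // Frees everything at once. Only safe once all objects handed out by
  // this arena are no longer in use -- exactly the property of the
  // short-lived metadata built during kernel lookup.
  void reset() { offset_ = 0; }

  std::size_t used() const { return offset_; }

 private:
  std::vector<std::uint8_t> buffer_;
  std::size_t offset_;
};

// Simplified stand-in for the per-argument metadata built while resolving
// an op in the kernel registry.
struct TensorMeta {
  std::int32_t scalar_type;
  std::int32_t dim;
};

int main() {
  BumpAllocator method_allocator(4096);  // lives as long as the Method
  BumpAllocator temp_allocator(4096);    // may be reset between uses

  // Before: the TensorMeta array came from the method allocator, so it
  // stayed allocated for the Method's whole lifetime even though it is
  // only consumed while looking up the kernel.
  // After: build it from the temp allocator instead.
  void* raw = temp_allocator.allocate(sizeof(TensorMeta) * 4);
  auto* metas = static_cast<TensorMeta*>(raw);
  for (int i = 0; i < 4; ++i) {
    new (&metas[i]) TensorMeta{/*scalar_type=*/6, /*dim=*/2};
  }
  // ... pass `metas` to the registry lookup here ...
  std::printf("temp bytes used during lookup: %zu\n", temp_allocator.used());

  // Once the kernel is resolved the metadata is dead, so the temp arena
  // can be reset and its memory reused for later temporary work.
  temp_allocator.reset();
  std::printf("temp bytes used after reset:   %zu\n", temp_allocator.used());
  std::printf("method allocator still holds:  %zu bytes\n",
              method_allocator.used());
  return 0;
}
```

Nothing returned from the lookup should retain a pointer into the temp arena; under that assumption (which is what the summary asserts), resetting it after the kernel is resolved cannot leave dangling references.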

pytorch-bot bot commented Jul 30, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/13012

Note: Links to docs will display an error until the docs builds have been completed.

❗ 1 Active SEV

There is 1 currently active SEV. If your PR is affected, please view it below:

❌ 2 New Failures, 1 Unrelated Failure

As of commit c831756 with merge base 90ff059:

NEW FAILURES - The following jobs have failed:

BROKEN TRUNK - The following job failed but was present on the merge base:

👉 Rebase onto the `viable/strict` branch to avoid these failures

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@meta-cla bot added the CLA Signed label (this label is managed by the Facebook bot; authors need to sign the CLA before a PR can be reviewed) on Jul 30, 2025
@facebook-github-bot (Contributor)

This pull request was exported from Phabricator. Differential Revision: D79285675


This PR needs a release notes: label

If your change should be included in the release notes (i.e. would users of this library care about this change?), please use a label starting with release notes:. This helps us keep track and include your important work in the next release notes.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "release notes: none"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

@JakeStevens added the module: runtime label (issues related to the core runtime and code under runtime/) on Jul 30, 2025
JakeStevens added commits to JakeStevens/executorch that referenced this pull request on Jul 30, 2025 (Differential Revision: D79285675)

@facebook-github-bot (Contributor)

This pull request was exported from Phabricator. Differential Revision: D79285675

@larryliu0820 (Contributor) left a comment

Thank you for doing this.


JakeStevens added commits to JakeStevens/executorch that referenced this pull request on Jul 31, 2025 (Reviewed By: larryliu0820; Differential Revision: D79285675)

@JakeStevens force-pushed the export-D79285675 branch 2 times, most recently from 23ddf27 to 8f73ee3, on August 1, 2025 at 01:05

JakeStevens added commits to JakeStevens/executorch that referenced this pull request on Aug 1, 2025 (Reviewed By: larryliu0820; Differential Revision: D79285675)

JakeStevens added a commit to JakeStevens/executorch that referenced this pull request on Aug 5, 2025 (Pull Request resolved: pytorch#13012; Reviewed By: larryliu0820; Differential Revision: D79285675)

@JakeStevens (Contributor, Author)

The failing tests are pre-existing: moshi fails on an undefined libtorchaudio symbol, and openvino on a backend import. Landing.

@JakeStevens merged commit 414fc32 into pytorch:main on Aug 6, 2025
101 of 105 checks passed
Labels
CLA Signed (this label is managed by the Facebook bot; authors need to sign the CLA before a PR can be reviewed) · fb-exported · module: runtime (issues related to the core runtime and code under runtime/)