Faster weight only quantized gemm #12862

Merged: 1 commit merged into pytorch:main on Jul 29, 2025

Conversation

@SS-JIA (Contributor) commented Jul 25, 2025

Summary:

Context

Provide an implementation of the `gemm` counterpart to the int4 weight-quantized `gemv` implementation added in D78275584 / #12444.

The new kernel is quite similar to the existing one; the primary difference is that it uses the same weight packing as the `gemv` implementation.
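For readers who want the reference semantics, here is a minimal NumPy sketch of weight-only int4 group-quantized matmul. This is an editor's illustration only: the group-wise scale/zero-point layout and all names below are assumed for clarity and are not the packing layout or code used by the Vulkan shader in this PR.

```python
# Minimal NumPy sketch of weight-only int4 group-quantized matmul.
# Editor's illustration only: shapes, layout, and parameter names are assumed,
# not taken from the ExecuTorch Vulkan kernel in this PR.
import numpy as np

def dequantize_int4(q, scales, zeros, group_size):
    # q: (K, N) int4 codes stored as int8 in [0, 15]
    # scales, zeros: (K // group_size, N) group-wise quantization parameters
    K, _ = q.shape
    g = np.repeat(np.arange(K // group_size), group_size)  # group index per row of K
    return (q.astype(np.float32) - zeros[g]) * scales[g]

def woq_mm(x, q, scales, zeros, group_size):
    # x: (M, K) activations. gemv is simply the M == 1 case; gemm handles M > 1,
    # e.g. prefill, where many tokens are processed at once.
    return x @ dequantize_int4(q, scales, zeros, group_size)

# Tiny usage example: M=4, K=64, N=32, group_size=16
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 64)).astype(np.float32)
q = rng.integers(0, 16, size=(64, 32)).astype(np.int8)
scales = rng.random((64 // 16, 32)).astype(np.float32) + 0.01
zeros = np.full((64 // 16, 32), 8.0, dtype=np.float32)
print(woq_mm(x, q, scales, zeros, 16).shape)  # (4, 32)
```

In the actual shader, dequantization is presumably fused into the dot-product loop and the int4 weights are read directly from the packed layout, rather than materializing a full dequantized weight matrix as done here.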

Next Steps

  • Reduce framework overhead from re-encoding command buffers between tokens. Achieve this by caching more artifacts that can be reused across command buffer encodings, and only re-encoding command buffers when necessary (a rough sketch of the caching idea follows this list).

  • Experiment with dynamic quantization, which should provide a speedup via the int8 dot product extension (see the second sketch below).
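A rough sketch of the first item, under assumed names (nothing below is an ExecuTorch or Vulkan API; it only illustrates keying cached encodings and re-encoding on a cache miss):

```python
# Hypothetical sketch of command-buffer reuse between tokens.
# CommandBufferCache and encode_fn are illustrative names, not ExecuTorch APIs.

class CommandBufferCache:
    def __init__(self, encode_fn):
        self._encode_fn = encode_fn  # encodes a command buffer for a given shape key
        self._cache = {}             # shape key -> previously encoded command buffer

    def get(self, shape_key):
        # Re-encode only when no cached encoding exists for these shapes;
        # otherwise reuse the artifact from a previous token.
        if shape_key not in self._cache:
            self._cache[shape_key] = self._encode_fn(shape_key)
        return self._cache[shape_key]
```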
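And a sketch of the second item, assuming symmetric per-row int8 activation quantization (an editor's illustration; the eventual design may differ): activations are quantized at runtime so the inner loop can use int8 dot products with int32 accumulation, and the result is rescaled afterwards.

```python
# Sketch of dynamic activation quantization feeding an int8 matmul.
# Assumes symmetric per-row quantization; not taken from this PR.
import numpy as np

def dynamic_quantize_per_row(x):
    # Compute per-row scales at runtime and quantize activations to int8.
    scale = np.abs(x).max(axis=1, keepdims=True) / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def int8_matmul(qx, x_scale, qw, w_scale):
    # Integer dot products accumulate in int32 (the part a hardware int8
    # dot-product extension accelerates); the result is rescaled to float.
    acc = qx.astype(np.int32) @ qw.astype(np.int32)
    return acc.astype(np.float32) * x_scale * w_scale

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 64)).astype(np.float32)
w = rng.standard_normal((64, 32)).astype(np.float32)
w_scale = np.abs(w).max() / 127.0
qw = np.clip(np.round(w / w_scale), -127, 127).astype(np.int8)
qx, x_scale = dynamic_quantize_per_row(x)
y = int8_matmul(qx, x_scale, qw, w_scale)  # approximates x @ w
```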

Differential Revision: D78994081

pytorch-bot bot commented Jul 25, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/12862

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit 581503e with merge base d5232a0:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot added the CLA Signed label (this label is managed by the Facebook bot; authors need to sign the CLA before a PR can be reviewed) on Jul 25, 2025
@facebook-github-bot (Contributor)

This pull request was exported from Phabricator. Differential Revision: D78994081

This PR needs a release notes: label

If your change should be included in the release notes (i.e. would users of this library care about this change?), please use a label starting with release notes:. This helps us keep track and include your important work in the next release notes.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "release notes: none"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

SS-JIA added a commit to SS-JIA/executorch-1 that referenced this pull request Jul 29, 2025
@SS-JIA force-pushed the export-D78994081 branch from ae26978 to 34391e9 on July 29, 2025 at 19:05
SS-JIA added a commit to SS-JIA/executorch-1 that referenced this pull request Jul 29, 2025
@SS-JIA force-pushed the export-D78994081 branch from 34391e9 to 310b20b on July 29, 2025 at 19:11
@facebook-github-bot (Contributor)

This pull request was exported from Phabricator. Differential Revision: D78994081

@SS-JIA force-pushed the export-D78994081 branch from 310b20b to 581503e on July 29, 2025 at 19:31
@facebook-github-bot (Contributor)

This pull request was exported from Phabricator. Differential Revision: D78994081

@facebook-github-bot merged commit 8b204c0 into pytorch:main on Jul 29, 2025
101 of 103 checks passed
Labels: CLA Signed, fb-exported

3 participants