[HuggingFace][Neuronx] Inference - Optimum Neuron 0.3.0 - Neuron sdk 2.24.0 - Transformers to 4.51.3 #5274
base: master
Conversation
The branch needs updating; it shows around 30 commits, so it is hard to tell which ones belong to this PR. Can you rebase?
huggingface/pytorch/inference/docker/2.7/py3/sdk2.24.0/Dockerfile.neuronx
Overall, some of the pip commands can be combined to reduce layers
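For instance, consecutive pip installs in a Dockerfile can be merged into a single RUN instruction so they produce one image layer instead of several. A minimal sketch (the package pins are taken from the versions listed in this PR's description; the exact package set in the Dockerfile may differ):

```dockerfile
# Before: each RUN creates its own layer
# RUN pip install --no-cache-dir transformers==4.51.3
# RUN pip install --no-cache-dir diffusers==0.35.1
# RUN pip install --no-cache-dir peft==0.17.0

# After: one RUN, one layer
RUN pip install --no-cache-dir \
    transformers==4.51.3 \
    diffusers==0.35.1 \
    peft==0.17.0
```

Fewer layers also means pip's dependency resolver runs once over the combined set, which can surface version conflicts earlier.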
Force-pushed from b59b3d3 to 80b2e0b.
Hi @ahsan-z-khan, the branch is rebased and I have addressed some of your comments. Thanks for the review. Do you have any idea about the SageMaker tests? I don't have access to the logs.
It's looking for similar allowlist files: https://github.com/aws/deep-learning-containers/tree/master/huggingface/pytorch/inference/docker/2.1/py3/sdk2.20.0
@JingyaHuang please let us know if you need more info on the allowlist files mentioned above. It looks like the PR is waiting on those before it can be merged.
Issue #5273
transformers: 4.51.3
torch: 2.7.1
diffusers: 0.35.1
peft: 0.17.0
Note:
If merging this PR should also close the associated Issue, please also add that Issue # to the Linked Issues section on the right.
All PRs are checked weekly for staleness. This PR will be closed if it is not updated within 30 days.
Description
Tests Run
By default, docker image builds and tests are disabled. Two ways to run builds and tests:
How to use the helper utility for updating dlc_developer_config.toml
Assuming your remote is called origin (you can find out more with git remote -v):
python src/prepare_dlc_dev_environment.py -b </path/to/buildspec.yml> -cp origin
python src/prepare_dlc_dev_environment.py -b </path/to/buildspec.yml> -t sanity_tests -cp origin
python src/prepare_dlc_dev_environment.py -rcp origin
NOTE: If you are creating a PR for a new framework version, please ensure success of the local, standard, rc, and efa sagemaker tests by updating the dlc_developer_config.toml file:
sagemaker_remote_tests = true
sagemaker_efa_tests = true
sagemaker_rc_tests = true
sagemaker_local_tests = true
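As a sketch, the flags above would sit in dlc_developer_config.toml roughly like this (the [test] section name is an assumption about the file's layout; check the actual file at the repo root before editing):

```toml
# dlc_developer_config.toml -- illustrative fragment, section name assumed
[test]
sagemaker_local_tests = true
sagemaker_remote_tests = true
sagemaker_efa_tests = true
sagemaker_rc_tests = true
```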
How to use PR description
Use the code block below to uncomment commands and run the PR CodeBuild jobs. There are two commands available:
# /buildspec <buildspec_path>
# /buildspec pytorch/training/buildspec.yml
# /tests <test_list>
# /tests sanity security ec2
Available test types: sanity, security, ec2, ecs, eks, sagemaker, sagemaker-local
Formatting
I have run black -l 100 on my code (formatting tool: https://black.readthedocs.io/en/stable/getting_started.html).
PR Checklist
Pytest Marker Checklist
I have added @pytest.mark.model("<model-type>") to the new tests which I have added, to specify the Deep Learning model that is used in the test (use "N/A" if the test doesn't use a model).
I have added @pytest.mark.integration("<feature-being-tested>") to the new tests which I have added, to specify the feature that will be tested.
I have added @pytest.mark.multinode(<integer-num-nodes>) to the new tests which I have added, to specify the number of nodes used on a multi-node test.
I have added @pytest.mark.processor(<"cpu"/"gpu"/"eia"/"neuron">) to the new tests which I have added, if a test is specifically applicable to only one processor type.
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license. I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.
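As a sketch of how those markers combine on a single test, here is a hypothetical example; the model name, feature string, and processor value are placeholders, not taken from this PR:

```python
import pytest

# Hypothetical DLC test illustrating the marker checklist above.
# "N/A", "neuronx_inference", and "neuron" are illustrative values.
@pytest.mark.model("N/A")
@pytest.mark.integration("neuronx_inference")
@pytest.mark.processor("neuron")
def test_container_starts():
    # A real DLC test would launch the container and probe its endpoint;
    # this stub exists only to show where the markers attach.
    assert True


# pytest records the applied marks on the function's `pytestmark` attribute,
# which is how `-m` expressions select tests at collection time.
mark_names = {m.name for m in test_container_starts.pytestmark}
```

Markers like these let CI select subsets of the suite, e.g. `pytest -m "processor('neuron')"`-style filtering configured in the test runner.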