
mlcr option similar to docker --network host? #527

@peroutkb

Description


I'm running into an issue: "fatal: unable to access 'https://github.com/mlcommons/mlperf-automations.git/': server certificate verification failed. CAfile: none CRLfile: none"

Git inside the containers hits a self-signed CA (likely a TLS-intercepting proxy on our network) that isn't in the container's trust store, so I'm unable to run the test.

Is there a variable I can pass?
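For reference, git itself honors environment variables for TLS trust; the CA bundle path below is only an example and would need to point at whatever certificate our proxy actually presents:

```shell
# Point git at a CA bundle that includes the intercepting certificate
# (the path here is an example, not a real location on my system)
export GIT_SSL_CAINFO=/etc/ssl/certs/corporate-ca.pem

# Or, for a quick test only, disable verification entirely (insecure)
export GIT_SSL_NO_VERIFY=1
```

Neither helps unless there is a way to inject them into the container that mlcr launches, which is really what I'm asking about.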

labuser@aipg-1-1:~$ sudo mlcr run-mlperf,inference,_find-performance,_full,_r5.1-dev \
  --model=3d-unet-99 \
  --implementation=nvidia \
  --framework=tensorrt \
  --category=datacenter \
  --scenario=Offline \
  --execution_mode=test \
  --device=cuda \
  --docker --quiet \
  --test_query_count=50 --rerun \
  --docker_cache=no \
  --env.MLC_DOCKER_USE_DEFAULT_USER=yes

[2025-07-21 21:19:51,600 module.py:576 INFO] - * mlcr run-mlperf,inference,_find-performance,_full,_r5.1-dev
[2025-07-21 21:19:51,613 module.py:576 INFO] - * mlcr get,mlcommons,inference,src
[2025-07-21 21:19:51,613 module.py:1285 INFO] - ! load /root/MLC/repos/local/cache/get-mlperf-inference-src_6cadf8c8/mlc-cached-state.json
[2025-07-21 21:19:51,621 module.py:576 INFO] - * mlcr get,mlperf,inference,results,dir,_version.r5.1-dev
[2025-07-21 21:19:51,622 module.py:1285 INFO] - ! load /root/MLC/repos/local/cache/get-mlperf-inference-results-dir_975925f5/mlc-cached-state.json
[2025-07-21 21:19:51,628 module.py:576 INFO] - * mlcr install,pip-package,for-mlc-python,_package.tabulate
[2025-07-21 21:19:51,628 module.py:1285 INFO] - ! load /root/MLC/repos/local/cache/install-pip-package-for-mlc-python_b1dbc1a0/mlc-cached-state.json
[2025-07-21 21:19:51,636 module.py:576 INFO] - * mlcr get,mlperf,inference,utils
[2025-07-21 21:19:51,663 module.py:576 INFO] - * mlcr get,mlperf,inference,src
[2025-07-21 21:19:51,664 module.py:1285 INFO] - ! load /root/MLC/repos/local/cache/get-mlperf-inference-src_6cadf8c8/mlc-cached-state.json
[2025-07-21 21:19:51,669 module.py:5340 INFO] - ! call "postprocess" from /root/MLC/repos/mlcommons@mlperf-automations/script/get-mlperf-inference-utils/customize.py
Using MLCommons Inference source from /root/MLC/repos/local/cache/get-git-repo_inference-src_e9e40194/inference
[2025-07-21 21:19:51,680 customize.py:273 INFO] -
Running loadgen scenario: Offline and mode: performance
[2025-07-21 21:19:51,835 module.py:576 INFO] - * mlcr detect,os
[2025-07-21 21:19:51,837 module.py:5194 INFO] - ! cd /home/labuser
[2025-07-21 21:19:51,837 module.py:5195 INFO] - ! call /root/MLC/repos/mlcommons@mlperf-automations/script/detect-os/run.sh from tmp-run.sh
[2025-07-21 21:19:51,881 module.py:5340 INFO] - ! call "postprocess" from /root/MLC/repos/mlcommons@mlperf-automations/script/detect-os/customize.py
[2025-07-21 21:19:51,921 module.py:576 INFO] - * mlcr build,dockerfile
[2025-07-21 21:19:51,935 module.py:576 INFO] - * mlcr get,docker
[2025-07-21 21:19:51,937 module.py:1285 INFO] - ! load /root/MLC/repos/local/cache/get-docker_c4f8c830/mlc-cached-state.json
[2025-07-21 21:19:52,142 customize.py:394 INFO] - mlc pull repo && mlcr --tags=app,mlperf,inference,generic,_nvidia,_3d-unet-99,_tensorrt,_cuda,_test,_r5.1-dev_default,_offline --quiet=true --env.MLC_DOCKER_USE_DEFAULT_USER=yes --env.MLC_QUIET=yes --env.MLC_MLPERF_IMPLEMENTATION=nvidia --env.MLC_MLPERF_MODEL=3d-unet-99 --env.MLC_MLPERF_DEVICE=cuda --env.MLC_MLPERF_LOADGEN_SCENARIO=Offline --env.MLC_MLPERF_RUN_STYLE=test --env.MLC_MLPERF_SKIP_SUBMISSION_GENERATION=False --env.MLC_DOCKER_PRIVILEGED_MODE=True --env.MLC_MLPERF_SUBMISSION_DIVISION=open --env.MLC_MLPERF_INFERENCE_TP_SIZE=1 --env.MLC_MLPERF_SUBMISSION_SYSTEM_TYPE=datacenter --env.MLC_MLPERF_USE_DOCKER=True --env.MLC_MLPERF_BACKEND=tensorrt --env.MLC_RERUN=True --env.MLC_TEST_QUERY_COUNT=50 --env.MLC_MLPERF_FIND_PERFORMANCE_MODE=yes --env.MLC_MLPERF_LOADGEN_ALL_MODES=no --env.MLC_MLPERF_LOADGEN_MODE=performance --env.MLC_MLPERF_RESULT_PUSH_TO_GITHUB=False --env.MLC_MLPERF_SUBMISSION_GENERATION_STYLE=full --env.MLC_MLPERF_INFERENCE_VERSION=5.1-dev --env.MLC_RUN_MLPERF_INFERENCE_APP_DEFAULTS=r5.1-dev_default --env.MLC_MLPERF_SUBMISSION_CHECKER_VERSION=v5.1 --env.MLC_MLPERF_INFERENCE_SOURCE_VERSION=5.0.25 --env.MLC_MLPERF_LAST_RELEASE=v5.1 --env.MLC_MLPERF_INFERENCE_RESULTS_VERSION=r5.1-dev --env.MLC_MODEL=3d-unet-99 --env.MLC_MLPERF_LOADGEN_COMPLIANCE=no --env.MLC_MLPERF_LOADGEN_EXTRA_OPTIONS= --env.MLC_MLPERF_LOADGEN_SCENARIOS,=Offline --env.MLC_MLPERF_LOADGEN_MODES,=performance --env.MLC_OUTPUT_FOLDER_NAME=test_results --add_deps_recursive.coco2014-original.tags=_full --add_deps_recursive.coco2014-preprocessed.tags=_full --add_deps_recursive.imagenet-original.tags=_full --add_deps_recursive.imagenet-preprocessed.tags=_full --add_deps_recursive.openimages-original.tags=_full --add_deps_recursive.openimages-preprocessed.tags=_full --add_deps_recursive.openorca-original.tags=_full --add_deps_recursive.openorca-preprocessed.tags=_full --add_deps_recursive.coco2014-dataset.tags=_full 
--add_deps_recursive.igbh-dataset.tags=_full --add_deps_recursive.get-mlperf-inference-results-dir.tags=_version.r5.1-dev --add_deps_recursive.get-mlperf-inference-submission-dir.tags=_version.r5.1-dev --add_deps_recursive.mlperf-inference-nvidia-scratch-space.tags=_version.r5.1-dev --print_env=False --print_deps=False --dump_version_info=True --quiet
[2025-07-21 21:19:52,142 customize.py:456 INFO] - Dockerfile written at /root/MLC/repos/mlcommons@mlperf-automations/script/app-mlperf-inference/dockerfiles/nvcr.io-nvidia-mlperf-mlperf-inference-mlpinf-v5.0-cuda12.8-pytorch25.01-ubuntu24.04-x86_64-release.Dockerfile
[2025-07-21 21:19:52,143 docker.py:216 INFO] - Dockerfile generated at /root/MLC/repos/mlcommons@mlperf-automations/script/app-mlperf-inference/dockerfiles/nvcr.io-nvidia-mlperf-mlperf-inference-mlpinf-v5.0-cuda12.8-pytorch25.01-ubuntu24.04-x86_64-release.Dockerfile
[2025-07-21 21:19:52,245 module.py:576 INFO] - * mlcr get,docker
[2025-07-21 21:19:52,245 module.py:1285 INFO] - ! load /root/MLC/repos/local/cache/get-docker_c4f8c830/mlc-cached-state.json
[2025-07-21 21:19:52,256 module.py:576 INFO] - * mlcr get,mlperf,inference,submission,dir,local,_version.r5.1-dev
[2025-07-21 21:19:52,257 module.py:1285 INFO] - ! load /root/MLC/repos/local/cache/get-mlperf-inference-submission-dir_2dc2a00a/mlc-cached-state.json
[2025-07-21 21:19:52,264 module.py:576 INFO] - * mlcr get,nvidia-docker
[2025-07-21 21:19:52,264 module.py:1285 INFO] - ! load /root/MLC/repos/local/cache/get-nvidia-docker_599faeb4/mlc-cached-state.json
[2025-07-21 21:19:52,274 module.py:576 INFO] - * mlcr get,mlperf,inference,nvidia,scratch,space,_version.r5.1-dev
[2025-07-21 21:19:52,275 module.py:1285 INFO] - ! load /root/MLC/repos/local/cache/get-mlperf-inference-nvidia-scratch-space_27c37b18/mlc-cached-state.json
[2025-07-21 21:19:52,284 module.py:576 INFO] - * mlcr get,nvidia-docker
[2025-07-21 21:19:52,285 module.py:1285 INFO] - ! load /root/MLC/repos/local/cache/get-nvidia-docker_599faeb4/mlc-cached-state.json
[2025-07-21 21:19:52,303 module.py:576 INFO] - * mlcr run,docker,container
[2025-07-21 21:19:52,314 module.py:576 INFO] - * mlcr get,docker
[2025-07-21 21:19:52,315 module.py:1285 INFO] - ! load /root/MLC/repos/local/cache/get-docker_c4f8c830/mlc-cached-state.json
[2025-07-21 21:19:52,561 customize.py:56 INFO] -
[2025-07-21 21:19:52,562 customize.py:57 INFO] - Checking existing Docker container:
[2025-07-21 21:19:52,562 customize.py:58 INFO] -
[2025-07-21 21:19:52,562 customize.py:66 INFO] - docker ps --format "{{ .ID }}," --filter "ancestor=localhost/local/mlperf-inference-nvidia-v5.0-common:nvcr.io-nvidia-mlperf-mlperf-inference-mlpinf-v5.0-cuda12.8-pytorch25.01-ubuntu24.04-x8664-release-latest" 2> /dev/null
[2025-07-21 21:19:52,562 customize.py:67 INFO] -
[2025-07-21 21:19:52,610 customize.py:94 INFO] - No existing container
[2025-07-21 21:19:52,610 customize.py:105 INFO] -
[2025-07-21 21:19:52,611 customize.py:106 INFO] - Checking Docker images:
[2025-07-21 21:19:52,611 customize.py:107 INFO] -
[2025-07-21 21:19:52,611 customize.py:108 INFO] - docker images -q localhost/local/mlperf-inference-nvidia-v5.0-common:nvcr.io-nvidia-mlperf-mlperf-inference-mlpinf-v5.0-cuda12.8-pytorch25.01-ubuntu24.04-x8664-release-latest 2> /dev/null
[2025-07-21 21:19:52,611 customize.py:109 INFO] -
[2025-07-21 21:19:52,657 customize.py:122 INFO] - Docker image exists with ID: 89ad9e63251a

[2025-07-21 21:19:52,673 module.py:576 INFO] - * mlcr get,docker
[2025-07-21 21:19:52,675 module.py:1285 INFO] - ! load /root/MLC/repos/local/cache/get-docker_c4f8c830/mlc-cached-state.json
[2025-07-21 21:19:52,677 module.py:5340 INFO] - ! call "postprocess" from /root/MLC/repos/mlcommons@mlperf-automations/script/run-docker-container/customize.py
[2025-07-21 21:19:52,685 customize.py:326 INFO] -
[2025-07-21 21:19:52,686 customize.py:327 INFO] - Container launch command:
[2025-07-21 21:19:52,686 customize.py:328 INFO] -
[2025-07-21 21:19:52,686 customize.py:329 INFO] - docker run -it --entrypoint '' --group-add $(id -g $USER) --privileged --gpus=all --shm-size=32gb --cap-add SYS_ADMIN --cap-add SYS_TIME --security-opt apparmor=unconfined --security-opt seccomp=unconfined --dns 8.8.8.8 --dns 8.8.4.4 -v /root/MLC/repos/local/cache/get-mlperf-inference-results-dir_975925f5:/home/ubuntu/MLC/repos/local/cache/get-mlperf-inference-results-dir_975925f5 -v /root/MLC/repos/local/cache/get-mlperf-inference-results-dir_975925f5:/home/ubuntu/MLC/repos/local/cache/get-mlperf-inference-results-dir_975925f5 -v /root/MLC/repos/local/cache/get-mlperf-inference-submission-dir_2dc2a00a:/home/ubuntu/MLC/repos/local/cache/get-mlperf-inference-submission-dir_2dc2a00a -v /root/MLC/repos/local/cache/get-mlperf-inference-nvidia-scratch-space_27c37b18:/home/ubuntu/MLC/repos/local/cache/get-mlperf-inference-nvidia-scratch-space_27c37b18 localhost/local/mlperf-inference-nvidia-v5.0-common:nvcr.io-nvidia-mlperf-mlperf-inference-mlpinf-v5.0-cuda12.8-pytorch25.01-ubuntu24.04-x8664-release-latest bash -c '(mlc pull repo && mlcr --tags=app,mlperf,inference,generic,_nvidia,_3d-unet-99,_tensorrt,_cuda,_test,_r5.1-dev_default,_offline --quiet=true --env.MLC_DOCKER_USE_DEFAULT_USER=yes --env.MLC_QUIET=yes --env.MLC_MLPERF_IMPLEMENTATION=nvidia --env.MLC_MLPERF_MODEL=3d-unet-99 --env.MLC_MLPERF_DEVICE=cuda --env.MLC_MLPERF_LOADGEN_SCENARIO=Offline --env.MLC_MLPERF_RUN_STYLE=test --env.MLC_MLPERF_SKIP_SUBMISSION_GENERATION=False --env.MLC_DOCKER_PRIVILEGED_MODE=True --env.MLC_MLPERF_SUBMISSION_DIVISION=open --env.MLC_MLPERF_INFERENCE_TP_SIZE=1 --env.MLC_MLPERF_SUBMISSION_SYSTEM_TYPE=datacenter --env.MLC_MLPERF_USE_DOCKER=True --env.MLC_MLPERF_BACKEND=tensorrt --env.MLC_RERUN=True --env.MLC_TEST_QUERY_COUNT=50 --env.MLC_MLPERF_FIND_PERFORMANCE_MODE=yes --env.MLC_MLPERF_LOADGEN_ALL_MODES=no --env.MLC_MLPERF_LOADGEN_MODE=performance --env.MLC_MLPERF_RESULT_PUSH_TO_GITHUB=False 
--env.MLC_MLPERF_SUBMISSION_GENERATION_STYLE=full --env.MLC_MLPERF_INFERENCE_VERSION=5.1-dev --env.MLC_RUN_MLPERF_INFERENCE_APP_DEFAULTS=r5.1-dev_default --env.MLC_MLPERF_SUBMISSION_CHECKER_VERSION=v5.1 --env.MLC_MLPERF_INFERENCE_SOURCE_VERSION=5.0.25 --env.MLC_MLPERF_LAST_RELEASE=v5.1 --env.MLC_MLPERF_INFERENCE_RESULTS_VERSION=r5.1-dev --env.MLC_TMP_CURRENT_PATH=/home/labuser --env.MLC_TMP_PIP_VERSION_STRING= --env.MLC_MODEL=3d-unet-99 --env.MLC_MLPERF_LOADGEN_COMPLIANCE=no --env.MLC_MLPERF_LOADGEN_EXTRA_OPTIONS= --env.MLC_MLPERF_LOADGEN_SCENARIOS,=Offline --env.MLC_MLPERF_LOADGEN_MODES,=performance --env.MLC_OUTPUT_FOLDER_NAME=test_results --add_deps_recursive.coco2014-original.tags=_full --add_deps_recursive.coco2014-preprocessed.tags=_full --add_deps_recursive.imagenet-original.tags=_full --add_deps_recursive.imagenet-preprocessed.tags=_full --add_deps_recursive.openimages-original.tags=_full --add_deps_recursive.openimages-preprocessed.tags=_full --add_deps_recursive.openorca-original.tags=_full --add_deps_recursive.openorca-preprocessed.tags=_full --add_deps_recursive.coco2014-dataset.tags=_full --add_deps_recursive.igbh-dataset.tags=_full --add_deps_recursive.get-mlperf-inference-results-dir.tags=_version.r5.1-dev --add_deps_recursive.get-mlperf-inference-submission-dir.tags=_version.r5.1-dev --add_deps_recursive.mlperf-inference-nvidia-scratch-space.tags=_version.r5.1-dev --print_env=False --print_deps=False --dump_version_info=True --env.MLC_MLPERF_INFERENCE_RESULTS_DIR=/home/ubuntu/MLC/repos/local/cache/get-mlperf-inference-results-dir_975925f5 --env.OUTPUT_BASE_DIR=/home/ubuntu/MLC/repos/local/cache/get-mlperf-inference-results-dir_975925f5 --env.MLC_MLPERF_INFERENCE_SUBMISSION_DIR=/home/ubuntu/MLC/repos/local/cache/get-mlperf-inference-submission-dir_2dc2a00a/mlperf-inference-submission --env.MLPERF_SCRATCH_PATH=/home/ubuntu/MLC/repos/local/cache/get-mlperf-inference-nvidia-scratch-space_27c37b18 && bash ) || bash'
[2025-07-21 21:19:52,686 customize.py:333 INFO] -
[2025-07-21 21:19:54,647 repo_action.py:297 INFO] - Repository mlperf-automations already exists at /root/MLC/repos/mlcommons@mlperf-automations. Checking for local changes...
[2025-07-21 21:19:54,660 repo_action.py:308 INFO] - No local changes detected. Pulling latest changes...
fatal: unable to access 'https://github.com/mlcommons/mlperf-automations.git/': server certificate verification failed. CAfile: none CRLfile: none
[2025-07-21 21:19:54,811 main.py:275 ERROR] - Git command failed: Command '['git', '-C', '/root/MLC/repos/mlcommons@mlperf-automations', 'pull']' returned non-zero exit status 1.
Traceback (most recent call last):
  File "/usr/local/bin/mlc", line 8, in <module>
    sys.exit(main())
             ^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/mlc/main.py", line 276, in main
    raise Exception(f"""An error occurred {res}""")
Exception: An error occurred {'return': 1, 'error': "Git command failed: Command '['git', '-C', '/root/MLC/repos/mlcommons@mlperf-automations', 'pull']' returned non-zero exit status 1."}
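If mlcr exposed a way to append raw docker run flags, something like the following sketch (the mount path assumes a Debian/Ubuntu-style trust store; the image name is a placeholder) would likely get past this, analogous to how --network host addresses network-related failures:

```shell
# Sketch only: reuse the host's trust store (and optionally its network)
# inside the container so git sees the same CAs as the host
docker run --network host \
  -v /etc/ssl/certs:/etc/ssl/certs:ro \
  <image> ...
```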

Metadata

Assignees: no one assigned
Labels: bug (Something isn't working)
Milestone: none