Hi Team,
I'm trying to use cgr.dev/chainguard/pytorch image with dataflow instead of original PyTorch build on Ubuntu.
Here is the original docker file:
```dockerfile
FROM pytorch/pytorch:2.6.0-cuda12.6-cudnn9-runtime
WORKDIR /pipeline
COPY requirements.txt .
COPY *.py ./
RUN pip install --no-cache-dir --upgrade pip \
 && pip install --no-cache-dir -r requirements.txt \
 && pip check
COPY --from=apache/beam_python3.11_sdk:2.61.0 /opt/apache/beam /opt/apache/beam
ENTRYPOINT ["/opt/apache/beam/boot"]
```
which I have updated to:
```dockerfile
FROM cgr.dev/chainguard/pytorch:latest-dev
CMD ["/bin/sh"]
USER root
RUN apk add python3~3.11 # py3-pip
WORKDIR /pipeline
COPY requirements.txt .
COPY *.py ./
RUN python -m ensurepip --upgrade
RUN python -m pip install -r requirements.txt
RUN python --version
COPY --from=apache/beam_python3.11_sdk:2.61.0 /opt/apache/beam /opt/apache/beam
ENTRYPOINT ["/opt/apache/beam/boot"]
```
In the first case `torch.cuda.is_available()` returns `True`, while it returns `False` with `cgr.dev/chainguard/pytorch:latest-dev`.
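To narrow down whether the problem is the image or the GPU runtime, here is a minimal stdlib-only diagnostic sketch I use inside the failing container (no torch import needed; the function name and the checked paths are my own choices, not anything Dataflow-specific). On Dataflow GPU workers, `libcuda.so`, the `/dev/nvidia*` device nodes, and `nvidia-smi` are injected by the container GPU runtime rather than installed by pip, so if these come back empty the driver was never mounted into the container and no PyTorch build will see the GPU:

```python
import ctypes.util
import os
import shutil


def gpu_diagnostics():
    """Collect basic signals about whether the NVIDIA driver is visible in this container."""
    return {
        # libcuda is provided by the host driver / container runtime, not by pip packages
        "libcuda": ctypes.util.find_library("cuda"),
        # device nodes such as /dev/nvidia0 are created when the driver is exposed
        "dev_nodes": sorted(
            d for d in os.listdir("/dev") if d.startswith("nvidia")
        ) if os.path.isdir("/dev") else [],
        # nvidia-smi on PATH means the driver utilities were mounted in
        "nvidia_smi": shutil.which("nvidia-smi"),
    }


print(gpu_diagnostics())
```

If this reports the driver as present but `torch.cuda.is_available()` is still `False`, the suspect shifts to the torch wheel itself (e.g. a CPU-only build in the image).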
I also tested with the same accelerator type on a plain Compute Engine VM:

```shell
gcloud compute instances create test-gpu1 \
  --zone=europe-west2-a \
  --image-family=pytorch-latest-gpu \
  --image-project=deeplearning-platform-release \
  --maintenance-policy=TERMINATE \
  --accelerator="type=nvidia-tesla-t4,count=1" \
  --metadata="install-nvidia-driver=True"
```
and in this case, too, torch is able to detect the GPU.
Any hints?
Thanks