What steps did you take and what happened:

1. Ran velero backup create <name> -l <my-selector>, including a mysql pod with ~200MB of data, using restic to perform the volume snapshot.
2. The backup completed successfully.
3. Deleted the mysql statefulset with kubectl.
4. Ran velero restore create --from-backup <name>.
5. The mysql pod is stuck indefinitely on the first init container (restic-wait). The restic-wait logs show it never finds the restore ID on the filesystem.

Inspecting the podvolumerestore resource shows that it is InProgress and has a StartTime; the Progress field is empty.

When I enable debug logging for restic on this node (it's a single-node cluster), I can see that this PVR is never enqueued by the restic controller and the restic command is never executed. The repro commands are sketched below.
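A minimal sketch of the repro, with placeholder names for the backup and label selector; the daemonset patch at the end is only an assumption about how to wire up --log-level=debug, and may differ by Velero version:

```
# Back up the mysql pod with restic volume snapshots
# (mysql-backup / app=mysql are placeholder values).
velero backup create mysql-backup -l app=mysql

# Simulate data loss, then restore.
kubectl delete statefulset mysql
velero restore create --from-backup mysql-backup

# Watch the PodVolumeRestore; it stays InProgress with no Progress reported.
kubectl get podvolumerestores -n velero -w

# Enable debug logging on the restic daemonset (assumption: appending
# --log-level=debug to the container args).
kubectl -n velero patch daemonset restic --type json \
  -p '[{"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--log-level=debug"}]'
```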
What did you expect to happen:
Volume restore completes successfully.
The output of the following commands will help us better understand what's going on:
(Pasting long output into a GitHub gist or other pastebin is fine.)

- kubectl logs deployment/velero -n velero
- velero backup describe <backupname> or kubectl get backup/<backupname> -n velero -o yaml
- velero backup logs <backupname>
- velero restore describe <restorename> or kubectl get restore/<restorename> -n velero -o yaml
- velero restore logs <restorename> returns:
  Logs for restore "qakots-kn98v" are not available until it's finished processing. Please wait until the restore has a phase of Completed or Failed and try again.
Anything else you would like to add:
I've been looking through the source, and I'm not sure how the PVR gets into InProgress without the restic command ever running; I don't see that command executed for this volume restore.
All PVs are bound; the reclaim policy is Delete.
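For reference, the check behind that claim (plain kubectl; the custom columns just pick out the relevant fields):

```
kubectl get pv -o custom-columns=NAME:.metadata.name,STATUS:.status.phase,RECLAIM:.spec.persistentVolumeReclaimPolicy
```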
k describe podvolumerestore -n velero qakots-kn98v-wjl56
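Abridged status from that describe, reconstructed from what I saw; the timestamp is a placeholder, but the phase and empty progress are as observed:

```
k describe podvolumerestore -n velero qakots-kn98v-wjl56
# Status:
#   Phase:            InProgress
#   Start Timestamp:  2020-06-01T12:00:00Z   <- placeholder time
#   Progress:         <empty>
```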
Environment:
- Velero version (use velero version):
- Velero features (use velero client config get features): features: <NOT SET>
- Kubernetes version (use kubectl version):
- Cloud provider or hardware configuration: GCP n1-standard-4 (4 CPU, 15GB RAM)
- OS (e.g. from /etc/os-release): Ubuntu 18 LTS
Vote on this issue!
This is an invitation to the Velero community to vote on issues; you can see the project's top-voted issues listed here.
Use the "reaction smiley face" up to the right of this comment to vote.
👍 for "I would like to see this bug fixed as soon as possible"
👎 for "There are more important bugs to focus on right now"
Hi @DanStough - Sorry you're having issues with this. Out of curiosity, did the restic pod restart at all? I ask because the start time of the PodVolumeRestore (PVR) seems to precede the timestamp of the first line in the restic pod logs. I don't think the current Velero and Velero restic servers handle items that are in progress after restart. It's possible that restic pod started to process the PVR but then the pod was restarted and now the PVR is "in progress" but is not actually being processed by the new pod.
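If it helps, one way to check for a restart (assuming a default install, where the restic daemonset pods carry the name=restic label):

```
# Restart count of the restic daemonset pod(s).
kubectl get pods -n velero -l name=restic \
  -o custom-columns=NAME:.metadata.name,RESTARTS:.status.containerStatuses[0].restartCount

# Pod start time, to compare against the PVR's StartTime.
kubectl get pods -n velero -l name=restic -o jsonpath='{.items[0].status.startTime}'
```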