
We are pleased to announce the release of KubeStash v2025.7.31, packed with major improvements. You can check out the full changelog HERE. In this post, we'll highlight the key changes in this release.
Introducing manifest-restore and manifest-view CLI Commands
Building on the success of kubedump-restore for selective, manifest-based resource restoration, we're excited to bring these powerful capabilities directly into the CLI with two new commands: manifest-restore and manifest-view.
Key Benefits
- Make restores more accessible, safer, and easier to manage.
- Give users finer control over which resources and namespaces are restored.
- Allow dry-run validation before applying changes to live clusters.
manifest-view: Inspect Snapshots Before Restoring
Now, you can explore the contents of a snapshot before initiating a restore. Use manifest-view to preview manifests and verify exactly what will be restored.
kubectl kubestash manifest-view \
--snapshot=<snapshot-name> \
--namespace="demo-a,demo-b" \
--include-cluster-resources=true \
--and-label-selectors="app" \
--exclude-resources="endpointslices.discovery.k8s.io,endpoints"
This command displays a tree view of all resource manifests from the snapshot that have the label key app and are in either the demo-a or demo-b namespaces (for namespace-scoped resources), while excluding the resource group endpointslices.discovery.k8s.io and the resource endpoints.
Example output:
┌─ .
└─
└─ kubestash-tmp
└─ manifest
├─ deployments.apps
│ └─ namespaces
│ └─ demo-a
│ └─ my-deployment-a.yaml
├─ controllerrevisions.apps
│ └─ namespaces
│ └─ demo-b
│ └─ my-statefulset-7bc9c486fc.yaml
├─ services
│ └─ namespaces
│ ├─ demo-a
│ │ └─ my-service-a.yaml
│ └─ demo-b
│ └─ my-service-b.yaml
...
├─ serviceaccounts
│ └─ namespaces
│ ├─ demo-a
│ │ └─ my-serviceaccount-a.yaml
│ └─ demo-b
│ └─ my-serviceaccount-b.yaml
└─ configmaps
└─ namespaces
├─ demo-a
│ └─ my-config-a.yaml
└─ demo-b
└─ my-config-b.yaml
# (output truncated)
manifest-restore: Restore Selected Resources to the Cluster
This command lets you apply specific resource manifests from a snapshot directly to your cluster, with a rich set of filtering options to customize the restore process.
kubectl kubestash manifest-restore \
--snapshot=<snapshot-name> \
--namespace=demo \
--exclude-resources="nodes.metrics.k8s.io,pods.metrics.k8s.io" \
--include-cluster-resources=true \
--and-label-selectors="app" \
--max-iterations=5
This command applies all resource manifests (YAML files) from the snapshot that have the label key app in the demo namespace (for namespace-scoped resources) to the cluster, while excluding the resource groups nodes.metrics.k8s.io and pods.metrics.k8s.io. The --max-iterations flag caps how many restore passes are attempted to resolve dependency ordering between resources.
Note: You can download manifests from a snapshot and apply them to the cluster manually by passing the extra --dry-run-dir flag to the manifest-restore command.
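As a rough sketch of that workflow (the snapshot name and local directory below are placeholders, not values from this release):

```shell
# Write the snapshot's manifests to a local directory instead of applying them
# (snapshot name and directory are illustrative placeholders)
kubectl kubestash manifest-restore \
  --snapshot=<snapshot-name> \
  --namespace=demo \
  --dry-run-dir=/tmp/kubestash-preview

# Review the downloaded YAML files, then apply them manually
kubectl apply --recursive -f /tmp/kubestash-preview
```

This gives you a chance to inspect or edit the manifests before anything touches the live cluster.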
Automatic Restic Unlocking — No More Manual Hassles
We’ve added a thoughtful little feature in this release that quietly takes care of something annoying: locked Restic repositories after a backup pod vanishes.
Sometimes, in a Kubernetes cluster, a node might crash, resources could become scarce, or the cluster autoscaler might decide to reschedule workloads.
In such cases, Kubernetes may terminate a backup pod while it's still running, even if the backup hasn't finished. This can leave the backup incomplete and, in the case of Restic, the repository locked, blocking future backups until someone manually unlocks it. Not ideal.
But now, KubeStash will automatically unlock the Restic repo if it detects that the BackupSession failed because the pod disappeared. No more manual commands. No more wondering why your next backup won't start.
What’s better now:
- Auto-Unlock Magic — If the Restic repo gets locked, KubeStash will notice and unlock it for you.
- Smoother Experience — Less manual cleanup, less friction. Backups just keep working.
- Less Downtime — No waiting around or debugging why your backups are stuck.
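For context, clearing a stale lock previously meant running restic by hand against the backend, roughly like this (the repository URL and credentials are illustrative placeholders, not values from this release):

```shell
# Point restic at the backend used by your backup storage
# (all values here are placeholders)
export RESTIC_REPOSITORY=s3:s3.amazonaws.com/<bucket>/<repo-path>
export RESTIC_PASSWORD=<repository-password>

# Remove stale locks left behind by the terminated backup pod
restic unlock
```

With this release, KubeStash performs the equivalent cleanup for you when it detects the failure.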
Improvements and Bug Fixes
- Automatic owner reference updates in kubedump to simplify dependency handling.
- Multiple-iteration restore process in kubedump to avoid resource creation being blocked due to dependencies.
- Support for group resources in filtering flags, alongside individual resources in kubedump.
  - Example: --include-resources="deployments,clusterroles.rbac.authorization.k8s.io" --exclude-resources="endpointslices,nodes.metrics.k8s.io,nodes"
- Updated the label-selectors flags to support filtering by label key alone.
  - Previous format: "key1:value1,key2:value2"
  - New format: "key1:value1,key2:value2,key3"
  - Example: --and-label-selectors="app:my-app,app:my-sts,app" --or-label-selectors="db:postgres,db"
What Next?
Please try the latest release and give us your valuable feedback.
- If you want to install KubeStash in a clean cluster, please follow the installation instructions from HERE.
- If you want to upgrade KubeStash from a previous version, please follow the upgrade instructions from HERE.
Support
To speak with us, please leave a message on our website.