kubevirt-datamover-controller deployment #2087

Open

Joeavaikath wants to merge 10 commits into openshift:oadp-dev from Joeavaikath:kdc-deployment

Conversation

@Joeavaikath
Contributor

Why the changes were made

Adds the controller, API, RBAC, and validation code for deploying the kubevirt-datamover-controller.
Builds on #2074.

How to test the changes made

Once the quay image for kubevirt-datamover-controller is up:

  1. Specify the kubevirt-datamover plugin in the default plugin list.

This should trigger the deployment of the kubevirt-datamover-controller.
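
A minimal sketch of that gating check (illustrative only; the helper name and the plain string slice are simplifications, not code from this PR):

// Deploy the kubevirt-datamover-controller only when the plugin appears in
// the DPA's spec.configuration.velero.defaultPlugins list.
func shouldDeployKubevirtDatamover(defaultPlugins []string) bool {
	for _, plugin := range defaultPlugins {
		if plugin == "kubevirt-datamover" {
			return true
		}
	}
	return false
}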

@openshift-merge-robot added the needs-rebase label (indicates a PR cannot be merged because it has merge conflicts with HEAD) on Feb 11, 2026
@coderabbitai
Contributor

coderabbitai bot commented Feb 11, 2026

Walkthrough

Adds KubeVirt DataMover controller: reconcile logic and deployment management, RBAC and CSV entries, readiness condition and checks, singleton validation and warnings, related image/env wiring, Makefile/tooling updates, extensive unit tests, and two tech-debt design documents.

Changes

Cohort / File(s) Summary
Build & Tooling
Makefile
Bumped GOLANGCI_LINT_VERSION v2.6.1→v2.9.0; added -a to go install invocation (GOBIN=$(dir $(1)) go install -a -mod=mod $(2)); extended BUNDLE_GEN_FLAGS to include extra service-account oadp-kubevirt-datamover-controller-manager.
Kustomize / Manager Config
config/manifests/kustomization.yaml, config/manager/manager.yaml
Added kubevirt-datamover-controller_rbac to manifests and injected RELATED_IMAGE_KUBEVIRT_DATAMOVER_CONTROLLER env var into manager container.
API Types
api/v1alpha1/dataprotectionapplication_types.go
Added readiness condition constant ConditionKubevirtDatamoverReady.
RBAC Manifests
bundle/manifests/oadp-kubevirt-datamover-metrics-reader_rbac.authorization.k8s.io_v1_clusterrole.yaml
New ClusterRole granting GET on nonResourceURL /metrics for datamover metrics reader.
CSV / Operator Bundle
bundle/manifests/oadp-operator.clusterserviceversion.yaml
Added RBAC permission blocks and serviceAccount wiring for oadp-kubevirt-datamover-controller-manager, added relatedImage entry and manager env wiring for kubevirt-datamover-controller.
Controller Integration
internal/controller/dataprotectionapplication_controller.go, internal/controller/kubevirt_datamover_controller.go
Integrated ReconcileKubevirtDatamoverController into reconcile flow; new reconciliation implementation to create/update/delete oadp-kubevirt-datamover-controller-manager Deployment, including image resolution, specs, labels/annotations, env, probes, security context, resources, and SA handling.
Readiness & Validation
internal/controller/readiness.go, internal/controller/validator.go
Added updateKubevirtDatamoverReadinessCondition and hook into overall readiness; added validation enforcing cluster-wide singleton for datamover and emitting warning when kubevirt plugin is missing.
Tests
internal/controller/kubevirt_datamover_controller_test.go, internal/controller/readiness_test.go, internal/controller/validator_test.go
Large test additions: reconciliation unit tests, deployment builder tests, readiness scenarios, and validation/warning tests. New test scaffolding, helpers, and many table-driven cases.
Docs / Tech Debt
docs/design/tech-debt/controller-state-management.md, docs/design/tech-debt/image-pull-policy-error-handling.md
Added two design notes documenting controller state-management and image-pull-policy error-handling technical debt and potential mitigations.

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

No actionable comments were generated in the recent review. 🎉


@openshift-ci bot requested review from mpryc and sseago on February 11, 2026 17:39
Contributor

@coderabbitai bot left a comment

Actionable comments posted: 2

Comment on lines +46 to +48
kdmDpaResourceVersion = ""
previousKubevirtDatamoverEnabled = false
)
Contributor

@coderabbitai bot commented Feb 11, 2026

⚠️ Potential issue | 🟠 Major

Race condition: Global state shared across concurrent reconciliations.

These package-level variables are shared across all reconciler invocations. Multiple DPA instances or concurrent reconcile loops will read/write these variables simultaneously without synchronization, leading to undefined behavior.

Consider storing this state on the reconciler struct or using a per-DPA state map with proper synchronization.
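
A minimal sketch of the per-DPA state map option (purely illustrative; the type, field, and method names are not from this PR):

// Assumes imports: "sync" and "k8s.io/apimachinery/pkg/types".
// Per-DPA reconcile state, keyed by the DPA's NamespacedName and guarded by a
// RWMutex, replacing the package-level kdmDpaResourceVersion and
// previousKubevirtDatamoverEnabled variables.
type kubevirtDatamoverState struct {
	dpaResourceVersion string
	previouslyEnabled  bool
}

type kubevirtDatamoverStateStore struct {
	mu    sync.RWMutex
	state map[types.NamespacedName]kubevirtDatamoverState
}

func (s *kubevirtDatamoverStateStore) get(key types.NamespacedName) (kubevirtDatamoverState, bool) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	st, ok := s.state[key]
	return st, ok
}

func (s *kubevirtDatamoverStateStore) set(key types.NamespacedName, st kubevirtDatamoverState) {
	s.mu.Lock()
	defer s.mu.Unlock()
	if s.state == nil {
		s.state = map[types.NamespacedName]kubevirtDatamoverState{}
	}
	s.state[key] = st
}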

🤖 Prompt for AI Agents
In `@internal/controller/kubevirt_datamover_controller.go` around lines 46 - 48,
The two package-level variables kdmDpaResourceVersion and
previousKubevirtDatamoverEnabled are global mutable state and cause race
conditions across concurrent reconciliations; move them into the reconciler's
instance state (e.g., fields on the KubevirtDatamoverReconciler struct) or
implement a per-DPA state map keyed by NamespacedName with synchronization
(sync.RWMutex) to guard reads/writes, then update all references in reconcile
logic to use the reconciler's fields or the synchronized map instead of the
package-level variables.

Contributor Author

Documented this tech debt; it happens in multiple other places.

Contributor

@Joeavaikath, thank you for documenting this tech debt. I understand this pattern exists in other parts of the codebase.

Would you like me to create a GitHub issue to track the refactoring of these global state variables across the codebase? This could help consolidate the effort to address this pattern in a future PR.

Comment on lines +96 to +99
imagePullPolicy, err := common.GetImagePullPolicy(r.dpa.Spec.ImagePullPolicy, kubevirtDatamoverControllerImage)
if err != nil {
r.Log.Error(err, "imagePullPolicy regex failed")
}
Contributor

⚠️ Potential issue | 🟡 Minor

Error from GetImagePullPolicy is logged but not returned.

If the regex validation fails, execution continues with a potentially invalid imagePullPolicy. Consider returning the error to surface configuration problems.

Proposed fix
 	imagePullPolicy, err := common.GetImagePullPolicy(r.dpa.Spec.ImagePullPolicy, kubevirtDatamoverControllerImage)
 	if err != nil {
-		r.Log.Error(err, "imagePullPolicy regex failed")
+		return err
 	}
🤖 Prompt for AI Agents
In `@internal/controller/kubevirt_datamover_controller.go` around lines 96 - 99,
The call to common.GetImagePullPolicy(...) logs the regex error but lets
execution continue with an invalid imagePullPolicy; update the error handling in
kubevirt_datamover_controller.go (around the GetImagePullPolicy call where
imagePullPolicy and err are set) to return the error instead of just logging
it—e.g., after calling GetImagePullPolicy with r.dpa.Spec.ImagePullPolicy and
kubevirtDatamoverControllerImage, propagate the error back (return it from the
reconcile function or wrap and return) so the controller does not proceed with
an invalid imagePullPolicy; remove or keep the log as desired but ensure the
function returns on error.

Joeavaikath and others added 7 commits February 11, 2026 14:09
Add ConditionKubevirtDatamoverReady constant to track the health
of the kubevirt-datamover-controller deployment when the
kubevirt-datamover plugin is enabled.

This condition will be set to:
- True when the controller is disabled or deployment is ready
- False when the deployment is not ready or not found

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Add ReconcileKubevirtDatamoverController to manage the deployment
of kubevirt-datamover-controller when the kubevirt-datamover plugin
is enabled in spec.configuration.velero.defaultPlugins.

Key features:
- Automatic deployment creation/deletion based on plugin presence
- 3-tier image resolution (UnsupportedOverrides → env var → default)
- Security hardened with non-root user, read-only filesystem
- DPA resource version tracking triggers pod restarts on config changes
- Health probes on :8081 (healthz, readyz)
- Metrics endpoint on :8443

Service Account: oadp-kubevirt-datamover-controller-manager
Pattern: Follows vmfilerestore_controller.go implementation

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Track kubevirt-datamover-controller deployment health in DPA status
conditions, enabling users to monitor the controller via:

  kubectl get dpa -o yaml

The KubevirtDatamoverReady condition reports:
- Status: True, Reason: ComponentDisabled - plugin not enabled
- Status: True, Reason: DeploymentReady - deployment is healthy
- Status: False, Reason: DeploymentNotReady - replicas not ready
- Status: False, Reason: ComponentNotFound - deployment missing

This condition is integrated into overall DPA readiness calculation,
so DPA won't report as Reconciled until all enabled components
(including kubevirt-datamover) are ready.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
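
A hedged sketch of setting such a condition with the apimachinery helpers; the condition type and reasons come from the commit message above, while the function and parameter names are illustrative:

// Assumes imports: "k8s.io/apimachinery/pkg/api/meta" and
// metav1 "k8s.io/apimachinery/pkg/apis/meta/v1".
func setKubevirtDatamoverCondition(conditions *[]metav1.Condition, ready bool, reason, message string) {
	status := metav1.ConditionFalse
	if ready {
		status = metav1.ConditionTrue
	}
	meta.SetStatusCondition(conditions, metav1.Condition{
		Type:    "KubevirtDatamoverReady",
		Status:  status,
		Reason:  reason,  // ComponentDisabled, DeploymentReady, DeploymentNotReady, ComponentNotFound
		Message: message, // e.g. "KubeVirt DataMover controller ready: 1/1 replicas"
	})
}
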
Add validation logic to prevent misconfigurations:

1. Cluster-wide singleton enforcement:
   Blocks DPA creation if another namespace already has
   kubevirt-datamover enabled. This prevents resource conflicts
   as the controller uses ClusterRole permissions and reconciles
   cluster-scoped DataUpload/DataDownload CRs.

   Error: "only a single instance of KubeVirt DataMover Controller
   can be installed across the entire cluster"

2. Soft dependency warning:
   Emits a warning event if kubevirt-datamover plugin is enabled
   without the kubevirt plugin. While not blocking, this guides
   users toward proper configuration:

   spec.configuration.velero.defaultPlugins:
   - kubevirt           # VM backup/restore
   - kubevirt-datamover # Incremental backup support

Pattern: Follows vmfilerestore_controller.go validation (lines 308-360)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
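
A rough sketch of the cluster-wide singleton check described above; the list call follows the usual controller-runtime pattern, and the helper hasKubevirtDatamoverPlugin is hypothetical:

// Assumes imports: "context", "fmt", "sigs.k8s.io/controller-runtime/pkg/client",
// and this operator's oadpv1alpha1 API package.
func validateKubevirtDatamoverSingleton(ctx context.Context, c client.Client, current *oadpv1alpha1.DataProtectionApplication) error {
	dpaList := &oadpv1alpha1.DataProtectionApplicationList{}
	if err := c.List(ctx, dpaList); err != nil { // no namespace option: list across the cluster
		return err
	}
	for _, dpa := range dpaList.Items {
		if dpa.Namespace == current.Namespace && dpa.Name == current.Name {
			continue // skip the DPA being validated
		}
		if hasKubevirtDatamoverPlugin(&dpa) { // hypothetical helper checking defaultPlugins
			return fmt.Errorf("only a single instance of KubeVirt DataMover Controller can be installed across the entire cluster")
		}
	}
	return nil
}
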
Add RELATED_IMAGE_KUBEVIRT_DATAMOVER_CONTROLLER environment variable
to enable image overrides for the kubevirt-datamover-controller.

Image resolution priority:
1. DPA spec.unsupportedOverrides[kubevirtDatamoverImageKey]
2. RELATED_IMAGE_KUBEVIRT_DATAMOVER_CONTROLLER env var
3. Default: quay.io/konveyor/kubevirt-datamover-controller:latest

This follows the same pattern as other controllers (velero, node-agent,
vm-file-restore, non-admin-controller).

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
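
A hedged sketch of that three-tier lookup; the override key is shown as a literal and the helper name is an assumption, not code from this PR:

// Assumes import: "os". The DPA's unsupportedOverrides map is simplified to map[string]string here.
func resolveKubevirtDatamoverImage(unsupportedOverrides map[string]string) string {
	if img := unsupportedOverrides["kubevirtDatamoverImageKey"]; img != "" {
		return img // 1. DPA spec.unsupportedOverrides
	}
	if img := os.Getenv("RELATED_IMAGE_KUBEVIRT_DATAMOVER_CONTROLLER"); img != "" {
		return img // 2. operator environment variable
	}
	return "quay.io/konveyor/kubevirt-datamover-controller:latest" // 3. built-in default
}
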
Add oadp-kubevirt-datamover-controller-manager to OLM bundle generation:

1. Makefile: Add to --extra-service-accounts for operator-sdk bundle
2. config/manifests/kustomization.yaml: Include RBAC resources

This ensures the OLM ClusterServiceVersion (CSV) includes:
- ServiceAccount: oadp-kubevirt-datamover-controller-manager
- ClusterRole: kubevirt-datamover-controller permissions
- ClusterRoleBinding: bind service account to cluster role

Required permissions (already defined in config/kubevirt-datamover-controller_rbac/):
- DataUploads (velero.io/v2alpha1) - reconcile incremental backups
- VirtualMachineBackups (backup.kubevirt.io) - coordinate with KubeVirt
- VirtualMachines (kubevirt.io) - read VM configurations
- PersistentVolumeClaims - manage data movement

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Regenerate OLM bundle after kubevirt-datamover changes:

  make generate
  make manifests
  make bundle

Changes:
- Add kubevirt-datamover-controller-manager ServiceAccount to CSV
- Include kubevirt-datamover RBAC resources
- Update deployment permissions and related resources

This commit contains only generated files and should not be manually edited.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
@openshift-merge-robot removed the needs-rebase label (indicates a PR cannot be merged because it has merge conflicts with HEAD) on Feb 11, 2026
@Joeavaikath changed the title from "Kdc deployment" to "kubevirt-datamover-controller deployment" on Feb 11, 2026
@Joeavaikath linked an issue on Feb 11, 2026 that may be closed by this pull request
Contributor

@coderabbitai bot left a comment

Actionable comments posted: 1

Comment on lines +129 to +130
func ensureKubevirtDatamoverRequiredLabels(deploymentObject *appsv1.Deployment) {
maps.Copy(kubevirtDatamoverDeploymentLabels, kubevirtDatamoverControlPlaneLabel)
Contributor

⚠️ Potential issue | 🟡 Minor

Mutating package-level map can cause race conditions.

maps.Copy(kubevirtDatamoverDeploymentLabels, kubevirtDatamoverControlPlaneLabel) modifies the global kubevirtDatamoverDeploymentLabels map on every call. This is both unnecessary (the control plane label is already static) and unsafe with concurrent reconciliations.

Proposed fix

Either merge the labels at initialization time, or create a local copy:

 func ensureKubevirtDatamoverRequiredLabels(deploymentObject *appsv1.Deployment) {
-	maps.Copy(kubevirtDatamoverDeploymentLabels, kubevirtDatamoverControlPlaneLabel)
+	// Create merged labels locally to avoid mutating package-level state
+	requiredLabels := make(map[string]string)
+	maps.Copy(requiredLabels, kubevirtDatamoverDeploymentLabels)
+	maps.Copy(requiredLabels, kubevirtDatamoverControlPlaneLabel)
 	deploymentObjectLabels := deploymentObject.GetLabels()
 	if deploymentObjectLabels == nil {
-		deploymentObject.SetLabels(kubevirtDatamoverDeploymentLabels)
+		deploymentObject.SetLabels(requiredLabels)
 	} else {
-		for key, value := range kubevirtDatamoverDeploymentLabels {
+		for key, value := range requiredLabels {
 			deploymentObjectLabels[key] = value
 		}
 		deploymentObject.SetLabels(deploymentObjectLabels)
 	}
 }

Or merge at package initialization:

 var (
 	kubevirtDatamoverControlPlaneLabel = map[string]string{
 		kubevirtDatamoverControlPlaneKey: kubevirtDatamoverControlPlaneValue,
 	}
-	kubevirtDatamoverDeploymentLabels = map[string]string{
+	kubevirtDatamoverDeploymentLabels = map[string]string{
 		"app.kubernetes.io/component":  "manager",
 		"app.kubernetes.io/created-by": common.OADPOperator,
 		"app.kubernetes.io/instance":   kubevirtDatamoverObjectName,
 		"app.kubernetes.io/managed-by": "kustomize",
 		"app.kubernetes.io/name":       "deployment",
 		"app.kubernetes.io/part-of":    common.OADPOperator,
+		kubevirtDatamoverControlPlaneKey: kubevirtDatamoverControlPlaneValue,
 	}
🤖 Prompt for AI Agents
In `@internal/controller/kubevirt_datamover_controller.go` around lines 129 - 130,
The function ensureKubevirtDatamoverRequiredLabels mutates the package-level map
kubevirtDatamoverDeploymentLabels via maps.Copy, which can race; stop modifying
the global map at runtime by either merging kubevirtDatamoverControlPlaneLabel
into kubevirtDatamoverDeploymentLabels once at package initialization, or change
ensureKubevirtDatamoverRequiredLabels to create a new local map (clone
kubevirtDatamoverDeploymentLabels into a fresh map) and copy
kubevirtDatamoverControlPlaneLabel into that local map before using it —
reference the function ensureKubevirtDatamoverRequiredLabels, the global maps
kubevirtDatamoverDeploymentLabels and kubevirtDatamoverControlPlaneLabel, and
the maps.Copy call when making the change.

@weshayutin
Contributor

weshayutin commented Feb 12, 2026

let's get an image up

pod/oadp-kubevirt-datamover-controller-manager-55f67bc4ff-4ftnl       0/1     ErrImagePull   0    	66

    - lastTransitionTime: '2026-02-12T19:35:25Z'
      message: 'KubeVirt DataMover controller not ready: 0/1 replicas ready'
      reason: DeploymentNotReady
      status: 'False'
      type: KubevirtDatamoverReady

@shubham-pampattiwar
Member

let's get an image up

pod/oadp-kubevirt-datamover-controller-manager-55f67bc4ff-4ftnl       0/1     ErrImagePull   0    	66
    - lastTransitionTime: '2026-02-12T19:35:25Z'
      message: 'KubeVirt DataMover controller not ready: 0/1 replicas ready'
      reason: DeploymentNotReady
      status: 'False'
      type: KubevirtDatamoverReady

The onboarding PR was merged openshift/release#73948

@shubham-pampattiwar
Member

Fix PR: openshift/release#74839

@weshayutin
Contributor

/retest

Member

@shubham-pampattiwar left a comment

Code looks good, follows established patterns. Missing unit tests though. Let's get test coverage in before merging.

@weshayutin
Contributor

tion", "controllerGroup": "oadp.openshift.io", "controllerKind": "DataProtectionApplication", "DataProtectionApplication": {"name":"dpa-sample","namespace":"openshift-adp"}, "namespace": "openshift-adp", "name": "dpa-sample", "reconcileID": "35e56dbc-293a-40fe-bafd-fa14784c267e"}
openshift-adp-controller-manager-579dfb456b-x7qmj manager 2026-02-13T18:57:14Z	ERROR	Reconciler error	{"controller": "dataprotectionapplication", "controllerGroup": "oadp.openshift.io", "controllerKind": "DataProtectionApplication", "DataProtectionApplication": {"name":"dpa-sample","namespace":"openshift-adp"}, "namespace": "openshift-adp", "name": "dpa-sample", "reconcileID": "35e56dbc-293a-40fe-bafd-fa14784c267e", "error": "Operation cannot be fulfilled on dataprotectionapplications.oadp.openshift.io \"dpa-sample\": the object has been modified; please apply your changes to the latest version and try again"}
openshift-adp-controller-manager-579dfb456b-x7qmj manager sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler
openshift-adp-controller-manager-579dfb456b-x7qmj manager 	/opt/app-root/src/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.21.0/pkg/internal/controller/controller.go:353
openshift-adp-controller-manager-579dfb456b-x7qmj manager sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem
openshift-adp-controller-manager-579dfb456b-x7qmj manager 	/opt/app-root/src/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.21.0/pkg/internal/controller/controller.go:300
openshift-adp-controller-manager-579dfb456b-x7qmj manager sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.1
openshift-adp-controller-manager-579dfb456b-x7qmj manager 	/opt/app-root/src/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.21.0/pkg/internal/controller/controller.go:202
+ velero-5f6fd67fbb-lrcf5 › kubevirt-velero-plugin
- velero-5f6fd67fbb-lrcf5 › kubevirt-velero-plugin
+ oadp-kubevirt-datamover-controller-manager-84d6d4bb44-8mcwn › manager
oadp-kubevirt-datamover-controller-manager-84d6d4bb44-8mcwn manager 2026-02-13T18:57:16Z	INFO	controller-runtime.metrics	Serving metrics server	{"bindAddress": ":8443", "secure": true}
+ velero-5f6fd67fbb-lrcf5 › openshift-velero-plugin
- velero-5f6fd67fbb-lrcf5 › openshift-velero-plugin
+ velero-5f6fd67fbb-lrcf5 › velero-plugin-for-aws
- velero-5f6fd67fbb-lrcf5 › velero-plugin-for-aws
+ velero-5f6fd67fbb-lrcf5 › hypershift-oadp-plugin
- velero-5f6fd67fbb-lrcf5 › hypershift-oadp-plugin
+ velero-5f6fd67fbb-lrcf5 › kubevirt-datamover-plugin
- velero-5f6fd67fbb-lrcf5 › kubevirt-datamover-plugin
velero-55b5b69b-vlzcp velero time="2026-02-13T18:57:21Z" level=debug msg="enqueueing resources ..." controller=backup-storage-location logSource="pkg/util/kube/periodical_enqueue_source.go:72" resource="*v1.BackupStorageLocationList"
velero-55b5b69b-vlzcp velero time="2026-02-13T18:57:21Z" level=debug msg="enqueueing resources ..." controller=restore-operations logSource="pkg/util/kube/periodical_enqueue_source.go:72" resource="*v1.RestoreList"
velero-55b5b69b-vlzcp velero time="2026-02-13T18:57:21Z" level=debug msg="enqueueing resources ..." controller=backup-operations logSource="pkg/util/kube/periodical_enqueue_source.go:72" resource="*v1.BackupList"
velero-55b5b69b-vlzcp velero time="2026-02-13T18:57:21Z" level=debug msg="no resources, skip" controller=restore-operations logSource="pkg/util/kube/periodical_enqueue_source.go:83" resource="*v1.RestoreList"
velero-55b5b69b-vlzcp velero time="2026-02-13T18:57:21Z" level=debug msg="no resources, skip" controller=backup-operations logSource="pkg/util/kube/periodical_enqueue_source.go:83" resource="*v1.BackupList"
velero-55b5b69b-vlzcp velero time="2026-02-13T18:57:21Z" level=debug msg="skip enqueue object openshift-adp/dpa-sample-1 due to the predicate." controller=backup-storage-location logSource="pkg/util/kube/periodical_enqueue_source.go:98" resource="*v1.BackupStorageLocationList"
^C

@weshayutin
Contributor

Still not working... let's get the image right, folks.
/hold

@openshift-ci bot added the do-not-merge/hold label (indicates that a PR should not merge because someone has issued a /hold command) on Feb 13, 2026

@weshayutin
Contributor

Ah... the image pull was just slow:

    - lastTransitionTime: '2026-02-13T18:57:44Z'
      message: 'KubeVirt DataMover controller ready: 1/1 replicas'
      reason: DeploymentReady
      status: 'True'
      type: KubevirtDatamoverReady
Warning: apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+
Warning: kubevirt.io/v1 VirtualMachineInstancePresets is now deprecated and will be removed in v2.
NAME                                                                  READY   STATUS      RESTARTS   AGE
pod/a790968d43b37616254999e0585a44c4e8621f550eba021b54b60bf8bcflgbw   0/1     Completed   0          6m22s
pod/node-agent-cdnf2                                                  1/1     Running     0          4m48s
pod/node-agent-n2v25                                                  1/1     Running     0          4m48s
pod/node-agent-n4dlt                                                  1/1     Running     0          4m48s
pod/oadp-kubevirt-datamover-controller-manager-84d6d4bb44-8mcwn       1/1     Running     0          4m28s
pod/oadp-operator-catalog-h72v2                                       1/1     Running     0          6m50s
pod/openshift-adp-controller-manager-579dfb456b-x7qmj                 1/1     Running     0          5m48s
pod/openshift-adp-oadp-cli-server-64c6569b7-kbk8d                     1/1     Running     0          5m44s
pod/velero-5f6fd67fbb-lrcf5                                           1/1     Running     0          4m28s

weshayutin previously approved these changes Feb 13, 2026
Contributor

@weshayutin left a comment

/LGTM

@openshift-ci bot added the lgtm label (indicates that a PR is ready to be merged) on Feb 13, 2026
@shubham-pampattiwar
Member

/woof

@openshift-ci

openshift-ci bot commented Feb 13, 2026

@shubham-pampattiwar: dog image

In response to this:

/woof


@weshayutin
Contributor

@kaovilai le bark, le review

Signed-off-by: Joseph <jvaikath@redhat.com>
@openshift-ci bot removed the lgtm label (indicates that a PR is ready to be merged) on Feb 13, 2026
@openshift-ci

openshift-ci bot commented Feb 13, 2026

New changes are detected. LGTM label has been removed.

Signed-off-by: Joseph <jvaikath@redhat.com>
@Joeavaikath
Contributor Author

The docs/design/tech-debt readmes just capture issues coderabbitai pointed out. We follow this pattern in a few other places, so I documented it in case we want to take a closer look later; feel free to ignore or delete them if it's not a big deal.

golangci-lint v2.6.1 pre-built binaries were compiled with Go 1.24, causing version mismatch errors when linting code targeting Go 1.25.0. Upgraded to v2.9.0 and added -a flag to force building from source with the current Go toolchain.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Signed-off-by: Joseph <jvaikath@redhat.com>
@openshift-ci

openshift-ci bot commented Feb 14, 2026

@Joeavaikath: all tests passed!

Full PR test history. Your PR dashboard.

@openshift-ci

openshift-ci bot commented Feb 14, 2026

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: Joeavaikath, weshayutin
Once this PR has been reviewed and has the lgtm label, please assign mpryc for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

Labels

do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command.

Projects

None yet

Development

Successfully merging this pull request may close these issues.

kubevirt-dm: DPA and Operator updates

4 participants