Thanks! Yes, the issue is that the installer crashes on the final step. The log says the following:
================ runtime-installer 1.1.187 ================
Runtime Version: 0.3.15
Playbooks loaded: 4
- Kubernetes
system_version: 0.3.15
- Infrastructure Services
istio_version: 1.22.0
local_static_provisioner_version: 2.0.0
metallb_version: 5.0.2
- System Services
elasticsearch_version: 22.0.3
fuseki_version: 4.10.1
gpu_operator_version: v25.3.0
grafana_version: 11.3.0
kafka_version: 26.11.4
kibana_version: 12.1.6
logstash_version: 6.0.0
node_exporter_version: 4.3.1
postgresql_version: 0.23.2
prometheus_version: 1.3.1
opentelemetry_collector_version: 0.129.0
- Application Platform
app_center_version: 0.0.30
system_settings_version: 1.0.15
helm_dashboard_version: 0.1.10
kubernetes_dashboard_version: 7.4.0
app_sandbox_version: 1.1.0
[2026-01-21T08:30:26.567Z] [INFO] Starting runtime-installer v1.1.187
[2026-01-21T08:30:27.362Z] [INFO] Cache cleared
[2026-01-21T08:31:00.436Z] [ERROR] Error during connection: {}
[2026-01-21T08:31:01.316Z] [ERROR] Error during connection: {}
[2026-01-21T08:32:05.717Z] [ERROR] Error during connection: {}
[2026-01-21T08:32:25.874Z] [ERROR] Error during connection: {}
[2026-01-21T08:32:26.993Z] [ERROR] Error during connection: {}
[2026-01-21T08:32:27.665Z] [ERROR] Error during connection: {}
[2026-01-21T08:32:28.282Z] [ERROR] Error during connection: {}
[2026-01-21T08:33:46.113Z] [INFO] User '**********' already exists; skipping useradd.
[2026-01-21T08:33:46.653Z] [INFO] Configuring passwordless for user '**********'
[2026-01-21T08:33:46.840Z] [INFO] Creating .ssh directory for user '**********'
[2026-01-21T08:33:47.019Z] [INFO] Setting password for user '**********'
[2026-01-21T08:33:47.566Z] [ERROR] Error during initialization: cluster_authentication_failed
[2026-01-21T08:33:53.881Z] [INFO] User '**********' already exists; skipping useradd.
[2026-01-21T08:33:54.414Z] [INFO] Configuring passwordless for user '**********'
[2026-01-21T08:33:54.588Z] [INFO] Creating .ssh directory for user '**********'
[2026-01-21T08:33:54.768Z] [INFO] Setting password for user '**********'
[2026-01-21T08:33:55.281Z] [ERROR] Error during initialization: cluster_authentication_failed
[2026-01-21T08:33:56.708Z] [INFO] User '**********' already exists; skipping useradd.
[2026-01-21T08:33:57.249Z] [INFO] Configuring passwordless for user '**********'
[2026-01-21T08:33:57.443Z] [INFO] Creating .ssh directory for user '**********'
[2026-01-21T08:33:57.616Z] [INFO] Setting password for user '**********'
[2026-01-21T08:33:58.152Z] [ERROR] Error during initialization: cluster_authentication_failed
[2026-01-21T08:34:30.133Z] [INFO] User '**********' already exists; skipping useradd.
[2026-01-21T08:34:30.653Z] [INFO] Configuring passwordless for user '**********'
[2026-01-21T08:34:30.828Z] [INFO] Creating .ssh directory for user '**********'
[2026-01-21T08:34:31.004Z] [INFO] Setting password for user '**********'
[2026-01-21T08:34:31.532Z] [ERROR] Error during initialization: cluster_authentication_failed
[2026-01-21T08:34:38.053Z] [INFO] User '**********' already exists; skipping useradd.
[2026-01-21T08:34:38.584Z] [INFO] Configuring passwordless for user '**********'
[2026-01-21T08:34:38.757Z] [INFO] Creating .ssh directory for user '**********'
[2026-01-21T08:34:38.931Z] [INFO] Setting password for user '**********'
[2026-01-21T08:34:46.806Z] [INFO] Inventory file created: C:\Users\BurnerPC\AppData\Local\Temp\aciw\playbook\infrastructure-services\inventory
[2026-01-21T08:35:55.791Z] [INFO] ------------------------------------
[2026-01-21T08:35:55.792Z] [INFO] Processing playbook CHECK: Kubernetes
[2026-01-21T08:35:55.808Z] [INFO] Inventory file created: C:\Users\BurnerPC\AppData\Local\Temp\aciw\playbook\kubernetes\inventory
[2026-01-21T08:36:12.485Z] [INFO] Processing playbook APPLY: Kubernetes
[2026-01-21T08:37:26.309Z] [ERROR] Playbook APPLY failed: 1 failed, 0 unreachable hosts.
[2026-01-21T08:37:26.347Z] [ERROR] [stderr] sh: 1: kubectl: not found
[2026-01-21T08:37:26.347Z] [ERROR] [error] command "kubectl get all -A --output json >> /var/log/milestone/aciw/kubernetes_snapshot.log" failed with code 127: sh: 1: kubectl: not found
[2026-01-21T08:37:26.347Z] [ERROR] Failed to capture Kubernetes snapshot. Most likely kubectl is not installed on the remote host. {}
[2026-01-21T08:37:26.347Z] [ERROR] Playbook Kubernetes apply failed.
[2026-01-21T08:37:26.348Z] [ERROR] Error during deployment: {}
This seems to indicate that kubectl is not installed on the server, but I can see that it is.
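My guess is that the installer runs its commands through a non-interactive sh, where /snap/bin may not be on PATH even though kubectl resolves fine in a login shell. A quick way to compare the two (user and host are placeholders for my setup):

# What an interactive shell on the server sees:
command -v kubectl

# What a non-interactive SSH command sees, which appears to be how the installer runs things:
ssh <user>@<host> 'command -v kubectl; echo $PATH'

Looking at the installation report that the installer links to, we can see the following in the Kubernetes log: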
TASK [microk8s : Creating helm plug-in directory if it does not already exist] ***
fatal: [REDACTED-IP]: FAILED! => {"changed": false, "msg": "There was an issue creating as requested: [Errno 2] No such file or directory: b''", "path": ""}
PLAY RECAP *********************************************************************
REDACTED-IP : ok=43 changed=6 unreachable=0 failed=1 skipped=33 rescued=0 ignored=0
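The empty "path": "" in that message makes me think the playbook's directory variable expanded to an empty string; an empty path fails with the same errno, e.g.:

mkdir ''
# mkdir: cannot create directory '': No such file or directory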
I initially managed to fix this issue, but then got:
TASK [microk8s : Applying updated CNI configuration] ***************************
fatal: [IP_ADDRESS]: FAILED! => {"changed": true, "cmd": "microk8s.kubectl apply -f /var/snap/microk8s/current/args/cni-network/cni.yaml", "delta": "0:00:06.949728", "end": "2026-01-16 15:46:12.503560", "msg": "non-zero return code", "rc": 1, "start": "2026-01-16 15:46:05.553832", "stderr": "Error from server (AlreadyExists): error when creating \"/var/snap/microk8s/current/args/cni-network/cni.yaml\": clusterroles.rbac.authorization.k8s.io
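If I read that right, the apply step is tripping over a ClusterRole left behind by an earlier run (the resource name is cut off in my log). I got past it by deleting the stale object and re-running the step, roughly like this (<name> stands in for the truncated ClusterRole name from the error):

microk8s.kubectl delete clusterrole <name>
microk8s.kubectl apply -f /var/snap/microk8s/current/args/cni-network/cni.yaml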
With that fixed, the next run failed on:
TASK [microk8s : Listing all current master nodes] *****************************
fatal: [IP-ADDRESS]: FAILED! => {"changed": false, "cmd": "kubectl get nodes --selector node-role.kubernetes.io/master=true --no-headers -o=custom-columns=NAME:.metadata.name", "msg": "[Errno 2] No such file or directory: b'kubectl'", "rc": 2, "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
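As far as I know, MicroK8s only ships kubectl as microk8s.kubectl; a bare kubectl only exists on PATH if the snap alias has been set up:

sudo snap alias microk8s.kubectl kubectl

So I suspect this task assumes an alias that was never created on my host.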
After I kept running into new issues, I gave up and decided to contact support.