Issue with installation of Milestone runtime for App Center on Ubuntu server

Hello!

We are developing an app/extension for Milestone XProtect App Center. I am trying to get set up according to the documentation: Download files

We are trying to set up the Milestone runtime for App Center on a server, so we provisioned an Ubuntu server on AWS and ran the installation wizard on a Windows computer. When trying to connect to the server, we get the error message “unknown_error”, even though the credentials and IP are correct.

When checking the application logs for the wizard, I get the error: “command ‘ip addr show | grep 3x.2x.1x.2x’ failed with exit code 1.”

The issue seems to be that the wizard first accesses the server through SSH and then tries to grep for the same IP address. But locally the server has a 172.x.x.x address because of NAT, which is why the grep command fails. Is there any known solution to this issue, or any other way to set up the Ubuntu server with the Milestone runtime?
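For illustration, this is what I believe the wizard's check boils down to, reconstructed from the log line above (the address below is a placeholder for our redacted public IP):

```shell
# Reconstruction of the wizard's check. Behind NAT, only the private
# 172.x.x.x address is bound to an interface, so grepping for the public
# address finds no match and grep exits with code 1.
PUBLIC_IP="203.0.113.7"   # placeholder public IP; not bound locally
ip addr show | grep "$PUBLIC_IP"
echo "exit code: $?"      # 1 whenever no interface carries that address
```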

Thanks.

Hi Rikard,

Thank you for reaching out!

If I understand the scenario correctly, you have a public IP in AWS (3x.2x.1x.2x), but the actual host machines are behind a NAT and only have private IPs (172.x.x.x). In practice that means we SSH via the NAT instance and the connection is then forwarded to the target host, so we’re not SSHing directly to the host machine.

Our current installer doesn’t support this setup today. As I see it, we have a couple of options:

  1. Script-based experience: We provide a set of scripts that you would SCP to the host machines, unpack, and run manually. There’s no UI, and it requires that you can SCP files to the host machines.

  2. Product fix: We’ve added this scenario to our backlog. As of today we do not have a delivery date for a fix in the installer.

  3. Create an Ubuntu VM on-prem and then install.

Let me know which option you’d prefer, and if you choose option 1, I can share the exact steps and required access/permissions.

Regards,
Assad

Hello Assad! Thank you for your response.

I set up an on-prem Ubuntu server but am facing other issues related to the installer. I am in contact with your support, so hopefully I can resolve it there. Thanks!

Hi Rikard,
Okay, please reach out if you have any other questions.

Regards,

Assad

Hello again.

Since I am facing some issues with the on-prem installation, is it possible for you to share the scripts that the installer runs so I can run them manually?

No problem, we’ll take care of that. Before we proceed, we’d like to understand what went wrong so we can correctly update the installer. Would you mind sending us some logs from the installer and/or a description of what is going wrong?

Thanks! Yes, the issue is that when I run the installer, it crashes on the final step. The log says the following:

================ runtime-installer 1.1.187 ================
Runtime Version: 0.3.15
Playbooks loaded: 4

  • Kubernetes
    system_version: 0.3.15
  • Infrastructure Services
    istio_version: 1.22.0
    local_static_provisioner_version: 2.0.0
    metallb_version: 5.0.2
  • System Services
    elasticsearch_version: 22.0.3
    fuseki_version: 4.10.1
    gpu_operator_version: v25.3.0
    grafana_version: 11.3.0
    kafka_version: 26.11.4
    kibana_version: 12.1.6
    logstash_version: 6.0.0
    node_exporter_version: 4.3.1
    postgresql_version: 0.23.2
    prometheus_version: 1.3.1
    opentelemetry_collector_version: 0.129.0
  • Application Platform
    app_center_version: 0.0.30
    system_settings_version: 1.0.15
    helm_dashboard_version: 0.1.10
    kubernetes_dashboard_version: 7.4.0
    app_sandbox_version: 1.1.0
    [2026-01-21T08:30:26.567Z] [INFO] Starting runtime-installer v1.1.187
    [2026-01-21T08:30:27.362Z] [INFO] Cache cleared
    [2026-01-21T08:31:00.436Z] [ERROR] Error during connection: {}
    [2026-01-21T08:31:01.316Z] [ERROR] Error during connection: {}
    [2026-01-21T08:32:05.717Z] [ERROR] Error during connection: {}
    [2026-01-21T08:32:25.874Z] [ERROR] Error during connection: {}
    [2026-01-21T08:32:26.993Z] [ERROR] Error during connection: {}
    [2026-01-21T08:32:27.665Z] [ERROR] Error during connection: {}
    [2026-01-21T08:32:28.282Z] [ERROR] Error during connection: {}
    [2026-01-21T08:33:46.113Z] [INFO] User ‘**********’ already exists; skipping useradd.
    [2026-01-21T08:33:46.653Z] [INFO] Configuring passwordless for user ‘**********’
    [2026-01-21T08:33:46.840Z] [INFO] Creating .ssh directory for user ‘**********’
    [2026-01-21T08:33:47.019Z] [INFO] Setting password for user ‘**********’
    [2026-01-21T08:33:47.566Z] [ERROR] Error during initialization: cluster_authentication_failed
    [2026-01-21T08:33:53.881Z] [INFO] User ‘**********’ already exists; skipping useradd.
    [2026-01-21T08:33:54.414Z] [INFO] Configuring passwordless for user ‘**********’
    [2026-01-21T08:33:54.588Z] [INFO] Creating .ssh directory for user ‘**********’
    [2026-01-21T08:33:54.768Z] [INFO] Setting password for user ‘**********’
    [2026-01-21T08:33:55.281Z] [ERROR] Error during initialization: cluster_authentication_failed
    [2026-01-21T08:33:56.708Z] [INFO] User ‘**********’ already exists; skipping useradd.
    [2026-01-21T08:33:57.249Z] [INFO] Configuring passwordless for user ‘**********’
    [2026-01-21T08:33:57.443Z] [INFO] Creating .ssh directory for user ‘**********’
    [2026-01-21T08:33:57.616Z] [INFO] Setting password for user ‘**********’
    [2026-01-21T08:33:58.152Z] [ERROR] Error during initialization: cluster_authentication_failed
    [2026-01-21T08:34:30.133Z] [INFO] User ‘**********’ already exists; skipping useradd.
    [2026-01-21T08:34:30.653Z] [INFO] Configuring passwordless for user ‘**********’
    [2026-01-21T08:34:30.828Z] [INFO] Creating .ssh directory for user ‘**********’
    [2026-01-21T08:34:31.004Z] [INFO] Setting password for user ‘**********’
    [2026-01-21T08:34:31.532Z] [ERROR] Error during initialization: cluster_authentication_failed
    [2026-01-21T08:34:38.053Z] [INFO] User ‘**********’ already exists; skipping useradd.
    [2026-01-21T08:34:38.584Z] [INFO] Configuring passwordless for user ‘**********’
    [2026-01-21T08:34:38.757Z] [INFO] Creating .ssh directory for user ‘**********’
    [2026-01-21T08:34:38.931Z] [INFO] Setting password for user ‘**********’
    [2026-01-21T08:34:46.806Z] [INFO] Inventory file created: C:\Users\BurnerPC\AppData\Local\Temp\aciw\playbook\infrastructure-services\inventory
    [2026-01-21T08:35:55.791Z] [INFO] ------------------------------------
    [2026-01-21T08:35:55.792Z] [INFO] Processing playbook CHECK: Kubernetes
    [2026-01-21T08:35:55.808Z] [INFO] Inventory file created: C:\Users\BurnerPC\AppData\Local\Temp\aciw\playbook\kubernetes\inventory
    [2026-01-21T08:36:12.485Z] [INFO] Processing playbook APPLY: Kubernetes
    [2026-01-21T08:37:26.309Z] [ERROR] Playbook APPLY failed: 1 failed, 0 unreachable hosts.
    [2026-01-21T08:37:26.347Z] [ERROR] [stderr] sh: 1: kubectl: not found
    [2026-01-21T08:37:26.347Z] [ERROR] [error] command "kubectl get all -A --output json >> /var/log/milestone/aciw/kubernetes_snapshot.log" failed with code 127: sh: 1: kubectl: not found
    [2026-01-21T08:37:26.347Z] [ERROR] Failed to capture Kubernetes snapshot. Most likely kubectl is not installed on the remote host. {}
    [2026-01-21T08:37:26.347Z] [ERROR] Playbook Kubernetes apply failed.
    [2026-01-21T08:37:26.348Z] [ERROR] Error during deployment: {}

This seems to indicate that kubectl is not installed on the server, but I can see that it is. Looking at the installation report that the installer links to, we can see the following in the Kubernetes log:

TASK [microk8s : Creating helm plug-in directory if it does not already exist] ***
fatal: [REDACTED-IP]: FAILED! => {"changed": false, "msg": "There was an issue creating as requested: [Errno 2] No such file or directory: b''", "path": ""}

PLAY RECAP *********************************************************************
REDACTED-IP : ok=43 changed=6 unreachable=0 failed=1 skipped=33 rescued=0 ignored=0

I initially managed to fix this issue, but then got:

TASK [microk8s : Applying updated CNI configuration] ***************************
fatal: [IP_ADDRESS]: FAILED! => {"changed": true, "cmd": "microk8s.kubectl apply -f /var/snap/microk8s/current/args/cni-network/cni.yaml", "delta": "0:00:06.949728", "end": "2026-01-16 15:46:12.503560", "msg": "non-zero return code", "rc": 1, "start": "2026-01-16 15:46:05.553832", "stderr": "Error from server (AlreadyExists): error when creating \"/var/snap/microk8s/current/args/cni-network/cni.yaml\": clusterroles.rbac.authorization.k8s.io

After managing to fix that, I got:

TASK [microk8s : Listing all current master nodes] *****************************
fatal: [IP-ADDRESS]: FAILED! => {"changed": false, "cmd": "kubectl get nodes --selector node-role.kubernetes.io/master=true --no-headers -o=custom-columns=NAME:.metadata.name", "msg": "[Errno 2] No such file or directory: b'kubectl'", "rc": 2, "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}

After repeatedly running into new issues, I gave up and decided to contact support.

I managed to get past it by installing kubectl. But now I get

TASK [microk8s : Creating kubectl alias] ***************************************
fatal: [IP]: FAILED! => {"changed": false, "cmd": ["snap", "alias", "microk8s.kubectl", "kubectl"], "delta": "0:00:00.147835", "end": "2026-01-21 12:01:03.595512", "msg": "non-zero return code", "rc": 1, "start": "2026-01-21 12:01:03.447677", "stderr": "error: cannot perform the following tasks:\n- Setup manual alias \"kubectl\" => \"kubectl\" for snap \"microk8s\" (cannot enable alias \"kubectl\" for \"microk8s\", it conflicts with the command namespace of installed snap \"kubectl\")", "stderr_lines": ["error: cannot perform the following tasks:", "- Setup manual alias \"kubectl\" => \"kubectl\" for snap \"microk8s\" (cannot enable alias \"kubectl\" for \"microk8s\", it conflicts with the command namespace of installed snap \"kubectl\")"], "stdout": "", "stdout_lines": []}

We have updated the installer. Please try the new installer.

Installer 1.1.328 can now be found at Download files

Hi again,
There is a new version of the installer available now; the version you have is the November release and a bit old.
Have you installed Microk8s or any other Kubernetes distro manually via snap?

We have seen a similar conflict appear if the user installs Kubernetes manually.

It’s possible that MicroK8s was installed during the Ubuntu Server installation (there’s an optional “MicroK8s” selection). If so, we should uninstall it:

1) Stop MicroK8s

sudo microk8s stop

2) Remove the snap (including snap-managed data)

sudo snap remove microk8s --purge

3) Verify it’s gone

sudo snap list | grep -i microk8s || echo "microk8s snap removed"

4) Remove leftover directories (optional, only if they still exist)

sudo rm -rf /var/snap/microk8s
sudo rm -rf /var/snap/microk8s-common
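
If the alias conflict from your earlier log comes back after reinstalling, the error text (“it conflicts with the command namespace of installed snap kubectl”) suggests a standalone kubectl snap is occupying the command name. A sketch of the workaround, assuming kubectl was installed via snap (adjust if it came from apt or a direct download):

```shell
# Assumption: the conflicting "kubectl" was installed as a standalone snap.
# Removing it frees the name so the playbook's "snap alias" step can succeed.
if command -v snap >/dev/null 2>&1 && snap list kubectl >/dev/null 2>&1; then
    sudo snap remove kubectl                   # frees the command namespace
    sudo snap alias microk8s.kubectl kubectl   # the alias the playbook creates
fi
```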

On our side, I will report it to the team so the installer is updated to handle this scenario as well.

Hello again!

I removed microk8s according to the commands you sent and tried the new installer, v1.1.328. Unfortunately, I am still getting this issue:

TASK [microk8s : Creating helm plug-in directory if it does not already exist] ***
fatal: [IP]: FAILED! => {"changed": false, "msg": "There was an issue creating as requested: [Errno 2] No such file or directory: b''", "path": ""}

This seems to come from the file “/home/k8sadmin/playbook/kubernetes/roles/microk8s/tasks/install_microk8s.yml”, at the bottom of the file:

- name: Install helm diff plug-in on designated master
  block:
    - name: Querying location of helm plug-in directory
      shell:
        cmd: >
          microk8s.helm3 env | grep '^HELM_PLUGINS=' | grep -o '".*"' | tr -d '"'
      register: helm_plugin_directory
      changed_when: false

    - name: Creating helm plug-in directory if it does not already exist
      file:
        path: "{{ helm_plugin_directory.stdout }}"
        state: directory
        recurse: yes

    - name: Downloading and installing helm diff tarball into plug-in directory
      unarchive:
        src: "{{ microk8s_helm_diff_tarball }}"
        dest: "{{ helm_plugin_directory.stdout }}"
        remote_src: true
  when:
    - not ansible_check_mode
    - inventory_hostname == designated_master_node
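
If I read that first task right, the failure can be reproduced outside Ansible: when microk8s.helm3 is unavailable (or prints no HELM_PLUGINS line), the registered stdout is empty, and the file task is then asked to create path "". Running the same pipeline by hand on the server shows this:

```shell
# Sketch of the failure mode: on a host where microk8s.helm3 is absent or
# prints no HELM_PLUGINS line, the query resolves to an empty string.
HELM_PLUGINS_DIR=$(microk8s.helm3 env 2>/dev/null | grep '^HELM_PLUGINS=' | grep -o '".*"' | tr -d '"')
echo "resolved plug-in dir: '${HELM_PLUGINS_DIR}'"   # empty when the query fails
mkdir -p "${HELM_PLUGINS_DIR}" || echo "mkdir on empty path fails, like the playbook task"
```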

OK, I haven’t encountered this error before. I’ll ask our PM to contact you and either schedule a call or continue supporting you via email. In the meantime, would it be possible for you to try running the installation on a clean Ubuntu server?

Hello!

That sounds good. I will look into getting a clean server and see if that works.

Thanks for the help so far!