PREPARED Migrations 23.4
Release 23.4 prepares GitOps style deployment for Vidinet enterprise systems. Based on the GitOps tool Flux, it will become possible to automate updates and installations based on changes to the source PREPARED inventory. While release 23.4 prepares the ground for GitOps managed enterprise systems, expect the next release 24.1 to contain more information on how to enable and use Flux and how to manage systems in a GitOps style.
To avoid immediately enforcing a change to GitOps deployment in all systems, the changes to the PREPARED scripts required for GitOps were made with backwards compatibility in mind. Systems can still be installed and managed using standalone PREPARED as before, and the change to GitOps can be made at a later stage.
There is a single switch that enables Flux GitOps for the system. It is set to off by default, so you actively have to opt in to run the system in GitOps mode.
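For reference, the switch in question is the flux.enabled flag in the global inventory configuration (shown in full context at the end of this article):

global:
  base:
    default:
      features:
        gitops:
          flux:
            enabled: false   # default; set to true to opt in to GitOps mode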
To prepare PREPARED for GitOps style management, all functionality of PREPARED had to be shifted into Helm charts, since Flux GitOps only manages Helm chart installations and updates. Hence custom deployment tasks, which were handled in the Ansible scripts of PREPARED, were either removed when outdated or changed to be executed in the context of Helm charts.
The following major changes were made to the PREPARED scripts to prepare a switch to Flux GitOps:
- marked all standard Helm charts with a new property depends_on which denotes which other Helm charts need to be installed prior to the Helm chart carrying the property. In summary, the depends_on settings form a dependency tree which is evaluated by Flux (see the sketch after this list).
- added a dependency_walk plugin to dynamically compute dependencies between installation steps based on the depends_on properties
- added flux roles which are executed when PREPARED runs in the cluster and the corresponding playbooks flux-get-inventory and flux-update-charts
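To illustrate what such a dependency translates to on the Flux side: a Flux HelmRelease expresses the relation through its dependsOn field. The following is only a minimal sketch with hypothetical release and repository names, not an excerpt from the generated resources:

apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: example-app            # hypothetical release name
  namespace: example-namespace
spec:
  interval: 10m
  chart:
    spec:
      chart: example-app       # hypothetical chart name
      sourceRef:
        kind: HelmRepository
        name: example-repo
  # Flux waits for these releases to be ready before reconciling this one,
  # mirroring what the depends_on property expresses in the PREPARED inventory.
  dependsOn:
    - name: ops-setup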
None of the above changes affect existing PREPARED script usage. The following topics, however, do and require a migration.
Removal of Kubernetes Objects deployed by PREPARED directly
The following objects were deployed directly from within the PREPARED scripts and out of Helm chart scope. The next chapters discuss how they were replaced.
Grafana Haproxy Dashboard
A Grafana Dashboard was deployed for monitoring HaProxy Ingress resources. It is unclear if this was in use at all (since no Grafana is deployed by default anymore), and if it is required it can be added manually to a system.
Prometheus Custom Resource Definitions for ServiceMonitor and PodMonitor
Since monitoring in Vidispine enterprise systems relies on Prometheus CustomResources of these types it is crucial that they are installed in every system.
→ The PREPARED deployment has thus been replaced by a new Helm chart based deployment of the CRDs. An existing hull-canvas based step in the environment role named ops-setup was replaced by a dedicated standalone Helm chart of the same name (ops-setup).
Since CRD deployment with Helm is best handled by putting the CRDs to create into a dedicated crds folder, it was required to create a standalone chart for this. A hull-canvas based step lacks the possibility to add files to the Helm chart at deploy time.
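For background, this follows the standard Helm convention that CRDs placed in a chart's crds directory are installed before the chart's templates are rendered. A rough sketch of such a standalone chart layout, with illustrative file names rather than the actual chart contents:

ops-setup/
  Chart.yaml
  crds/
    servicemonitors.yaml   # ServiceMonitor CustomResourceDefinition
    podmonitors.yaml       # PodMonitor CustomResourceDefinition
  templates/
    ...                    # regular chart resources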
See the last chapter on this page for the concrete migration measures to apply for the adoption of objects into the ops-setup chart!
ConfigMap storing HaProxy 404 page content
An unfinished leftover was the idea to replace the HaProxy ingress controller's 404 page with a new one indicating that the 404 occurred in the context of the HaProxy routing. The benefit of having a non-standard 404 page is that it signals more directly in which context the page was not found, helping to debug issues faster, especially when setting up new systems.
→ With 23.4 this task was completed and the haproxy-error-pages ConfigMap has also been adopted by the ops-setup chart.
This is what will be returned when hitting a 404 in the HaProxy managed context:

404 - Not Found
This page is not found in your Vidispine installation ... Sorry
You could try /API to reach the VidiCore API
ServiceAccount ops-setup-default and Registry Secret ops-setup-registry
Replacing the former hull-canvas chart for ops-setup with an updated standalone chart requires (re-)adopting a ServiceAccount (ops-setup-default) and a Registry Secret (ops-setup-registry). Previously these objects were created by the old chart but were not managed by the chart due to the usage of a Helm hook annotation on them.
Note that Helm hook resources are not managed within the Helm chart's context, as explained here: https://helm.sh/docs/topics/charts_hooks/#hook-resources-are-not-managed-with-corresponding-releases
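For illustration only: a resource carrying a Helm hook annotation such as the following is created by the release but not tracked as part of it, which is why it has to be adopted explicitly later on. The concrete hook type used in the old chart is an assumption here:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: ops-setup-default
  annotations:
    # marks the object as a hook resource, so Helm does not manage it
    # together with the release (hook type chosen for illustration)
    helm.sh/hook: pre-install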
MetalLB IPAddressPool CustomResources (CRs)
In on-premise systems metallb plays an important role as a software-based load balancer. It is integrated into PREPARED by having PREPARED create IPAddressPool CRs for the load balancer IPs to be used.
→ The ownership of existing CRs has been moved to the scope of the existing ops-ingress-setup chart.
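For context, such an IPAddressPool CR looks roughly like the following; the resource name matches the one adopted by the migration script further below, while the namespace and address range are placeholders:

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: metallb-environment-haproxy
  namespace: metallb-system        # assumption, depends on the MetalLB installation
spec:
  addresses:
    - 10.0.0.100-10.0.0.110        # placeholder load balancer IP range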
Storing SBOM artifacts in ConfigPortal
Initially licenses were rather small PDFs that were automatically posted to ConfigPortal on installation of the Helm charts.
Due to the growing content and size of the documents now called SBOM (Software Bill of Materials), produced in the build pipelines by MEND, it was required to shift storing of licenses to a dedicated place and to deploy them differently. The place of storage chosen was initially the Azure Blob Storage and by now is the central OCI registry cr.vidinet.net. PREPARED had dedicated tasks to load the SBOM artifacts from the external storage and post them to ConfigPortal.
Moving towards a purely Helm chart based deployment with Flux, the functionality to push the SBOMs has again been moved to the Helm charts, where it is made available in the hull-vidispine-addon.
The only accepted storage for the new mechanism implemented in hull-vidispine-addon is an OCI registry! The usage of the Azure Blob Storage is starting to become obsolete with release 23.4.
The license upload feature within hull-vidispine-addon requires version >= 1.28.6 of the library chart. It was originally planned to update all major Helm charts to this version; however, due to time constraints it was not possible to achieve this for all major charts.
The following charts were updated and are capable of Helm chart based license upload:
mediaportal
medialogger
VidiFlow:
vidiflow-core-services
vidiflow-clients
vidiflow-tools
vidiflow-essence-agents
vidiflow-vidicore-agents
vidiflow-media-agents
vidiflow-media-analysis
vidiflow-log
vidiflow-api
vidiflow-deletion-monitor
vidiflow-vantage
vidiflow-vidinet
To work around the problem in the meantime, it is possible to use a hull-canvas step to upload the license(s) in an extra deployment step until the Helm chart has been upgraded to include this feature.
For the vidieditor this workaround has been applied, please see the migration chapter at the end of this article.
For the licenses of the vidicore, vidicore-agent and authservice Helm charts, a new step named standard-licenses was added to the configportal role.
The standard-licenses hull-canvas chart deploys the licenses to ConfigPortal once ConfigPortal is installed and running, because all mentioned products are actually installed before ConfigPortal. Note that this is not a migration but actually new functionality or a fix, because previously the licenses of the above products could only be successfully installed on subsequent executions of PREPARED (once ConfigPortal had been started), never on the first run!
Migration Prerequisite
It is mandatory to change your system's Docker registry endpoint to an OCI compliant registry before migrating to 23.4; preferably use the central cr.vidinet.net OCI registry as described below!
Usage of Central OCI Repository cr.vidinet.net
In order to be able to prepare the Flux GitOps management approach in a customer system, it is required that the system uses the new centralized OCI repository as its single artifacts endpoint. This can be done with the following setting in the inventory's server.yaml:
virtual_hosts:
  - name: deployer
    purpose: deployment_server
    type: deployer
    create: false
    attributes:
      host: cr.vidinet.net
      docker_registry_port:
      docker_registry_user: XXX
      docker_registry_password: XXX
In some inventory configurations the respective values are set in the 00.vpms3.global.v2.yaml:
global:
  docker:
    registry:
      endpoint: "{{ lookup('deployer','docker_registry_endpoint') }}"
      user: "{{ lookup('deployer','docker_registry_user') }}"
      pass: "{{ lookup('deployer','docker_registry_password') }}"
By default the endpoint and credentials are sourced from the above mentioned server.yaml's values, so essentially it is mostly irrelevant where you configure them.
You are responsible for determining the docker_registry_user and docker_registry_password values since they will not be put into this document!
The recently set up cr.vidinet.net endpoint is the result of creating an OCI compliant repository for storing:
Docker Images
Helm Charts
SBOM licenses
other future artifacts (PREPARED layers, …)
When using the Azure ACR endpoint's DNS name directly, the routing is influenced by Azure CDN policies so that no unique IP is available for whitelisting in customer systems. To overcome this, the endpoint cr.vidinet.net is set up as a redundant proxy to the new managed Azure Container Registry (ACR).
On an infrastructure level, customer systems need to whitelist access to IP 40.113.102.127 for cr.vidinet.net so that it is reachable from within the Kubernetes cluster and from the machine used for PREPARED deployment!
If the inventory is set up to use the legacy vpms3.azurecr.io ACR Docker Registry, the setting needs to be changed to point to cr.vidinet.net!
This change affects all components running in the system since the Docker registry information is spread throughout the cluster in the form of Docker registry secrets.
Notice that manually changing all secrets or redeploying all Helm charts is required to reflect the Docker registry endpoint change!
If the inventory is already set up to use the legacy harbor.arvato-systems-media.net self-hosted Harbor Docker Registry, the setting can be kept as it is. The harbor.arvato-systems-media.net address is being routed to cr.vidinet.net on the DNS level so that it remains fully future compliant with the OCI storage.
Notice that this requires a point in time when DNS is switched over; depending on when you read this, it may or may not already have been redirected. After the DNS switch has been completed successfully, the old self-hosted Harbor registry will be defunct.
Still, it is recommended to move to cr.vidinet.net directly prior to installing release 23.4 and starting to use Flux, to avoid later system configuration changes!
Technical Migration
Besides the migration to an OCI based registry highlighted above, in summary the following migration steps need to be executed when installing Release 23.4:
- add ownership of the haproxy-error-pages ConfigMap to the ops-setup chart release
- add ownership of the ops-setup-default ServiceAccount to the ops-setup chart release
- add ownership of the ops-setup-registry Registry Secret to the ops-setup chart release
- add ownership of the MetalLB IPAddressPool to the ops-ingress-setup chart release
- add an extra vidieditor-license hull-canvas step to temporarily upload the VidiEditor license to CP until the functionality is part of the vidieditor Helm chart
To cater for more automation, the mentioned migration steps are now fully automated and don’t require manual execution of kubectl/helm commands or scripts.
To perform the migration you need to add the 23.4.1 migration layer artifact to your project's layers.yaml:
layers:
  migrations:
    origin: "oci://{{ hostvars['deployer']['docker_registry_user'] }}:{{ hostvars['deployer']['docker_registry_password'] }}@{{ hostvars['deployer']['host'] }}/prepared/layers/migrations:23.4.1"
The migration layer artifact contains two overlay files to execute the mentioned steps in the correct order:
01_vpms3.environment.v2.yaml:
#jinja2:variable_start_string:'[%' , variable_end_string:'%]'
################################################
# Environment
################################################
environment:
  ops-setup-migration:
    helm_chart:
      install: true
      chart_name: cluster-admin
      chart_subpath:
      chart_folder: ClusterAdmin
      chart_category: VPMS
      chart_version: "0.1.12"
      chart_namespace: "{{ vpms3.environment.base.default.ops_namespace }}"
      chart_instance_name: "ops-setup-migration"
      install_before: ops-setup
      chart_values:
        hull:
          config:
            general:
              globalImageRegistryToFirstRegistrySecretServer: true
              fullnameOverride: "ops-setup-migration"
              {% if not vpms3.global.base.default.features.gitops.flux.enabled %}
              metadata:
                annotations:
                  custom:
                    vidispine.prepared/version: "{{ prepared_version }}"
              {% endif %}
          objects:
            registry:
              registry:
                server: "{{ vpms3.global.docker.registry.endpoint }}"
                username: "{{ vpms3.global.docker.registry.user }}"
                password: "{{ vpms3.global.docker.registry.pass }}"
            configmap:
              cluster-admin:
                data:
                  cluster-admin.sh:
                    inline: |-
                      #!/bin/sh -e
                      adopt() {
                        type=$1
                        instance=$2
                        chart=$3
                        exists=`kubectl get $type $instance -n {{ vpms3.environment.base.default.ops_namespace }} -o json 2>&1`
                        if grep -q "Error from server (NotFound)" <<< "$exists"; then
                          echo "$type $instance does not exist, nothing to migrate ..."
                        else
                          if grep -q "the server doesn't have a resource type" <<< "$exists"; then
                            echo "$type not registered in API, nothing to migrate ..."
                          else
                            echo "$type $instance exists, add to Helm Chart ..."
                            adopt=`kubectl annotate $type $instance --overwrite -n {{ vpms3.environment.base.default.ops_namespace }} meta.helm.sh/release-name=$chart 2>&1`
                            echo "Add release-name:"
                            echo $adopt
                            adopt=`kubectl annotate $type $instance --overwrite -n {{ vpms3.environment.base.default.ops_namespace }} meta.helm.sh/release-namespace={{ vpms3.environment.base.default.ops_namespace }} 2>&1`
                            echo "Add release-namespace:"
                            echo $adopt
                            adopt=`kubectl label $type $instance --overwrite -n {{ vpms3.environment.base.default.ops_namespace }} app.kubernetes.io/managed-by=Helm 2>&1`
                            echo "Add managed-by:"
                            echo $adopt
                          fi
                        fi
                      }
                      adopt "serviceaccount" "ops-setup-default" "ops-setup"
                      adopt "secret" "ops-setup-registry" "ops-setup"
                      adopt "configmap" "haproxy-error-pages" "ops-setup"
                      adopt "IPAddressPool" "metallb-environment-haproxy" "ops-ingress-setup"
  ops-setup:
    helm_chart:
      chart_depends_on:
        - step: "ops-setup-migration"
          role: environment
40_vpms3.vidieditor.v2.yaml:
################################################
# VidiEditor
################################################
---
{% if 'vidieditor' in vpms3 %}
{% if 'chart_version' in vpms3.vidieditor.default.helm_chart %}
vidieditor:
  vidieditor-license:
    helm_chart:
      install: true
      chart_name: hull-canvas
      chart_subpath:
      chart_folder: HullCanvas
      chart_category: VPMS
      chart_version: "{{ vpms3.global.system.default_versions.hull_canvas }}"
      chart_namespace: "{{ vpms3.vidieditor.base.default.namespace }}"
      chart_instance_name: "vidieditor-license"
      chart_depends_on:
        - step: default
          role: vidieditor
      helm_config:
        atomic: false
        wait: false
      chart_values:
        hull:
          objects:
            registry:
              vpms3:
                server: "{{ vpms3.global.docker.registry.endpoint }}"
                username: "{{ vpms3.global.docker.registry.user }}"
                password: "{{ vpms3.global.docker.registry.pass }}"
          config:
            general:
              globalImageRegistryToFirstRegistrySecretServer: true
              fullnameOverride: "vidieditor-license"
              {% if not vpms3.global.base.default.features.gitops.flux.enabled %}
              metadata:
                annotations:
                  custom:
                    vidispine.prepared/version: "{{ prepared_version }}"
              {% endif %}
              data:
                endpoints: {{ vpms3.vidieditor.default.helm_chart.chart_values.hull.config.general.data.endpoints }}
                installation:
                  config:
                    proxy:
                      httpProxy: "{{ vpms3.global.system.proxy.http_proxy }}"
                      httpsProxy: "{{ vpms3.global.system.proxy.https_proxy }}"
                      noProxy: "{{ vpms3.global.system.proxy.no_proxy }}"
                    {% if vpms3.global.system.ssl.custom_ca_certs | length > 0 %}
                    customCaCertificates:
                      {% for key, value in vpms3.global.system.ssl.custom_ca_certs.items() %}
                      {{ key }}: "{{ (value | singleline) + '\n' }}"
                      {% endfor %}
                    {% else %}
                    customCaCertificates: {}
                    {% endif %}
                  endpoints:
                    30_configportal:
                      subresources:
                        60_productcomponents:
                          entities:
                            license:
                              processNoOp: true
                  _helm_charts_:
                    - name: vidieditor
                      version: "{{ vpms3.vidieditor.default.helm_chart.chart_version }}"
{% endif %}
{% endif %}
After having added the layer to your layer configuration, the required steps will be performed on the next execution of PREPARED on the environment (tag env) and vidieditor (tag ve) roles.
By default, Flux will not be activated in your system. If you want to use Flux, you need to apply the following changes:
00_vpms3.global.v2.yaml:
################################################
# Global
################################################
---
global:
  base:
    default:
      features:
        gitops:
          flux:
            enabled: true
            url: "https://dev.azure.com/arvato-systems-dmm/PREPARED/_git/project-test"
            branch: master
            auth:
              username: 'git'
              password: [Your AzureDevops PAT needs to go in here]
            role_tags: []