Collabora Online for Kubernetes
In order for Collaborative Editing to function correctly on Kubernetes, it is vital to ensure that all users editing the same document end up being served by the same pod. With the WOPI protocol, the HTTPS URL includes a unique identifier (WOPISrc) for the document being edited. Load balancing can therefore be done on WOPISrc, ensuring that all requests containing the same WOPISrc are sent to the same pod.
Helm chart for deploying Collabora Online in a Kubernetes cluster
How to test this specific setup:
- Install a Kubernetes cluster locally - minikube - https://minikube.sigs.k8s.io/docs/
- Install helm - https://helm.sh/docs/intro/install/
- Install the HAProxy Kubernetes Ingress Controller - https://www.haproxy.com/documentation/kubernetes/latest/installation/community/kubernetes/
- Create a `my_values.yaml` for your minikube setup (if your setup differs, take a look at the `values.yaml` of the helm chart, e.g. for annotations using the NGINX Ingress Controller or for more complex setups, see the Notes section below).

Here is an example `my_values.yaml`:

```yaml
replicaCount: 3

ingress:
  enabled: true
  annotations:
    haproxy.org/timeout-tunnel: "3600s"
    haproxy.org/backend-config-snippet: |
      mode http
      balance leastconn
      stick-table type string len 2048 size 1k store conn_cur
      http-request set-var(txn.wopisrcconns) url_param(WOPISrc),table_conn_cur()
      http-request track-sc1 url_param(WOPISrc)
      stick match url_param(WOPISrc) if { var(txn.wopisrcconns) -m int gt 0 }
      stick store-request url_param(WOPISrc)
  hosts:
    - host: chart-example.local
      paths:
        - path: /
          pathType: ImplementationSpecific

image:
  tag: "latest"
  pullPolicy: Always
```
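If you use the NGINX Ingress Controller instead of HAProxy, the same WOPISrc affinity can be approximated by hashing upstream selection on the WOPISrc query parameter. Below is a minimal sketch of the corresponding annotations, assuming the ingress-nginx controller; compare the annotation examples in the chart's `values.yaml` for the authoritative list:

```yaml
ingress:
  enabled: true
  annotations:
    # route requests with the same WOPISrc to the same pod (consistent hashing)
    nginx.ingress.kubernetes.io/upstream-hash-by: "$arg_WOPISrc"
    # allow large uploads and long-lived websocket connections
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
```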
Important notes:
- If you have multiple hosts and aliases set up, set aliasgroups in `my_values.yaml`:

```yaml
collabora:
  aliasgroups:
    - host: "<protocol>://<host-name>:<port>"
      aliases: ["<protocol>://<its-first-alias>:<port>, <protocol>://<its-second-alias>:<port>"]
```
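For example, a hypothetical group for one Nextcloud that is also reachable under a second name could look like this (hostnames are placeholders):

```yaml
collabora:
  aliasgroups:
    - host: "https://nextcloud.example.com:443"
      aliases: ["https://cloud.example.com:443"]
```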
- Specify `server_name` when the hostname is not reachable directly, for example behind a reverse proxy:

```yaml
collabora:
  server_name: <hostname>:<port>
```
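For instance, if coolwsd is published behind a reverse proxy under the hypothetical name office.example.com:

```yaml
collabora:
  server_name: office.example.com:443
```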
- Install the helm chart using the commands below (with a new namespace collabora):

```bash
helm repo add collabora https://collaboraonline.github.io/online/
helm install --create-namespace --namespace collabora collabora-online collabora/collabora-online -f my_values.yaml
```
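Later changes to `my_values.yaml` can be rolled out with an upgrade instead of a reinstall; this is not part of the original walkthrough, but the standard helm workflow:

```bash
helm repo update
helm upgrade --namespace collabora collabora-online collabora/collabora-online -f my_values.yaml
```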
- Finally, spin up collabora-online in Kubernetes:

A. The HAProxy service is deployed as a NodePort, so we can access it via the node's IP address. To get the node IP:

```bash
minikube ip
```

Example output:

```
192.168.0.106
```
B. Each container port is mapped to a NodePort port via the Service object. To find those ports:

```bash
kubectl get svc --namespace=haproxy-controller
```
Example output:
```
|----------------|---------|--------------|------------|------------------------------------------|
|NAME            |TYPE     |CLUSTER-IP    |EXTERNAL-IP |PORT(S)                                   |
|----------------|---------|--------------|------------|------------------------------------------|
|haproxy-ingress |NodePort |10.108.214.98 |<none>      |80:30536/TCP,443:31821/TCP,1024:30480/TCP |
|----------------|---------|--------------|------------|------------------------------------------|
```
In this instance, the following ports were mapped:
- Container port 80 to NodePort 30536
- Container port 443 to NodePort 31821
- Container port 1024 to NodePort 30480
C. Now, to make our hostname resolvable, we have to add the following line to /etc/hosts:

```
192.168.0.106 chart-example.local
```
- To check if everything is set up correctly, you can run:

```bash
curl -I -H 'Host: chart-example.local' 'http://192.168.0.106:30536/'
```

It should return output similar to the following:

```
HTTP/1.1 200 OK
last-modified: Tue, 18 May 2021 10:46:29
user-agent: COOLWSD WOPI Agent 6.4.8
content-length: 2
content-type: text/plain
```
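Beyond this plain status check, coolwsd also serves the WOPI discovery XML that WOPI hosts use to find the editing endpoints; querying it is another quick sanity check (same node IP and NodePort assumptions as above):

```bash
curl -H 'Host: chart-example.local' 'http://192.168.0.106:30536/hosting/discovery'
```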
Some useful commands to check what is happening:
- Where are the pods and are they ready?

```bash
kubectl -n collabora get pod
```

Example output:

```
NAME                                READY   STATUS    RESTARTS   AGE
collabora-online-5fb4869564-dnzmk   1/1     Running   0          28h
collabora-online-5fb4869564-fb4cf   1/1     Running   0          28h
collabora-online-5fb4869564-wbrv2   1/1     Running   0          28h
```
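To look at the logs of all coolwsd pods at once, a label selector can be used; the label below is an assumption based on the chart's default naming, so adjust it to whatever `kubectl -n collabora get pod --show-labels` reports:

```bash
kubectl -n collabora logs -l app.kubernetes.io/name=collabora-online --tail=100
```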
- Which outside host are the multiple coolwsd servers actually answering?

```bash
kubectl get ingress -n collabora
```

Example output:

```
|-----------|------------------|---------------------|------------------------|-------|
| NAMESPACE | NAME             | HOSTS               | ADDRESS                | PORTS |
|-----------|------------------|---------------------|------------------------|-------|
| collabora | collabora-online | chart-example.local |                        | 80    |
|-----------|------------------|---------------------|------------------------|-------|
```
- To uninstall the helm chart:

```bash
helm uninstall --namespace collabora collabora-online
```
Notes:
- For big setups, you may NOT want to restart every pod just to modify the WOPI hosts. Therefore it is possible to set up an additional webserver that serves a ConfigMap for Remote/Dynamic Configuration:

```yaml
collabora:
  env:
    - name: remoteconfigurl
      value: https://dynconfig.public.example.com/config/config.json

dynamicConfig:
  enabled: true

  ingress:
    enabled: true
    annotations:
      "cert-manager.io/issuer": letsencrypt-zprod
    hosts:
      - host: "dynconfig.public.example.com"
    tls:
      - secretName: "collabora-online-dynconfig-tls"
        hosts:
          - "dynconfig.public.example.com"

  configuration:
    kind: "configuration"
    storage:
      wopi:
        alias_groups:
          groups:
            - host: "https://nextcloud\\.public\\.example\\.com/"
              allow: true
            - host: "https://moodle\\.public\\.example\\.com/"
              allow: true
              aliases:
                - "https://moodle3\\.public\\.example2\\.de/"
```
PS: In its current state, Collabora requires HTTPS for Remote/Dynamic Configuration (except for debugging builds), see wsd/COOLWSD.cpp.
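Once the dynamic configuration is deployed, you can verify that the JSON is actually being served under the URL referenced by remoteconfigurl (hostname taken from the example above):

```bash
curl https://dynconfig.public.example.com/config/config.json
```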
- Works well with the Prometheus Operator (helm chart) and its bundled Grafana setup, by enabling the following values:

```yaml
prometheus:
  servicemonitor:
    enabled: true
    labels:
      release: "kube-prometheus-stack"
  rules:
    enabled: true # will deploy alert rules
    additionalLabels:
      release: "kube-prometheus-stack"
grafana:
  dashboards:
    enabled: true # will deploy default dashboards
```
PS: The label release=kube-prometheus-stack is set by the helm chart of the Prometheus Operator. For the Grafana dashboards, the sidecar may need to be allowed to scan the correct namespaces (or ALL), which is enabled via sidecar.dashboards.searchNamespace in the Grafana helm chart (part of the Prometheus Operator chart, so grafana.sidecar.dashboards.searchNamespace).
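To verify that the monitoring objects were actually created, assuming the Prometheus Operator CRDs are installed in the cluster:

```bash
kubectl -n collabora get servicemonitors,prometheusrules
```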