Datorios Flink for Kubernetes
This guide explains how to install and use Datorios Flink for Kubernetes.
⚠️ Note: This is not a replacement for the official Flink Kubernetes Operator but an extension that deploys Datorios Flink images.
General Requirements
Before deploying the Datorios Helm chart, ensure the following components are installed:
- A running Kubernetes cluster
- kubectl (see the Kubernetes "Install Tools" guide)
- Helm 3.17.0 or above (see "Installing Helm" in the Helm documentation)
- Flink Kubernetes Operator v1.10 or above (Apache Flink Kubernetes Operator)
- Kubernetes Metrics Server (Metrics Server on GitHub)
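As a quick sanity check before installing anything, you can confirm the prerequisites from the command line. These are standard kubectl and Helm commands; the operator lookup assumes it was installed under its default name:

kubectl version                                        # cluster is reachable
helm version                                           # should report 3.17.0 or above
kubectl top nodes                                      # succeeds only if the Metrics Server is running
kubectl get pods -A | grep flink-kubernetes-operator   # operator pods, if already installed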
Deploying Datorios Helm Chart
Step 1: Add the Helm Repository and Update
Before proceeding, you need to add the Datorios Helm repository and update it.
🔹 Contact us for more information on repository access.
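Once you have the access details, adding the repository follows the standard Helm pattern. The URL below is a placeholder, and the repository name datorios matches the chart reference used in Step 3:

helm repo add datorios <repository-url-provided-by-datorios>
helm repo update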
Step 2: Configure the values.yaml File
Before installing the Helm chart, update the values.yaml file with the appropriate settings.
Example Configuration:
data:
  CLUSTER_NAME: "demo-cluster"
  ORG_NAME: ""
  S3_REGION: ""
  S3_BUCKET: ""
  AWS_ACCESS_KEY_ID: ""
  AWS_SECRET_ACCESS_KEY: ""
  FLINK_VERSION: "1.17.2"
| Parameter | Value |
| --- | --- |
| CLUSTER_NAME | Change cluster name (Optional) |
| ORG_NAME | Provided by Datorios |
| S3_REGION | Provided by Datorios |
| S3_BUCKET | Provided by Datorios |
| AWS_ACCESS_KEY_ID | Provided by Datorios |
| AWS_SECRET_ACCESS_KEY | Provided by Datorios |
| FLINK_VERSION | 1.17.2 or 1.19.1 |
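Before installing, you can optionally render the chart locally to confirm the values file is picked up as expected; this assumes the repository was added as datorios in Step 1:

helm template datorios-env01 datorios/datorios-flink -f values.yaml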
Step 3: Install the Datorios Helm Chart
To deploy the Datorios Helm chart, run the following command:
helm install <deployment-name> datorios/datorios-flink -f <values.yaml>
Example Deployment:
helm install datorios-env01 datorios/datorios-flink -f values.yaml
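If you prefer to isolate the Datorios components in their own namespace, the standard Helm flags apply. The flink namespace below is only an example, and later kubectl commands would then need -n flink:

helm install datorios-env01 datorios/datorios-flink -f values.yaml --namespace flink --create-namespace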
Step 4: Verify Deployment Status
After installation, check if all components are running:
kubectl get pods
Expected Output (Example):
NAME READY STATUS RESTARTS AGE
datorios-demo-cluster-monitoring-7d9bbd9cfc-mjdv2 1/1 Running 0 59s
datorios-demo-cluster-otel-collector-worker-744bdc9676-ct88c 1/1 Running 0 59s
datorios-session-deployment-b574ff7f5-jhrvt 2/2 Running 0 56s
flink-kubernetes-operator-694976d45-v4ksf 2/2 Running 0 2m53s
flink-otel-collector-668bbb99f8-9bsbz 1/1 Running 0 59s
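You can also check the Helm release and the Flink resources directly; flinkdeployments is the custom resource provided by the Flink Kubernetes Operator, and the release name matches the one used in Step 3:

helm status datorios-env01
kubectl get flinkdeployments
kubectl wait --for=condition=Ready pod --all --timeout=180s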
Running a Flink Job
To submit a Flink job in Datorios, create a job YAML file with the following format.
Example Job YAML:
apiVersion: batch/v1
kind: Job
metadata:
  name: datorios-job-submitter-ds-runner
spec:
  template:
    spec:
      containers:
        - name: runner
          image: <DATORIOS-FLINK-IMAGE>
          envFrom:
            - configMapRef:
                name: datorios-config
          tty: true
          stdin: true
          args:
            - /root/flink/bin/flink
            - run
            - -m
            - datorios-session-deployment-rest:8081
            - -p
            - "4"
            - /root/flink/examples/streaming/WindowDemoFlexExample.jar
            - --limit
            - "200000"
            - --tumblingWindowSize
            - "100"
            - --rate
            - "20000"
            - --eventSize
            - "100"
      restartPolicy: Never
  ttlSecondsAfterFinished: 30
⚠️ Important: The exact image location has been removed as per internal policy. Contact Datorios for the appropriate image reference.
Step 1: Deploy the Job
kubectl apply -f job-submitter-example.yaml
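Once the Job is applied, the submitter pod runs the flink run command against the session cluster's REST endpoint (datorios-session-deployment-rest:8081). You can follow its progress with standard kubectl commands; the Job name matches the metadata in the example above:

kubectl get jobs
kubectl logs -f job/datorios-job-submitter-ds-runner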
Viewing Jobs in the Datorios Dashboard
To monitor running jobs and their status, log in to the Datorios Dashboard:
🔗 Datorios Dashboard
Important Notes
- Repository Access: The location for downloading the Helm chart is not publicly available. Please contact us for more details.
- Flink Job Image: The exact Docker image path has been omitted for security reasons. Reach out to Datorios for the appropriate reference.