Kubernetes deployment with Helm

Following some internal experiments, we have attempted to reduce the complexity of Kubernetes configuration by introducing Helm Charts, but the results are still not what you would call “plug and play”.

Nevertheless, in the interest of progress, knowing that some of our users are already experienced with Kubernetes, we have decided to release some skeletal documentation – “skeletal” in the sense that it is not complete, but perhaps adequate for experienced users who know how to fill in the gaps.

For a simpler, single-host deployment of Altair AI Hub, see Docker-compose deployment or the cloud images.

To help deliver docker images to an air-gapped environment, see Altair AI Hub docker images.
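
A generic way to move the images into an air-gapped registry is to archive them with docker save on a connected machine and re-load them on the other side; the image and registry names below are illustrative:

```shell
# On a machine with internet access: pull and archive the image
docker pull rapidminer/rapidminer-aihub:2024.1.1
docker save rapidminer/rapidminer-aihub:2024.1.1 -o rapidminer-aihub-2024.1.1.tar

# After transferring the archive to the air-gapped side: load, retag, push
docker load -i rapidminer-aihub-2024.1.1.tar
docker tag rapidminer/rapidminer-aihub:2024.1.1 registry.example.com/rapidminer-aihub:2024.1.1
docker push registry.example.com/rapidminer-aihub:2024.1.1
```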

We tested our example configuration with the following Kubernetes services:

To deploy Altair AI Hub with Kubernetes / Helm:

Before you begin

To deploy the Helm chart, you need basic Kubernetes infrastructure. This documentation will not explain Kubernetes infrastructure setup. The links below are intended as hints for getting started.

  1. Create Kubernetes infrastructure.

  2. As part of your Kubernetes setup, create NFS storage with a root folder:

    whose name we recommend you set to <NAMESPACE-PLACEHOLDER>, the same as your namespace, so that multiple deployments can share the same cluster and the same NFS storage -- see productNS and nfsPath in values.yaml. To let non-root container users read and write files in this folder, which is dedicated to your Altair RapidMiner stack, set the following permissions:

     chown -R 2011:root <NAMESPACE-PLACEHOLDER>
     chmod g+w <NAMESPACE-PLACEHOLDER>
    
  3. Create a namespace, also with name <NAMESPACE-PLACEHOLDER>.
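
For example, the namespace can be created with kubectl:

```shell
# Create the namespace that the chart (and the NFS root folder) will use
kubectl create namespace <NAMESPACE-PLACEHOLDER>
```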

  4. Have your server certificate ready. Alternatively, use the built-in Let's Encrypt.
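
If you bring your own certificate, one common approach is to store it as a standard Kubernetes TLS secret in the deployment's namespace; the secret name below is illustrative, and how the chart consumes it depends on your proxy configuration (see the proxy.https paths in values.yaml):

```shell
# Store the server certificate and private key as a TLS secret
kubectl create secret tls proxy-tls \
  --namespace <NAMESPACE-PLACEHOLDER> \
  --cert=server.crt \
  --key=server.key
```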

Introduction to Helm

Helm is a package manager for Kubernetes. A Helm Chart bundles the Kubernetes YAML files as templates, which you then configure via the file values.yaml. The details of this configuration depend on the details of your Kubernetes deployment. You and I may share the same templates, but our configurations (values.yaml) will differ. A typical Chart is a folder resembling the following:

mychart/
  Chart.yaml
  values.yaml
  charts/
  templates/
Chart.yaml
The Chart.yaml file contains a description of the chart. You can access it from within a template.
values.yaml
The file that defines your configuration; it contains the default values for the chart. These values may be overridden during helm install or helm upgrade.
charts/
The charts/ directory may contain other charts, called subcharts.
templates/
This folder contains the Kubernetes YAML files, as templates. When Helm evaluates a chart, it will send all of the files in the templates/ directory through the template rendering engine. It then collects the results of those templates and sends them on to Kubernetes. The placeholders in the YAML files are defined by values.yaml.
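
As a minimal illustration of the rendering step, a template like the following would have its placeholder filled from the common.publicUrl key in values.yaml (the file name and ConfigMap are invented for this example):

```yaml
# templates/example-configmap.yaml -- illustrative only
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config
data:
  # Helm substitutes the value configured in values.yaml
  PUBLIC_URL: {{ .Values.common.publicUrl | quote }}
```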

Read more:

Introductory videos:

Instructions

To simplify the configuration of the Kubernetes YAML files, we use Helm, the package manager for Kubernetes.

  1. Make sure that your Kubernetes infrastructure is in place, including Helm.

  2. Download the Helm archive and export its default values.yaml into a file named custom-values.yaml:

     helm show values ./rapidminer-aihub-2024.1.1.tgz > custom-values.yaml
    
  3. Edit custom-values.yaml and define your configuration by setting the appropriate values.
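
Several keys in custom-values.yaml expect generated client secrets (for example ssoClientSecret). As the comments in values.yaml suggest, a UUID-shaped secret can be produced with uuidgen or, as below, with openssl:

```shell
# Print a UUID-shaped client secret built from random hex chunks
echo "$(openssl rand -hex 4)-$(openssl rand -hex 2)-$(openssl rand -hex 2)-$(openssl rand -hex 2)-$(openssl rand -hex 6)"
```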

  4. Then run the following command against the Kubernetes cluster:

helm upgrade -n <NAMESPACE-PLACEHOLDER> --install rapidminer-aihub --values custom-values.yaml ./rapidminer-aihub-2024.1.1.tgz

Note that the value <NAMESPACE-PLACEHOLDER> is the same as the one you gave in custom-values.yaml for the key productNS.

EBS volumes are sensitive to multi-attach errors during rolling updates. It is best to scale down all the deployments before the update.
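
A sketch of the scale-down, assuming kubectl access to the cluster (the job agents run as StatefulSets, so both kinds are scaled):

```shell
# Scale every deployment and statefulset in the namespace to zero before upgrading
kubectl scale deployment --all --replicas=0 -n <NAMESPACE-PLACEHOLDER>
kubectl scale statefulset --all --replicas=0 -n <NAMESPACE-PLACEHOLDER>
# ...run helm upgrade, which restores the replica counts from the chart values
```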

Using profiles with Helm deployment

With profiles in our Docker Compose deployment, you can selectively start components in the stack. Starting with the 2024.0 release, a similar feature is available in our Helm chart as well.

You can provide the list of the required components in the deploymentProfiles attribute. Components not present in the list will not be deployed, or will be removed if they were deployed previously.

The list is flexible: any combination of the components can be deployed.

Take care to include all the required components in the list, including their dependencies.

You can verify the rendered templates with the helm template command.
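
For example, to inspect the rendered manifests without applying anything to the cluster:

```shell
# Render the chart locally with your configuration and list the generated resources
helm template rapidminer-aihub ./rapidminer-aihub-2024.1.1.tgz \
  -n <NAMESPACE-PLACEHOLDER> --values custom-values.yaml | grep -E '^kind:'
```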

Note that the helm.sh/resource-policy: keep annotation was added to every PVC, so if a component is temporarily disabled, its PVC is not deleted and the data is preserved.

At the top of values.yaml you can find the deploymentProfiles attribute:

deploymentProfiles:
- deployment-init
- jupyter
- panopticon
- grafana
- keycloak
- scoring-agent
- ces
- token-tool
- platform-admin
- landing-page
- letsencrypt
- aihub-webapi-gateway
- aihub-webapi-agent
- aihub-activemq
- aihub-backend
- aihub-frontend
- aihub-job-agent
- altair-license
- proxy

The list can be overridden in your custom-values.yaml.
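
For example, a reduced profile list in custom-values.yaml might look like the following; whether this particular subset satisfies all dependencies depends on your setup, so verify the result with helm template:

```yaml
# custom-values.yaml -- illustrative subset of deploymentProfiles
deploymentProfiles:
- deployment-init
- keycloak
- aihub-activemq
- aihub-backend
- aihub-frontend
- aihub-job-agent
- altair-license
- proxy
```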

To avoid accidental data loss while turning on and off profiles, we added a helm annotation to every PVC in the Helm chart:

annotations:
  helm.sh/resource-policy: keep

This annotation prevents Helm from deleting the PVCs; PVCs that are no longer needed must now be deleted manually.
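
If a retained PVC is genuinely no longer needed, it can be removed with kubectl, for example:

```shell
# List the PVCs kept by the helm.sh/resource-policy annotation...
kubectl get pvc -n <NAMESPACE-PLACEHOLDER>
# ...and delete the ones that are no longer needed
kubectl delete pvc <PVC-NAME-PLACEHOLDER> -n <NAMESPACE-PLACEHOLDER>
```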

Starting from the 2024.0 release, the annotations for the proxy service can be customized in the proxy section:

proxy:
  serviceName: "proxy-svc-pub"
  annotations:
# Sample for EKS
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "60"
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "false"
    service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: "Name=rapidminer-proxy-elb,Namespace=<NAMESPACE-PLACEHOLDER>"

The Helm configuration file (values.yaml)

##############################################################################
#                                                                            #
# Configure Deployment Profiles                                              #
#                                                                            #
##############################################################################
deploymentProfiles:
- deployment-init
- jupyter
- panopticon
- grafana
- keycloak
- scoring-agent
- ces
- token-tool
- platform-admin
- landing-page
- letsencrypt
- aihub-webapi-gateway
- aihub-webapi-agent
- aihub-activemq
- aihub-backend
- aihub-frontend
- aihub-job-agent
- altair-license
- proxy

common:
  domain: "<FQDN-PLACEHOLDER>"
  deploymentPort: "443"
  deploymentProtocol: "https"
# The public facing URL of your deployment
  publicUrl: "https://<FQDN-PLACEHOLDER>"
# The public facing domain of your deployment's Keycloak service
  ssoDomain: "<FQDN-PLACEHOLDER>"
# The public facing URL of your deployment's Keycloak service
  ssoPublicUrl: "https://<FQDN-PLACEHOLDER>"
# The namespace of the deployment
  productNS: "<NAMESPACE-PLACEHOLDER>"
# The docker image tag
  mainVersion: "2024.1.1"
# The docker image tag for Coding Environment Storage
  cesVersion: "2024.1.1"
# Docker registry prefix rapidminer/ references our public docker registry, but that can be changed to the fqdn of your internal registry
  dockerURL: "rapidminer/"
# The TZ database name of the deployment's timezone, for example "America/New_York"
# See: https://en.wikipedia.org/wiki/List_of_tz_database_time_zones
  timeZone: "<TIMEZONE-PLACEHOLDER>"
# An externally managed secret holding HHWU license variables. Can be generated from an almutil config file with the command
# $ kubectl create secret --namespace <NAMESPACE-PLACEHOLDER> generic altair-hhwu --from-env-file /usr/local/altair/altair_hostedhwu.cfg
# externalLicenseSecret: "altair-hhwu"

##############################################################################
#                                                                            #
# Custom CA config block                                                     #
#                                                                            #
##############################################################################
  customCA:
    enabled: False
    tlsSecretName: customca

#
# DO NOT MODIFY THE PATHS
#
  jdkCACertPath: "/mnt/cacerts"
  debCertPath: "/etc/ssl/certs"
##############################################################################
#
# Platform related values, please choose one from:
# "OpenShift" : OpenShift related security and other infrastructure settings
# "EKS" : Amazon Elastic Kubernetes related security and other infrastructure settings
# "AKS" : Azure Kubernetes related security and other infrastructure settings
# "GKE" : Google Kubernetes Engine related security and other infrastructure settings
# "Other" : Other (like on-prem installations)
  platform: "EKS"
# Platform Specifications
  platformSpec:
    gke:
      # Specify the VPC name in which the GKE cluster is created.
      vpcName: ""
    openshift:
      # In OpenShift deployments a ClusterIP is used for the AiHub proxy service and a route can be created to expose the service.
      # The route can be created by this chart or manually by the sysadmin.
      createRoute: True
# With nodeSelector you can instruct the kubernetes scheduler to start your pods on nodes having the provided labels.
# Any label of the worker nodes can be used; if there are no matching nodes, the pod will remain in Pending state
#  nodeSelector:
#    <NODE-LABEL-1-NAME-PLACEHOLDER>: "<NODE-LABEL-1-VALUE-PLACEHOLDER>"
#    <NODE-LABEL-2-NAME-PLACEHOLDER>: "<NODE-LABEL-2-VALUE-PLACEHOLDER>"
  nodeSelector: {}
# If not empty, this image pull secret name will be referenced in the deployments
# Creating the secret itself is out of scope for this chart; it must be created manually
  imagePullSecret: []

# This will be the initial user, which will have admin permission in the deployment.
  initialUser: "admin"
# Initial password for the initial user
  initialPass: "<ADMIN-PASS-PLACEHOLDER>"
# The built-in OIDC server realm; this realm is used by the components for SSO communication (Keycloak)
  defaultSSORealm: "master"
# Default SSL requirement to access KeyCloak SSO
  ssoSSL: "external"

license:
  # Possible values are 'altair_unit' and 'rapidminer'
  # Use 'altair_unit' to enable Altair Unit Licensing
  # Use 'rapidminer' to use a legacy Rapidminer license
  type: "altair_unit"
  # Configurations for 'altair_unit' license type
  altair:
    # Possible values are 'on_prem' and 'altair_one'
    # Use 'on_prem' to connect to an Altair License Manager installed on-prem
    # Use 'altair_one' to connect to public Altair One
    mode: "altair_one"
    # Configurations for 'on_prem' mode
    onPrem:
      # Altair License Manager endpoint host and port, required only for on_prem Altair license mode
      host: "<ALTAIR-LICENSE-MANAGER-HOST-PLACEHOLDER>"
      port: "<ALTAIR-LICENSE-MANAGER-PORT-PLACEHOLDER>"
    # Configurations for 'altair_one' mode
    altairOne:
      # Authentication type for communicating with license server
      # possible values are 'credentials', 'auth_code' and 'static_token'
      authType: 'credentials'
      # Required only for altair_one Altair license mode and authType credentials
      credentials:
        # Altair One username
        username: "<ALTAIR-ONE-USERNAME-PLACEHOLDER>"
        # Altair One password
        password: "<ALTAIR-ONE-PASSWORD-PLACEHOLDER>"
        # When mode is 'altair_one', resets any stored auth code when 'credentials' already persisted a valid auth token
        resetAuthToken: false
      # Required only for altair_one Altair license mode and authType static_token NOT YET SUPPORTED
      staticToken:
        token: <ACCESS-TOKEN-PLACEHOLDER>
      # Required only for altair_one Altair license mode and authType auth_code NOT YET SUPPORTED
      authCode:
        code: <AUTH_CODE-PLACEHOLDER>
  # Configurations for 'rapidminer' license type
  # Please note that if you use RapidMiner licensing, you have to set panopticonVizapp.license.detached to true and
  # provide Panopticon licensing settings
  rapidminer:
    # The value of the legacy license
    licenseValue: "<AIHUB-LICENSE-PLACEHOLDER>"
    # The name of the kubernetes secret, which contains the legacy RapidMiner License (only matters if 'enableAltairUnitLicense' is false)
    licenseSecretName: "aihub-license"
    # The key of the legacy license in the Kubernetes secret, default value is "LICENSE_LICENSE" (only matters if 'enableAltairUnitLicense' is false)
    licenseSecretKey: "LICENSE_LICENSE"

storage:
# To disable PVC creation set this to false
# (requires pre-provisioned PVCs)
  createPVCs: "true"
# Default storageclass, for one POD (single mount)
  defaultStorageClassRWO: "<STORAGECLASS-PLACEHOLDER_RWO>"
# Default storageclass, for several pods (multiple mounts)
  defaultStorageClassRWX: "<STORAGECLASS-PLACEHOLDER_RWX>"

proxy:
  serviceName: "proxy-svc-pub"
# Sample POD annotation: specify the PV which velero needs to back up with node-agent (formerly restic)
  podAnnotations:
#   backup.velero.io/backup-volumes: proxy-pv
  annotations:
# Sample for AKS
#    service.beta.kubernetes.io/azure-load-balancer-tcp-idle-timeout: "30" #It is the maximum value
# Set this as well if your deployment shall be air-gapped
#    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
# Sample for EKS
#    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
#    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "60"
#    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "false"
#    service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: "Name=rapidminer-proxy-elb,Namespace=<NAMESPACE-PLACEHOLDER>"
# Set this as well if your deployment shall be air-gapped
#    service.beta.kubernetes.io/aws-load-balancer-scheme: internal
# Sample for GKE
#   Set this as well if your deployment shall be air-gapped
#    networking.gke.io/load-balancer-type: "Internal"
# You can overwrite the Docker registry prefix rapidminer/ if you have your own repository; set it to the FQDN of your internal registry
# repoName: "<registry.example.com/> or <customedockerhubreponame/>"
  imageName: "rapidminer-proxy"
# You can overwrite the mainVersion value for this component
# version: "2024.1.0"
  configName: "proxy-config"
# Deprecated, please use httpPort and httpsPort
  unprivilegedPorts: "true"
# Proxy ports: httpPort will respond with a redirect once configured with https
# These ports will be applied to the service, exposed from the proxy container and nginx will listen on these ports inside the container
  httpPort: 1080
  httpsPort: 1443
  dataUploadLimit: "25GB"
  metrics:
    authBasic:
      user: "admin"
      password: "changit"
  https:
    crtPath: /etc/nginx/ssl/tls.crt
    keyPath: /etc/nginx/ssl/tls.key
    keyPasswordPath: /etc/nginx/ssl/password.txt
    dhPath: /etc/nginx/ssl/dhparam.pem
# You can overwrite the defaultStorageClassRWX value for this component
# dhparamStorageClass: "<STORAGECLASS-PLACEHOLDER_RWX>"
  dhparamStorageSize: "100M"
# You can overwrite the defaultStorageClassRWO value for this component
# storageClass: "<STORAGECLASS-PLACEHOLDER_RWO>"
  pvcName: "proxy-pvc"
  # initialdelayseconds + failurethreshold * (periodseconds + timeoutseconds)
  readinessprobe:
    failurethreshold: 3
    initialdelayseconds: 60
    periodseconds: 60
    timeoutseconds: 1
  storageSize: "10Gi"
  debug: "false"
  resources:
    requests:
      memory: "256M"
      cpu: "0.5"
    limits:
      memory: "256M"
      cpu: "0.5"
  securityContext:
    fsGroup: 0

letsEncrypt:
# You can overwrite the Docker registry prefix rapidminer/ if you have your own repository; set it to the FQDN of your internal registry
# repoName: "<registry.example.com/> or <customedockerhubreponame/>"
  imageName: "rm-letsencrypt-client"
# You can overwrite the mainVersion value for this component
# version: "2024.1.0"
  configName: "letsencrypt-client-config"
  allowLetsEncrypt: "true"
  certsHome: "/certificates/"
  readinessprobe:
    failurethreshold: 3
    initialdelayseconds: 60
    periodseconds: 60
    timeoutseconds: 1
  webMasterEmail: "<WEBMASTER-EMAIL-PLACEHOLDER>"
  resources:
    requests:
      memory: "128M"
      cpu: "0.2"
    limits:
      memory: "128M"
      cpu: "0.2"

landingPage:
# Sample POD annotation
  podAnnotations:
#   backup.velero.io/backup-volumes: proxy-pv
  serviceName: "landing-page-svc"
# You can overwrite the Docker registry prefix rapidminer/ if you have your own repository; set it to the FQDN of your internal registry
# repoName: "<registry.example.com/> or <customedockerhubreponame/>"
  imageName: "rapidminer-deployment-landing-page"
# You can overwrite the mainVersion value for this component
# version: "2024.1.0"
  configName: "landing-page-config"
# You can overwrite the defaultstorageClass value for this component
# storageClass: "<STORAGECLASS-PLACEHOLDER_RWO>"
  pvcName: "landing-page-uploaded-pvc"
  storageSize: "100M"
  ssoClientId: "landing-page"
# keycloak client secrets can be generated with the uuidgen command from the uuid package or
# with the openssl library: echo "$(openssl rand -hex 4)-$(openssl rand -hex 2)-$(openssl rand -hex 2)-$(openssl rand -hex 2)-$(openssl rand -hex 6)"
  ssoClientSecret: "<LANDING-PAGE-OIDC-CLIENT-SECRET-PLACEHOLDER>"
  readinessprobe:
    failurethreshold: 6
    initialdelayseconds: 30
    periodseconds: 60
    timeoutseconds: 1
  debug: "false"
  resources:
    requests:
      memory: "128M"
      cpu: "0.2"
    limits:
      memory: "128M"
      cpu: "0.5"
  securityContext:
    fsGroup: 33

aihubDB:
# Sample POD annotation
  podAnnotations:
#   backup.velero.io/backup-volumes: proxy-pv
  serviceName: "aihub-db-svc"
# You can overwrite the Docker registry prefix rapidminer/ if you have your own repository; set it to the FQDN of your internal registry
# repoName: "<registry.example.com/> or <customedockerhubreponame/>"
  imageName: "postgres-14"
# You can overwrite the mainVersion value for this component
# version: "2024.1.0"
  configName: "aihub-db-config"
# You can overwrite the defaultstorageClass value for this component
# storageClass: "<STORAGECLASS-PLACEHOLDER_RWO>"
  pvcName: "aihub-db-pvc"
  storageSize: "10Gi"
  dbName: "<SERVER-DB-NAME-PLACEHOLDER>"
  dbPort: "5432"
  dbUser: "<SERVER-DB-USER-PLACEHOLDER>"
  dbPass: "<SERVER-DB-PASS-PLACEHOLDER>"
# dataDirectory shall be mountDirectory/data
  dataDirectory: '/rapidminer/data'
  mountDirectory: '/rapidminer'
# Postgres initdb args
# The last parameter is the DB container mountPath
  initdbArgs: "--encoding UTF8 --locale=C /rapidminer/data"
  readinessprobe:
    failurethreshold: 2
    initialdelayseconds: 30
    periodseconds: 60
    timeoutseconds: 1
  resources:
    requests:
      memory: "256M"
      cpu: "0.5"
    limits:
      memory: "256M"
      cpu: "0.5"
  securityContext:
    fsGroup: 0

aihubFrontend:
# Sample POD annotation
  podAnnotations:
#   backup.velero.io/backup-volumes: proxy-pv
  serviceName: "aihub-frontend-svc"
  # You can overwrite the Docker registry prefix rapidminer/ if you have your own repository; set it to the FQDN of your internal registry
  # repoName: "<registry.example.com/> or <customedockerhubreponame/>"
  imageName: "rapidminer-aihub-ui"
  # You can overwrite the mainVersion value for this component
  # version: "2024.1.0"
  configName: "aihub-frontend-config"
  nginxPort: "1080"
  ssoClientId: "aihub-frontend"
  keycloakOnLoad: "login-required"
  readinessprobe:
    failurethreshold: 6
    initialdelayseconds: 30
    periodseconds: 60
    timeoutseconds: 1
  resources:
    requests:
      memory: "2G"
      cpu: "1"
    limits:
      memory: "2G"
      cpu: "1"
  securityContext:
    fsGroup: 0

activemq:
# Sample POD annotation
  podAnnotations:
#   backup.velero.io/backup-volumes: proxy-pv
  serviceName: "activemq-svc"
# You can overwrite the Docker registry prefix rapidminer/ if you have your own repository; set it to the FQDN of your internal registry
# repoName: "<registry.example.com/> or <customedockerhubreponame/>"
  imageName: "rapidminer-activemq-artemis"
# You can overwrite the mainVersion value for this component
# version: "2024.1.0"
  configName: "activemq-config"
  pvcName: "activemq-artemis-pvc"
  storageSize: "10Gi"
  readinessprobe:
    failurethreshold: 2
    initialdelayseconds: 20
    periodseconds: 60
    timeoutseconds: 1
  broker:
    port: 61616
    username: "<SERVER-AMQ-USER-NAME-PLACEHOLDER>"
    password: "<SERVER-AMQ-PASS-PLACEHOLDER>"
  resources:
    requests:
      memory: "4G"
      cpu: "2"
    limits:
      memory: "4G"
      cpu: "2"
  securityContext:
    fsGroup: 0

aihubBackendInit:
# You can overwrite the Docker registry prefix rapidminer/ if you have your own repository; set it to the FQDN of your internal registry
# repoName: "<registry.example.com/> or <customedockerhubreponame/>"
  imageName: "aihub-backend-init-container"
# You can overwrite the mainVersion value for this component
# version: "2024.1.0"
  configName: "aihub-backend-config"
  resources:
    requests:
      memory: "256M"
      cpu: "0.5"
    limits:
      memory: "256M"
      cpu: "0.5"
  securityContext:
    fsGroup: 0

aihubBackend:
# Sample POD annotation
  podAnnotations:
#   prometheus.io/scrape: "true"
#   prometheus.io/path: "/system/prometheus"
#   prometheus.io/port: "8077"
#   backup.velero.io/backup-volumes: proxy-pv
  serviceName: "aihub-backend-svc"
# You can overwrite the Docker registry prefix rapidminer/ if you have your own repository; set it to the FQDN of your internal registry
# repoName: "<registry.example.com/> or <customedockerhubreponame/>"
  imageName: "rapidminer-aihub"
# You can overwrite the mainVersion value for this component
# version: "2024.1.0"
  configName: "aihub-backend-config"
# You can overwrite the defaultstorageClass value for this component
# storageClass: "<STORAGECLASS-PLACEHOLDER_RWO>"
  pvcName: "aihub-home-pvc"
  storageSize: "500Gi"
  ssoClientId: "aihub-backend"
# keycloak client secrets can be generated with the uuidgen command from the uuid package or
# with the openssl library: echo "$(openssl rand -hex 4)-$(openssl rand -hex 2)-$(openssl rand -hex 2)-$(openssl rand -hex 2)-$(openssl rand -hex 6)"
  ssoClientSecret: "<SERVER-OIDC-CLIENT-SECRET-PLACEHOLDER>"
  springProfilesActive: "default,prometheus"
  memLimit: "2048M"
  logLevel: "INFO"
  readinessprobe:
    failurethreshold: 2
    initialdelayseconds: 100
    periodseconds: 60
    timeoutseconds: 1
  platformAdminSyncDebug: "False"
  legacyRESTBasicAuth: "false"
  loadUserCertificates: "true"
  resources:
    requests:
      memory: "4G"
      cpu: "2"
    limits:
      memory: "4G"
      cpu: "2"
  securityContext:
    fsGroup: 0
    runAsUser: 2011
# SMTP settings
  smtpEnabled: False
  smtpHost: ""
  smtpPort: ""
  smtpUserName: ""
  smtpPassword: ""
  smtpAuth: "true"
  smtpStartTLS: "true"
  reportErrMailTo: ""
  reportErrMailSubject: ""
  reportErrMailFromAddress: ""
  reportErrMailFromName: ""
# Jobs Cleanup
# https://docs.rapidminer.com/latest/hub/manage/job-execution-infrastructure/job-cleanup.html
  jobservice:
    scheduledArchiveJob:
      cleanupEnabled: False
      jobCronExpression: "0 0 * * * *"
      jobContextCronExpression: ""
      maxAge: ""
      jobBatchSize: ""
      jobContextBatchSize: ""

jobagents:
# Sample POD annotation
  podAnnotations:
#   prometheus.io/scrape: "true"
#   prometheus.io/path: "/system/prometheus"
#   prometheus.io/port: "8066"
#   backup.velero.io/backup-volumes: proxy-pv
  ssoClientId: "aihub-jobagent"
  ssoClientSecret: "<JOBAGENT-OIDC-CLIENT-SECRET-PLACEHOLDER>"
  # You can overwrite the Docker registry prefix rapidminer/ if you have your own repository; set it to the FQDN of your internal registry
  # repoName: "<registry.example.com/> or <customedockerhubreponame/>"
  imageName: "rapidminer-jobagent"
  # You can overwrite the mainVersion value for this component
  # version: "2024.1.0"
  agents:
    - configName: "job-agents-config-default-queue"
      serviceName: "job-agents-default-queue"
      statefulsetName: "job-agents-default-queue"
      selectorLabels:
        app: job-agents-default-queue
        tier: execution
      replicas: 1
      # You can overwrite the SC where JA store its config
      # storageClass: "<STORAGECLASS-PLACEHOLDER_RWO>"
      homeStorageSize: "10Gi"
      # legacy name:
      # homePvcName: "jobagent-home-pvc"
      # huggingfacePvcName: "jobagent-huggingface"
      homePvcName: "job-agents-default-queue-home-pvc"
      huggingfacePvcName: "job-agents-default-queue-huggingface-pvc"
      huggingfaceStorageSize: "10Gi"
      name: "JOBAGENT-DEFAULT-QUEUE"
      springProfilesActive: "default,prometheus"
      logLevel: "INFO"
      jobQueue: "DEFAULT"
      containerCount: "1"
      containerMemLimit: "2048"
      initSharedCondaSettings: "true"
      containerJavaOpts: ""
      javaOpts: "-Djobagent.container.jvmCustomProperties=Dlogging.level.com.rapidminer=INFO"
      resources:
        requests:
          memory: "4G"
          cpu: "2"
        limits:
          memory: "4G"
          cpu: "2"
      securityContext:
        fsGroup: 0
#    - configName: "job-agents-config-second-queue"
#      serviceName: "job-agents-second-queue"
#      statefulsetName: "job-agents-second-queue"
#      selectorLabels:
#        app: job-agents-second-queue
#        tier: execution
#      replicas: 2
#      # You can overwrite the SC where JA store its config
#      # storageClass: "<STORAGECLASS-PLACEHOLDER_RWO>"
#      homeStorageSize: "10Gi"
#      homePvcName: "job-agents-second-queue-home-pvc"
#      huggingfacePvcName: "job-agents-second-queue-huggingface-pvc"
#      huggingfaceStorageSize: "10Gi"
#      name: "JOBAGENT-SECOND-QUEUE"
#      springProfilesActive: "default,prometheus"
#      logLevel: "INFO"
#      jobQueue: "SECOND-QUEUE"
#      containerCount: "1"
#      containerMemLimit: "2048"
#      initSharedCondaSettings: "true"
#      containerJavaOpts: ""
#      javaOpts: "-Djobagent.container.jvmCustomProperties=Dlogging.level.com.rapidminer=INFO"
#      resources:
#        requests:
#          memory: "4G"
#          cpu: "2"
#        limits:
#          memory: "4G"
#          cpu: "2"
#      securityContext:
#        fsGroup: 0

# Legacy Job Agent configuration; will be removed. The code in templates/job-agent.yml is commented out
#jobagent:
## You can overwrite the Docker registry prefix rapidminer/ if you have your own repository; set it to the FQDN of your internal registry
## repoName: "<registry.example.com/> or <customedockerhubreponame/>"
#  imageName: "rapidminer-jobagent"
## You can overwrite the mainVersion value for this component
## version: "10.3.2"
#  configName: "job-agents-config"
## You can overwrite the SC where JA store its config
## storageClass: "<STORAGECLASS-PLACEHOLDER_RWO>"
#  homeStorageSize: "10Gi"
#  homePvcName: "jobagent-home-pvc"
#  huggingfacePvcName: "jobagent-huggingface"
#  name: "JOBAGENT-1" # TODO make this dynamic with the StatefulSet
#  ssoClientId: "aihub-jobagent"
#  ssoClientSecret: "<JOBAGENT-OIDC-CLIENT-SECRET-PLACEHOLDER>"
#  springProfilesActive: "default,prometheus"
#  logLevel: "INFO"
#  jobQueue: "DEFAULT"
#  containerCount: "1"
#  containerMemLimit: "2048"
#  initSharedCondaSettings: "true"
#  containerJavaOpts: ""
#  javaOpts: "-Djobagent.container.jvmCustomProperties=Dlogging.level.com.rapidminer=INFO"
#  resources:
#    requests:
#      memory: "4G"
#      cpu: "2"
#    limits:
#      memory: "4G"
#      cpu: "2"
#  securityContext:
#    fsGroup: 0

keycloak:
# Sample POD annotation
  podAnnotations:
#   backup.velero.io/backup-volumes: proxy-pv
# You can overwrite the Docker registry prefix rapidminer/ if you have your own repository; set it to the FQDN of your internal registry
# repoName: "<registry.example.com/> or <customedockerhubreponame/>"
  serviceName: "keycloak-svc"
  imageName: "rapidminer-keycloak"
# You can overwrite the mainVersion value for this component
# version: "2024.1.0"
  configName: "keycloak-config"
  logLevel: "info"
  features: "token-exchange"
  healthEnabled: "true"
  hostname:
    strict: "false"
    strictBackchannel: "false"
    strictHttps: "false"
    backchannel:
      dynamic: "false"
  proxyHeaders: "xforwarded"
  httpEnabled: "true"
  readinessprobe:
    failurethreshold: 2
    initialdelayseconds: 45
    periodseconds: 60
    timeoutseconds: 1
  resources:
    requests:
      memory: "1G"
      cpu: "0.5"
    limits:
      memory: "1G"
      cpu: "0.5"
  securityContext:
    fsGroup: 0

keycloakDB:
# Sample POD annotation
  podAnnotations:
#   backup.velero.io/backup-volumes: proxy-pv
# You can overwrite the Docker registry prefix rapidminer/ if you have your own repository; set it to the FQDN of your internal registry
# repoName: "<registry.example.com/> or <customedockerhubreponame/>"
  serviceName: "keycloak-db-svc"
  imageName: "postgres-14"
# You can overwrite the mainVersion value for this component
# version: "2024.1.0"
  configName: "keycloak-db-config"
# You can overwrite the defaultstorageClass value for this component
# storageClass: "<STORAGECLASS-PLACEHOLDER_RWO>"
  pvcName: "keycloak-db-pvc"
  storageSize: "10Gi"
  vendor: "postgres"
  dbName: "<KEYCLOAK-DB-NAME-PLACEHOLDER>"
  dbUser: "<KEYCLOAK-DB-USER-PLACEHOLDER>"
  dbPass: "<KEYCLOAK-DB-PASS-PLACEHOLDER>"
# dataDirectory shall be mountDirectory/data
  dataDirectory: '/rapidminer/data'
  mountDirectory: '/rapidminer'
# Postgres initdb args
# The last parameter is the DB container mountPath
  initdbArgs: "--encoding UTF8 --locale=C /rapidminer/data"
  dbSchema: "public"
  readinessprobe:
    failurethreshold: 2
    initialdelayseconds: 15
    periodseconds: 60
    timeoutseconds: 1
  resources:
    requests:
      memory: "256M"
      cpu: "0.5"
    limits:
      memory: "256M"
      cpu: "0.5"
  securityContext:
    fsGroup: 0

licenseProxy:
  springProfilesActive: "default,prometheus"
# Sample POD annotation
  podAnnotations:
#   prometheus.io/scrape: "true"
#   prometheus.io/path: "/actuator/prometheus"
#   prometheus.io/port: "9191"
#   backup.velero.io/backup-volumes: proxy-pv
  # You can overwrite the Docker registry prefix rapidminer/ if you have your own repository; set it to the FQDN of your internal registry
# repoName: "<registry.example.com/> or <customedockerhubreponame/>"
  serviceName: "license-proxy-svc"
  port: "9898"
  imageName: "rapidminer-licenseproxy"
# You can overwrite the mainVersion value for this component
# version: "2024.1.0"
  configName: "license-proxy-config"
# You can overwrite the defaultstorageClass value for this component
# storageClass: "<STORAGECLASS-PLACEHOLDER_RWO>"
  pvcName: "license-proxy-pvc"
  storageSize: "1Gi"
  debug: "false"
  # If externalLicenseSecret is set it will override this value
  secretName: "license-proxy-secret"
  secretKeyName: "TOKEN"
  licenseProxyOpts: "-Xmx2g"
  readinessprobe:
    failurethreshold: 3
    initialdelayseconds: 60
    periodseconds: 60
    timeoutseconds: 1
  resources:
    requests:
      memory: "2560M"
      cpu: "0.5"
    limits:
      memory: "2560M"
      cpu: "0.5"
  securityContext:
    fsGroup: 0

deploymentInit:
# Sample POD annotation
  podAnnotations:
#   backup.velero.io/backup-volumes: proxy-pv
# You can overwrite the Docker registry prefix rapidminer/ if you have your own repository; change it to the FQDN of your internal registry
# repoName: "<registry.example.com/> or <customedockerhubreponame/>"
  imageName: "rapidminer-deployment-init"
# You can overwrite the mainVersion value for this component
# version: "2024.1.0"
  configName: "deployment-init-config"
# You can overwrite the defaultstorageClass value for this component
# storageClass: "<STORAGECLASS-PLACEHOLDER_RWO>"
  pvcName: "deployment-init-pvc"
  storageSize: "100M"
  debug: "false"
  resources:
    requests:
      memory: "256M"
      cpu: "0.5"
    limits:
      memory: "256M"
      cpu: "0.5"
  securityContext:
    fsGroup: 0

platformAdmin:
# Sample POD annotation: specifies the PV that Velero should back up via node-agent (formerly restic)
  podAnnotations:
#   backup.velero.io/backup-volumes: platform-admin-webui-uploaded-cnt-pv
# You can overwrite the Docker registry prefix rapidminer/ if you have your own repository; change it to the FQDN of your internal registry
# repoName: "<registry.example.com/> or <customedockerhubreponame/>"
  serviceName: "platform-admin-webui-svc"
  imageName: "rapidminer-platform-admin-webui"
# You can overwrite the mainVersion value for this component
# version: "2024.1.0"
  configName: "platform-admin-webui-config"
# You can overwrite the defaultstorageClass value for this component
# storageClass: "<STORAGECLASS-PLACEHOLDER_RWO>"
  pvcName: "platform-admin-webui-uploaded-pvc"
  storageSize: "10Gi"
  readinessprobe:
    failurethreshold: 2
    initialdelayseconds: 10
    periodseconds: 60
    timeoutseconds: 1
  proxyURLSuffix: "/platform-admin"
  proxyRTSWebUISuffix: "/rts-admin"
  ssoClientId: "platform-admin"
# Keycloak client secrets can be generated with the uuidgen command (from the uuid package) or
# with the openssl library: echo "$(openssl rand -hex 4)-$(openssl rand -hex 2)-$(openssl rand -hex 2)-$(openssl rand -hex 2)-$(openssl rand -hex 6)"
  ssoClientSecret: "<PLATFORM-ADMIN-OIDC-CLIENT-SECRET-PLACEHOLDER>"
  disablePython: "false"
  disableRTS: "false"
  debug: "false"
  resources:
    requests:
      memory: "256M"
      cpu: "0.5"
    limits:
      memory: "256M"
      cpu: "0.5"
  securityContext:
    fsGroup: 0

ces:
# Sample POD annotation
  podAnnotations:
#   backup.velero.io/backup-volumes: proxy-pv
# You can overwrite the Docker registry prefix rapidminer/ if you have your own repository; change it to the FQDN of your internal registry
# repoName: "<registry.example.com/> or <customedockerhubreponame/>"
  imageName: "rapidminer-coding-environment-storage"
# You can overwrite the mainVersion value for this component
# version: "2024.1.0"
  configName: "rapidminer-coding-environment-storage-config"
  pythonPackageLink: "git+https://github.com/rapidminer/python-rapidminer.git@9.10.0.0"
  pvcName: "coding-environment-storage"
  pvcSubPath: "coding-shared"
  storageSize: 250Gi
  #sharedStorageClass: "<STORAGECLASS-PLACEHOLDER_RWX>"
  ubuntuUid: "9999"
  ubuntuGid: "9999"
  debug: "False"
  disableDefaultChannels: "True"
  condaChannelPriority: "strict"
  rapidMinerUser: "rapidminer"
  resources:
    requests:
      memory: "256M"
      cpu: "0.1"
    limits:
      memory: "5G"
      cpu: "1"
  securityContext:
    fsGroup: 0

scoringAgent:
# Sample POD annotation
  podAnnotations:
#   prometheus.io/scrape: "true"
#   prometheus.io/path: "/system/prometheus"
#   prometheus.io/port: "8067"
#   backup.velero.io/backup-volumes: proxy-pv
# You can overwrite the Docker registry prefix rapidminer/ if you have your own repository; change it to the FQDN of your internal registry
# repoName: "<registry.example.com/> or <customedockerhubreponame/>"
  serviceName: "scoring-agent-svc"
  imageName: "rapidminer-scoringagent"
# This is the last version of the Scoring Agent; please migrate to the Web API
# version: "2024.1.1"
  configName: "scoring-agent-config"
# You can overwrite the defaultstorageClass value for this component
# storageClass: "<STORAGECLASS-PLACEHOLDER_RWX>"
  pvcName: "scoring-home-pvc"
  storageSize: "10Gi"
  licensesPvcName: "scoring-licenses-pvc"
  ssoClientId: "aihub-scoringagent"
# Keycloak client secrets can be generated with the uuidgen command (from the uuid package) or
# with the openssl library: echo "$(openssl rand -hex 4)-$(openssl rand -hex 2)-$(openssl rand -hex 2)-$(openssl rand -hex 2)-$(openssl rand -hex 6)"
  ssoClientSecret: "<SCORING-AGENT-OIDC-CLIENT-SECRET-PLACEHOLDER>"
  proxyURLSuffix: "/rts"
  springProfilesActive: "default,prometheus"
  cacheRepositoryClearOnCollection: "false"
  cacheRepositoryMaximumSize: "50"
  cacheRepositoryAccessExpiration: "3600000"
  cacheRepositoryCopyCachedIoObject: "true"
  corsPathPatter: ""
  corsAllowedMethods: "*"
  corsAllowedHeaders: "*"
  corsAllowedOrigins: "*"
  restContextPath: "/api"
  taskSchedulerPoolSize: "10"
  taskSchedulerThreadPriority: "5"
  executionCleanupEnabled: "false"
  executionCleanupCronExpression: "0 0 0-6 ? * * *"
  executionCleanupTimeout: "10000"
  executionCleanupWaitBetween: "1000"
  auditEnabled: "false"
  waitForLicenses: "1"
  basicAuth:
    enabled: "true"
    user: "admin"
    password: "changeit"
  rtsServerLicense: "true"
  readinessprobe:
    failurethreshold: 6
    initialdelayseconds: 30
    periodseconds: 60
    timeoutseconds: 1
  resources:
    requests:
      memory: "1G"
      cpu: "1"
    limits:
      memory: "4G"
      cpu: "2"
  securityContext:
    fsGroup: 0

jupyterDB:
# Sample POD annotation
  podAnnotations:
#   backup.velero.io/backup-volumes: proxy-pv
# You can overwrite the Docker registry prefix rapidminer/ if you have your own repository; change it to the FQDN of your internal registry
# repoName: "<registry.example.com/> or <customedockerhubreponame/>"
# Keep serviceName set to "jupyterhub-db", or the sample notebook content may fail to connect
  serviceName: "jupyterhub-db"
  imageName: "rapidminer-jupyterhub-postgres"
# You can overwrite the mainVersion value for this component
# version: "2024.1.0"
  configName: "jupyterhub-db-config"
# You can overwrite the defaultstorageClass value for this component
# storageClass: "<STORAGECLASS-PLACEHOLDER_RWO>"
  pvcName: "jupyterhub-db-pvc"
  storageSize: "10Gi"
  vendor: "POSTGRES"
  dbName: "<JUPYTERHUB-DB-NAME-PLACEHOLDER>"
  dbUser: "<JUPYTERHUB-DB-USER-PLACEHOLDER>"
  dbPass: "<JUPYTERHUB-DB-PASS-PLACEHOLDER>"
# dataDirectory must be mountDirectory/data
  dataDirectory: '/rapidminer/data'
  mountDirectory: '/rapidminer'
  # Postgres initdb args
  # The last parameter is the DB container mountPath
  initdbArgs: "--encoding UTF8 --locale=C /rapidminer/data"
  readinessprobe:
    failurethreshold: 2
    initialdelayseconds: 35
    periodseconds: 60
    timeoutseconds: 1
  resources:
    requests:
      memory: "256M"
      cpu: "0.5"
    limits:
      memory: "256M"
      cpu: "0.5"
  securityContext:
    fsGroup: 0

jupyterHub:
# Sample POD annotation
  podAnnotations:
#   backup.velero.io/backup-volumes: proxy-pv
# You can overwrite the Docker registry prefix rapidminer/ if you have your own repository; change it to the FQDN of your internal registry
# repoName: "<registry.example.com/> or <customedockerhubreponame/>"
  proxyServiceName: "jupyterhub-proxy-svc-priv"
  proxyAPIServiceName: "jupyterhub-proxy-api-svc-priv"
  serviceName: "jupyterhub-hub-svc-priv"
  imageName: "rapidminer-jupyterhub-jupyterhub"
# You can overwrite the mainVersion value for this component
# version: "2024.1.0"
  configName: "jupyterhub-config"
  createServiceAccount: "true"
  initRBAC: "true"
  serviceAccountName: "jupyterhub-kubespawner-service-account"
# You can overwrite the defaultstorageClass value for this component
# storageClass: "<STORAGECLASS-PLACEHOLDER_RWO>"
  proxyURLSuffix: "/jupyter"
# The JupyterHub crypt key can be generated with the command: openssl rand -hex 32
  cryptKey: "<JUPYTERHUB-CRYPT-KEY-PLACEHOLDER>"
  debug: "False"
  tokenDebug: "False"
  proxyDebug: "False"
  dbDebug: "False"
  spawnerDebug: "False"
  stackName: "default"
  ssoClientId: "jupyterhub"
# Keycloak client secrets can be generated with the uuidgen command (from the uuid package) or
# with the openssl library: echo "$(openssl rand -hex 4)-$(openssl rand -hex 2)-$(openssl rand -hex 2)-$(openssl rand -hex 2)-$(openssl rand -hex 6)"
  ssoClientSecret: "<JUPYTERHUB-OIDC-CLIENT-SECRET-PLACEHOLDER>"
  ssoUserNameKey: "preferred_username"
  ssoResourceAccKey: "resource_access"
  spawner: "kubespawner"
  apiProtocol: "http"
  k8sCMD: "/entrypoint.sh"
  k8sArgs: "[]"
  proxyPort: "8000"
  apiPort: "8001"
  appPort: "8081"
  envVolumeName: "coding-shared-vol"
  readinessprobe:
    failurethreshold: 1
    initialdelayseconds: 35
    periodseconds: 60
    timeoutseconds: 1
  resources:
    requests:
      memory: "256M"
      cpu: "0.5"
    limits:
      memory: "256M"
      cpu: "0.5"
  securityContext:
    fsGroup: 0

jupyterNoteBook:
# You can overwrite the Docker registry prefix rapidminer/ if you have your own repository; change it to the FQDN of your internal registry
# repoName: "<registry.example.com/> or <customedockerhubreponame/>"
  imageName: "rapidminer-jupyter_notebook"
# You can overwrite the mainVersion value for this component
# version: "2024.1.0"
  memLimit: "3G"
  cpuLimit: "100"
  ssoUidKey: "X_NB_UID"
  ssoGidKey: "X_NB_GID"
  ssoCustomBindMountsKey: "X_NB_CUSTOM_BIND_MOUNTS"
  customBindMounts: ""
  storageAccessMode: "ReadWriteOnce"
  storageSize: "5Gi"
# You can overwrite the defaultstorageClass value for this component
# storageClass: "<STORAGECLASS-PLACEHOLDER_RWO>"
# For Kubernetes environments, imagePullAtStartup must be false
  imagePullAtStartup: "False"
#  nodeSelector:
#    key: "<NODE-LABEL-1-NAME-PLACEHOLDER>"
#    value: "<NODE-LABEL-1-VALUE-PLACEHOLDER>"
  nodeSelector: {}

grafanaProxy:
# Sample POD annotation
  podAnnotations:
#   backup.velero.io/backup-volumes: proxy-pv
# You can overwrite the Docker registry prefix rapidminer/ if you have your own repository; change it to the FQDN of your internal registry
# repoName: "<registry.example.com/> or <customedockerhubreponame/>"
  protocol: "http"
  serviceName: "grafana-proxy-svc"
  port: "5000"
  imageName: "rapidminer-grafana-proxy"
# You can overwrite the mainVersion value for this component
# version: "2024.1.0"
# Possible values: NOTSET, DEBUG, INFO, WARNING, ERROR, CRITICAL
  logLevel: "INFO"
  logResponseData: "False"
  configName: "grafana-proxy-config"
  readinessprobe:
    failurethreshold: 2
    initialdelayseconds: 15
    periodseconds: 60
    timeoutseconds: 1
  resources:
    requests:
      memory: "256M"
      cpu: "0.5"
    limits:
      memory: "256M"
      cpu: "1"
  securityContext:
    fsGroup: 0

grafanaAnonProxy:
# Sample POD annotation
  podAnnotations:
#   backup.velero.io/backup-volumes: proxy-pv
# You can overwrite the Docker registry prefix rapidminer/ if you have your own repository; change it to the FQDN of your internal registry
# repoName: "<registry.example.com/> or <customedockerhubreponame/>"
  serviceName: "grafana-anonymous-proxy-svc"
  imageName: "rapidminer-grafana-proxy"
# You can overwrite the mainVersion value for this component
# version: "2024.1.0"
# Possible values: NOTSET, DEBUG, INFO, WARNING, ERROR, CRITICAL
  logLevel: "INFO"
  logResponseData: "False"
  configName: "grafana-anonymous-proxy-config"
  readinessprobe:
    failurethreshold: 2
    initialdelayseconds: 15
    periodseconds: 60
    timeoutseconds: 1
  resources:
    requests:
      memory: "256M"
      cpu: "0.5"
    limits:
      memory: "256M"
      cpu: "1"
  securityContext:
    fsGroup: 0

grafanaInit:
# You can overwrite the Docker registry prefix rapidminer/ if you have your own repository; change it to the FQDN of your internal registry
# repoName: "<registry.example.com/> or <customedockerhubreponame/>"
  imageName: "rapidminer-grafana-init"
# You can overwrite the mainVersion value for this component
# version: "2024.1.0"
  configName: "grafana-init-config"
  resources:
    requests:
      memory: "256M"
      cpu: "0.5"
    limits:
      memory: "256M"
      cpu: "0.5"
  securityContext:
    runAsUser: 472
    runAsGroup: 472
    fsGroup: 472

grafana:
# Sample POD annotation
  podAnnotations:
#   backup.velero.io/backup-volumes: proxy-pv
# You can overwrite the Docker registry prefix if you have your own repository; change it to the FQDN of your internal registry
  repoName: "grafana/"
  serviceName: "grafana-svc"
  imageName: "grafana"
# You can overwrite the mainVersion value for this component
# This is the version of the official Grafana docker image
  staticVersion: "10.4.11-ubuntu"
  configName: "grafana-config"
# You can overwrite the defaultstorageClass value for this component
# storageClass: "<STORAGECLASS-PLACEHOLDER_RWO>"
  homePvcName: "grafana-home-pvc"
  homeStorageSize: "10Gi"
  provisioningPvcName: "grafana-provisioning-pvc"
  provisioningStorageSize: "10Gi"
  readinessprobe:
    failurethreshold: 2
    initialdelayseconds: 30
    periodseconds: 60
    timeoutseconds: 1
  env:
    paths:
      data: /var/lib/grafana/aihub
      plugins: /var/lib/grafana/aihub/plugins
    auth:
      basic:
        enabled: "false"
      oauth:
        autoLogin: "true"
        enabled: "true"
        allowSignUp: "true"
        role:
          attributePath: "contains(grafana_roles[*], 'admin') && 'Admin' || contains(grafana_roles[*], 'editor') && 'Editor' || 'Viewer'"
        scopes: "email,openid"
      disableLoginForm: "true"
    server:
      serveFromSubPath: "true"
    users:
      defaultTheme: "light"
      externalManageLinkName: "false"
    panels:
      disableSanitizeHtml: "true"
    plugins:
      allowLoadingUnsignedPlugins: "rapidminer-aihub-datasource"
  proxyURLSuffix: "/grafana"
  ssoClientId: "grafana"
# Keycloak client secrets can be generated with the uuidgen command (from the uuid package) or
# with the openssl library: echo "$(openssl rand -hex 4)-$(openssl rand -hex 2)-$(openssl rand -hex 2)-$(openssl rand -hex 2)-$(openssl rand -hex 6)"
  ssoClientSecret: "<GRAFANA-OIDC-CLIENT-SECRET-PLACEHOLDER>"
  resources:
    requests:
      memory: "256M"
      cpu: "1"
    limits:
      memory: "2048M"
      cpu: "2"
  securityContext:
    fsGroup: 0

tokenTool:
# Sample POD annotation
  podAnnotations:
#   backup.velero.io/backup-volumes: proxy-pv
# You can overwrite the Docker registry prefix rapidminer/ if you have your own repository; change it to the FQDN of your internal registry
# repoName: "<registry.example.com/> or <customedockerhubreponame/>"
  serviceName: "token-tool-svc"
  imageName: "rapidminer-deployment-landing-page"
# You can overwrite the mainVersion value for this component
# version: "2024.1.0"
  configName: "token-tool-config"
# You can overwrite the defaultstorageClass value for this component
# storageClass: "<STORAGECLASS-PLACEHOLDER_RWO>"
  pvcName: "token-tool-uploaded-pvc"
  storageSize: "100M"
  proxyURLSuffix: "/get-token"
  ssoClientId: "token-tool"
# Keycloak client secrets can be generated with the uuidgen command (from the uuid package) or
# with the openssl library: echo "$(openssl rand -hex 4)-$(openssl rand -hex 2)-$(openssl rand -hex 2)-$(openssl rand -hex 2)-$(openssl rand -hex 6)"
  ssoClientSecret: "<TOKEN-TOOL-OIDC-CLIENT-SECRET-PLACEHOLDER>"
  ssoCustomScope: "openid offline_access"
  customContent: "get-token"
  debug: "false"
  readinessprobe:
    failurethreshold: 6
    initialdelayseconds: 30
    periodseconds: 60
    timeoutseconds: 1
  resources:
    requests:
      memory: "128M"
      cpu: "0.2"
    limits:
      memory: "128M"
      cpu: "0.5"
  securityContext:
    fsGroup: 0

webApiGateway:
# Sample POD annotation
  podAnnotations:
#   prometheus.io/scrape: "true"
#   prometheus.io/path: "/system/prometheus"
#   prometheus.io/port: "8078"
#   backup.velero.io/backup-volumes: proxy-pv
  # You can overwrite the Docker registry prefix rapidminer/ if you have your own repository; change it to the FQDN of your internal registry
  # repoName: "<registry.example.com/> or <customedockerhubreponame/>"
  imageName: rapidminer-webapi-gateway
  configName: "webapi-gateway-config"
  serviceName: "webapi-gateway"
  debugEnabled: False
  springProfilesActive: "default,prometheus"
  # The connect timeout in milliseconds
  springCloutGatewayHttpclientConnectTimeout: "15000"
  springCloutGatewayHttpclientResponseTimeout: "5m"
  retryBackoffEnabled: "off"
  retryBackoffInterval: "100ms"
  retryExceptions: "java.io.IOException, org.springframework.cloud.gateway.support.TimeoutException"
  retryMethods: "post"
  retryStatus: "not_found"
  retrySeries: "server_error"
  retryGroupRetries: "3"
  retryAgentRetries: "3"
  loadbalancerCleanUpInterval: "10s"
  loadbalancerRequestTimeout: "10s"
  loadbalancerRequestInterval: "10s"
  loadbalancerMetricStyle: "CPU_MEMORY"
  resources:
    requests:
      memory: "1G"
      cpu: "1"
    limits:
      memory: "4G"
      cpu: "2"
  readinessprobe:
    failurethreshold: 6
    initialdelayseconds: 30
    periodseconds: 60
    timeoutseconds: 1
  webapiRegistryUsername: "<WEBAPI-REGISTRY-USERNAME-PLACEHOLDER>"
  webapiRegistryPassword: "<WEBAPI-REGISTRY-PASSWORD-PLACEHOLDER>"
  securityContext:
    fsGroup: 0

webApiAgent:
# Sample POD annotation
  podAnnotations:
#   prometheus.io/scrape: "true"
#   prometheus.io/path: "/system/prometheus"
#   prometheus.io/port: "8067"
#   backup.velero.io/backup-volumes: proxy-pv
# You can overwrite the Docker registry prefix rapidminer/ if you have your own repository; change it to the FQDN of your internal registry
# repoName: "<registry.example.com/> or <customedockerhubreponame/>"
  imageName: "rapidminer-scoringagent"
# You can overwrite the mainVersion value for this component
# version: "2024.1.0"
  configName: "webapi-agent-config"
# You can overwrite the defaultstorageClass value for this component
# storageClass: "<STORAGECLASS-PLACEHOLDER_RWX>"
  pvcName: "webapi-agent-home-pvc"
  ssoClientId: "aihub-webapiagent"
# Keycloak client secrets can be generated with the uuidgen command (from the uuid package) or
# with the openssl library: echo "$(openssl rand -hex 4)-$(openssl rand -hex 2)-$(openssl rand -hex 2)-$(openssl rand -hex 2)-$(openssl rand -hex 6)"
  ssoClientSecret: "<SCORING-AGENT-OIDC-CLIENT-SECRET-PLACEHOLDER>"
  springProfilesActive: "webapi,prometheus"
  storageSize: "10Gi"
  replicasNumber: "2"
  cacheRepositoryClearOnCollection: "false"
  cacheRepositoryMaximumSize: "50"
  cacheRepositoryAccessExpiration: "3600000"
  cacheRepositoryCopyCachedIoObject: "true"
  corsPathPatter: ""
  corsAllowedMethods: "*"
  corsAllowedHeaders: "*"
  corsAllowedOrigins: "*"
  restContextPath: "/api"
  taskSchedulerPoolSize: "10"
  taskSchedulerThreadPriority: "5"
  executionCleanupEnabled: "false"
  executionCleanupCronExpression: "0 0 0-6 ? * * *"
  executionCleanupTimeout: "10000"
  executionCleanupWaitBetween: "1000"
  auditEnabled: "false"
  eurekaInstanceHostname: "webapi-agents"
  eurekaInstancePreferIPAddress: "true"
  licensesPvcName: "scoring-licenses-pvc"
  debugEnabled: False
  rapidminerScoringAgentOpts: "-Xmx4g"
  readinessprobe:
    failurethreshold: 6
    initialdelayseconds: 30
    periodseconds: 60
    timeoutseconds: 1
  resources:
    requests:
      memory: "1G"
      cpu: "1"
    limits:
      memory: "5G"
      cpu: "2"
  securityContext:
    fsGroup: 0

panopticonVizapp:
# Sample POD annotation
  podAnnotations:
#   backup.velero.io/backup-volumes: proxy-pv
  # You can overwrite the Docker registry prefix rapidminer/ if you have your own repository; change it to the FQDN of your internal registry
  # repoName: "<registry.example.com/> or <customedockerhubreponame/>"
  imageName: panopticonviz
  # You can overwrite the mainVersion value for this component
  # version: "2024.1.0"
  serviceName: "panopticon-vizapp"
  catalinaOpts: "-Xms900m -Xmx1900m --add-opens java.base/java.nio=ALL-UNNAMED"
  lmxUseEpoll: '1'
  file:
    upload:
      size:
        max:
          bytes: "30000000"
  license:
    detached: "false"
# Use this section if you need to set up Panopticon licensing independently of AI Hub
#    hosted: "false"
    hostedAuthorization: {}
#      username: "<LICENSE_HWU_HOSTED_AUTHORIZATION_USERNAME_PLACEHOLDER>"
#      password: "<LICENSE_HWU_HOSTED_AUTHORIZATION_PASSWORD_PLACEHOLDER>"
#      # Altair Unit Auth code
#      token: "<LICENSE_HWU_HOSTED_AUTH_CODE_PLACEHOLDER>"
    uri: {}
#      host: "<LICENSE_HWU_URI_HOST_PLACEHOLDER>"
#      port: "<LICENSE_HWU_URI_PORT_PLACEHOLDER>"
#    mode: HWU
  logger:
    level:
      file: "INFO"
  ssoClientId: "panopticon"
# Keycloak client secrets can be generated with the uuidgen command (from the uuid package) or
# with the openssl library: echo "$(openssl rand -hex 4)-$(openssl rand -hex 2)-$(openssl rand -hex 2)-$(openssl rand -hex 2)-$(openssl rand -hex 6)"
  ssoClientSecret: "<PANOPTICON-CLIENT-SECRET-PLACEHOLDER>"
  # You can overwrite these values:
  # appDataPvcName: "panopticon-vizapp-appdata-pvc"
  # appDataPvcStorageClass: "<STORAGECLASS-PLACEHOLDER_RWO>"
  # appDataPvcAccessMode: "ReadWriteOnce"
  # appDataPvcDiskSize: "4Gi"
  # sharedPvcName: "panopticon-vizapp-shared-pvc"
  # sharedPvcStorageClass: "<STORAGECLASS-PLACEHOLDER_RWO>"
  # sharedPvcAccessMode: "ReadWriteOnce"
  # sharedPvcDiskSize: 1Gi
  # logsPvcName: "panopticon-vizapp-logs-pvc"
  # logsPvcStorageClass: "<STORAGECLASS-PLACEHOLDER_RWO>"
  # logsPvcAccessMode: "ReadWriteOnce"
  # logsPvcDiskSize: 4Gi
  # licensePvcName: "panopticon-vizapp-license-pvc"
  # licensePvcStorageClass: "<STORAGECLASS-PLACEHOLDER_RWO>"
  # licensePvcAccessMode: "ReadWriteOnce"
  # licensePvcDiskSize: 100Mi
  resources:
    requests:
      cpu: "1"
      memory: 1Gi
    limits:
      cpu: "2"
      memory: 2Gi
  securityContext:
    fsGroup: 0

panopticonVizappPython:
# Sample POD annotation
  podAnnotations:
#   backup.velero.io/backup-volumes: proxy-pv
  # You can overwrite the Docker registry prefix rapidminer/ if you have your own repository; change it to the FQDN of your internal registry
  # repoName: "<registry.example.com/> or <customedockerhubreponame/>"
  imageName: panopticon-pyserve
  # You can overwrite the mainVersion value for this component
  # version: "2024.1.0"
  serviceName: "panopticon-vizapp-python"
  #xsmall
  # You can overwrite these values:
  # storageClass: "<STORAGECLASS-PLACEHOLDER_RWX>"
  # pvcName: ""
  # pvcAccessMode: [ ReadWriteMany, ReadWriteOnce ]
  # diskSize: 500Mi
  resources:
    requests:
      cpu: "1"
      memory: 2Gi
    limits:
      cpu: "1"
      memory: 2Gi
  securityContext:
    fsGroup: 0

panopticonRserve:
# Sample POD annotation
  podAnnotations:
#   backup.velero.io/backup-volumes: proxy-pv
  # You can overwrite the Docker registry prefix rapidminer/ if you have your own repository; change it to the FQDN of your internal registry
  # repoName: "<registry.example.com/> or <customedockerhubreponame/>"
  imageName: panopticon-rserve
  # You can overwrite the mainVersion value for this component
  # version: "2024.1.0"
  serviceName: "panopticon-rserve"
  # You can overwrite these values:
  # storageClass: "<STORAGECLASS-PLACEHOLDER_RWO>"
  # pvcName: "panopticon-rserve-pvc"
  # diskSize: 500Mi
  resources:
    requests:
      cpu: "100m"
      memory: 250Mi
    limits:
      cpu: "500m"
      memory: 500Mi
  securityContext:
    fsGroup: 0

panopticonMonetDB:
  securityContext:
    fsGroup: 0
# Sample POD annotation
  podAnnotations:
#   backup.velero.io/backup-volumes: proxy-pv
  # You can overwrite the Docker registry prefix rapidminer/ if you have your own repository; change it to the FQDN of your internal registry
  # repoName: "<registry.example.com/> or <customedockerhubreponame/>"
  imageName: panopticon-monetdb
  deploy: true
  # You can overwrite the mainVersion value for this component
  # version: "2024.1.0"
  serviceName: "panopticon-monetdb"
  adminPass: "<ADMIN_PASSWORD_PLACEHOLDER>"
  # You can overwrite these values:
  # storageClass: "<STORAGECLASS-PLACEHOLDER_RWO>"
  # pvcName: "panopticon-monetdb-pvc"
  # diskSize: 4Gi
  ## The resources section is not compatible with MonetDB; it complains about memory issues in the logs.
  resources:
    requests:
      cpu: "750m"
      memory: 1Gi
    limits:
      cpu: "1500m"
      memory: 2Gi