Docker-compose deployment
Starting with Altair AI Hub version 2025.0, AI Hub no longer supports plain HTTP. Hence, you must obtain a secure certificate from a trusted Certificate Authority, either a public issuer or a corporate CA that is trusted by all devices that have access to the deployment.
Alternatively, you can use Let’s Encrypt, a free, automated, and open certificate authority (CA), run for the public’s benefit. It is a service provided by the Internet Security Research Group (ISRG).
The letsencrypt Docker image provided with AI Hub contains the Certbot application, the official client for https://letsencrypt.org/.
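If you plan to use it, the production template (shown later in this document) already exposes the related settings in the .env file; a minimal sketch, using the template's own variable names and default values:

# keep the letsencrypt profile in COMPOSE_PROFILES (it is part of the default set)
ALLOW_LETSENCRYPT=true                     # proxy-side switch from the HTTPS settings section
WEBMASTER_MAIL=operations@sampleorg.com    # contact address passed to the letsencrypt service
PUBLIC_DOMAIN=platform.rapidminer.com      # passed to the letsencrypt service as DOMAIN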
This document will help you to deploy Altair AI Hub on a single host. For multi-host deployments, see Kubernetes deployment with Helm.
To help deliver docker images to an air-gapped environment, see Altair AI Hub docker images.
To deploy Altair AI Hub with docker compose:
- [Download] the production template, and follow the instructions.
All versions: [2025.0.0] [2024.1.1] [2024.1.0] [2024.0.3] [2024.0.1] [2024.0.0] [10.3.2] [10.3.1] [10.3.0] [10.2.0] [10.1.3]
Table of contents
- System requirements
- Upgrade notes
- Profiles
- Instructions
- PUBLIC_URL
- Panopticon Licensing
- Post install steps
- The environment file (.env)
- The definition file (docker-compose.yml)
- Change the default port
- Health checks
System requirements
Rootless Docker was introduced in Docker Engine version 19.03 as an experimental feature, and graduated from that status in version 20.10.
Hence, it is recommended that you use version >= 20.10 of the Docker Engine.
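One quick way to check the Engine version installed on a host:

docker version --format '{{.Server.Version}}'
# should print 20.10 or newer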
For details related to the operating system, see the distribution-specific hints. Note that CentOS 6/7 are not supported, because unprivileged user namespaces are not supported in Linux kernel versions < 3.19.
Minimum recommended hardware configuration
Note that our Docker images are built on the x86-64 architecture, also known as linux/amd64.
The amount of memory needed depends heavily on the amount of data that will be processed by Altair AI Hub. By themselves, the services can run with as little as 16 GB. However, in production environments, we recommend 32GB or more depending on user data, in order to provide users with enough capacity to analyze data from realistic use cases.
Each virtual or physical machine should at least have:
- Quad core
- 32GB RAM
- >30GB free disk space
Upgrade notes
When upgrading from version 10.2, the value of PROXY_DATA_UPLOAD_LIMIT needs to be modified: append the B suffix to the configuration value, so that the units read GB, MB, and so on.
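For example, assuming a hypothetical 25-gigabyte limit carried over from a 10.2 deployment:

# before (10.2)
PROXY_DATA_UPLOAD_LIMIT=25G
# after (current versions require the B suffix)
PROXY_DATA_UPLOAD_LIMIT=25GB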
Profiles
If you only want a subset of the features provided by Altair AI Hub, you can use the profiles feature of docker compose to pick and choose from the following set:
Profile | Description |
---|---|
aihub-activemq | ActiveMQ message broker |
aihub-backend | AI Hub backend |
aihub-frontend | AI Hub frontend |
aihub-job-agent | Job Agent |
altair-license | Licensing |
ces | Coding environment storage |
deployment-init | Deployment initialization |
grafana | Dashboards |
jupyter | JupyterHub |
keycloak | Keycloak |
landing-page | Landing page |
letsencrypt | Let's Encrypt client |
panopticon | Panopticon |
platform-admin | Platform Admin |
proxy | Proxy |
scoring-agent | Scoring Agent |
token-tool | Token generator |
In the environment file, edit the variable COMPOSE_PROFILES to choose your subset. Note that the value of COMPOSE_PROFILES is a comma-separated list with no spaces.
# Maximum set
# COMPOSE_PROFILES=deployment-init,proxy,keycloak,altair-license,landing-page,aihub-frontend,aihub-backend,aihub-activemq,aihub-job-agent,jupyter,grafana,scoring-agent,platform-admin,ces,token-tool,letsencrypt,panopticon,aihub-webapi-agent-1,aihub-webapi-agent-2,aihub-webapi-gateway
# Minimum set
# COMPOSE_PROFILES=deployment-init,proxy,keycloak,altair-license,landing-page,aihub-frontend,aihub-backend,aihub-activemq,aihub-job-agent
# Default set
COMPOSE_PROFILES=deployment-init,proxy,keycloak,altair-license,landing-page,aihub-frontend,aihub-backend,aihub-activemq,aihub-job-agent,jupyter,grafana,scoring-agent,platform-admin,ces,token-tool,letsencrypt,panopticon,aihub-webapi-agent-1,aihub-webapi-gateway
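For example, a deployment that uses the legacy RapidMiner license should disable altair-license (as noted in the .env comments); a sketch derived from the minimum set above:

# Minimum set without Altair Unit licensing (legacy RapidMiner license)
COMPOSE_PROFILES=deployment-init,proxy,keycloak,landing-page,aihub-frontend,aihub-backend,aihub-activemq,aihub-job-agent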
Instructions
To deploy this template, take the following steps.
If you have not yet done so, install Docker.
Download the ZIP file. Unzip and examine the contents:
- .env (note that because of the preceding dot, this file is usually hidden)
- docker-compose.yml
- panopticon folder
- ssl folder
- numerous READMEs
(optional) By default, Altair AI Hub will start with the set of services identified in the COMPOSE_PROFILES variable, as discussed above. You can choose a different set of profiles by setting the COMPOSE_PROFILES variable in the .env file.
If you are using the traditional RapidMiner License, you should disable the altair-license profile.

As discussed in detail below, set the following variables in the .env file:

- PUBLIC_DOMAIN, PUBLIC_PROTOCOL, PUBLIC_PORT
- PUBLIC_URL
- SSO_PUBLIC_DOMAIN
- SSO_PUBLIC_URL
(Altair Units license only) Skip to the next step if you are using RapidMiner licensing. By default, AI Hub uses Altair Unit licensing. Make sure that altair-license is enabled in your profiles.

(RapidMiner license only) Skip to the next step if you are using Altair Units licensing. In the .env file, set the following variable with a copy of your license from my.rapidminer.com: LICENSE. With SCORING_AGENT_ENABLE_SERVER_LICENSE=true, the AI Hub license will also be used for the bundled Scoring Agent. The Scoring Agent Status is visible after logging in, via Platform Administration.

(Panopticon license only) If you are not using Panopticon, skip to the next step.
If you are using Panopticon, make sure that panopticon is enabled in your profiles. See the detailed instructions for Panopticon licensing below.

Within the .env file, set additional frequently used configuration values:

- The initial admin password can be set using the variable KEYCLOAK_PASSWORD (default: "changeit").
- Replace AUTH_SECRET (an internal authentication encryption key) and BROKER_ACTIVEMQ_PASSWORD with any base64-encoded string.
- The secret key JUPYTERHUB_CRYPT_KEY is used to encrypt user data in the JupyterHub DB. We propose changing the default value to a random string, generated for example with openssl rand -hex 32.

# echo $RANDOM | md5sum | head -c 20; echo | base64;
AUTH_SECRET="<AUTH-SECRET-PLACEHOLDER>"
BROKER_ACTIVEMQ_PASSWORD="<SERVER-AMQ-PASS-PLACEHOLDER>"
# Jupyterhub crypt key can be generated with the command: openssl rand -hex 32
JUPYTERHUB_CRYPT_KEY="<JUPYTERHUB-CRYPT-KEY-PLACEHOLDER>"
Transfer the contents of the ZIP file, with URLs and licenses configured, to the server host, the machine where you installed Docker.
Connect to the server host, and change directory to the folder containing those files. Please make sure the .env file has the following permissions:

sudo chmod a+rw .env
If SSO configuration is not disabled (this is the case by default), then the platform deployment needs to be initialized before the first startup. In the directory containing docker-compose.yml, type:

docker compose up -d deployment-init
If immediately afterwards you type docker compose logs -f deployment-init, you can observe the initialization taking place. You will know that you are ready to execute the next step when you see the following text printed repeatedly to the screen, typically after 1-2 minutes:
[DEPLOYMENT INIT] Successfully finished.
Alternatively, if you observe the following error message:
[RM INIT] Starting...
[RM INIT] Starting job /rapidminer/provision/tasks/01_check_permissions.sh
touch: cannot touch '/tmp/ssl/.test_permission': Permission denied
Permission denied on file/directory ssl/ !
Please make sure about good permissions of these files/dirs:
 - .env : it should be writable by anyone (666, or -rw-rw-rw-)
 - ssl  : it should be writable by anyone (777, or drwxrwxrwx)
make sure to set the appropriate permissions on the ssl directory:

sudo chown -R 2011:0 ssl/
sudo chmod -R ug+w ssl/
sudo chmod -R o-rwx ssl/
before reissuing the initial command:
docker compose up -d deployment-init
Finally, start the stack by running the command:
docker compose up -d
Again, you can observe the progress of startup with the command:
docker compose logs -f deployment-init
The service deployment-init will exit without error when complete.
If the Docker images are not available on the host, they will be automatically downloaded from the Docker Hub.
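At any point you can check the state of the stack with the usual Docker Compose commands, for example:

docker compose ps                      # list services and their health status
docker compose logs -f aihub-backend   # follow the logs of a single service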
PUBLIC_URL
The deployed stack needs to have a valid public URL, both for internal communication and so that external clients (like Altair AI Studio and a browser) can connect to it. In the .env file, before first startup, set the values of the environment variables PUBLIC_URL and SSO_PUBLIC_URL to this public URL.
- The values http://localhost and http://127.0.0.1 are not supported, because this URL will also be used for internal container-to-container communication between our services.
- If deploying on a single host, use the host's public IP address, such as http://192.168.1.101, or a publicly resolvable hostname that resolves to this IP address, like http://platform.rapidminer.com.
- If the deployment cannot listen on the default HTTP and HTTPS ports (80 and 443), then read Change the default port.
- It is highly preferred to use HTTPS for the connection. In this case the PUBLIC_URL, SSO_PUBLIC_URL, PUBLIC_PROTOCOL and WEBAPI_REGISTRY_PROTOCOL variables should be configured using the https:// prefix, and the certificate chain and private key files should be provided in PEM format in the ssl sub-folder, using the filenames certificate.crt and private.key. The default filenames can be changed using the environment variables in the Proxy section of the .env file. Make sure to set the permissions of the ssl directory as indicated above in the final point of the instructions. Also set the PUBLIC_PORT variable to 443, which is used by WebAPI. A sketch of such a configuration follows this list.
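A minimal sketch of the relevant .env values for an HTTPS deployment on the default port, assuming the hypothetical hostname aihub.example.com:

PUBLIC_PROTOCOL=https
PUBLIC_DOMAIN=aihub.example.com
PUBLIC_PORT=443
PUBLIC_URL=${PUBLIC_PROTOCOL}://${PUBLIC_DOMAIN}
SSO_PUBLIC_PROTOCOL=https
SSO_PUBLIC_DOMAIN=aihub.example.com
SSO_PUBLIC_PORT=443
SSO_PUBLIC_URL=${SSO_PUBLIC_PROTOCOL}://${SSO_PUBLIC_DOMAIN}
# certificate chain and private key in PEM format, placed in the ssl sub-folder:
#   ssl/certificate.crt
#   ssl/private.key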
Once the deployment is running, the configured reverse proxy listens by default on the standard HTTP (80) port, and also on the HTTPS (443) port if an HTTPS certificate is configured.
The initial login credentials are set in the .env file (by the variables KEYCLOAK_USER and KEYCLOAK_PASSWORD). By default you can log in with the username "admin" and the password "changeit".
From the landing page at PUBLIC_URL, the full range of services of Altair AI Hub is available.
Panopticon Licensing
The deployment-init phase tries to derive the licensing of Panopticon from the licensing of Altair AI Hub.
The following table shows the Panopticon licensing modes and their Altair AI Hub equivalents:
Altair AI Hub licensing | Panopticon Licensing |
---|---|
Legacy RapidMiner license | License file |
ALTAIR_UNIT + altair_one | hosted HWU |
ALTAIR_UNIT + on_prem | not hosted HWU |
Altair Unit Hosted (Altair One)
When using LICENSE_MODE=ALTAIR_UNIT and LICENSE_PROXY_MODE=altair_one licensing, the following settings will be copied over from Altair AI Hub:
(This is the mapping from the .env file to the Panopticon.properties file)
.env | Panopticon.properties |
---|---|
LICENSE_PROXY_MODE=altair_one | license.hwu.hosted=true |
LICENSE_UNIT_MANAGER_USER_NAME | license.hwu.hosted.authorization.username |
LICENSE_UNIT_MANAGER_PASSWORD | license.hwu.hosted.authorization.password |
LICENSE_UNIT_MANAGER_AUTH_CODE | license.hwu.hosted.authorization.token |
Altair Unit On-Prem
When using LICENSE_MODE=ALTAIR_UNIT and LICENSE_PROXY_MODE=on_prem licensing, the following settings will be copied over from Altair AI Hub:
(This is the mapping from the .env file to the Panopticon.properties file)
.env | Panopticon.properties |
---|---|
LICENSE_PROXY_MODE=on_prem | license.hwu.hosted=false |
ALTAIR_LICENSE_PATH | license.hwu.uri |
In this case you don't need any manual setup; you can start up the deployment as usual.
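For illustration, a sketch of the resulting Panopticon.properties entries in this mode, assuming a hypothetical Altair License Manager endpoint 6200@licsrv.example.com set in ALTAIR_LICENSE_PATH (port@host format):

license.hwu.hosted=false
license.hwu.uri=6200@licsrv.example.com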
RapidMiner legacy and Panopticon file-based licensing
When using LICENSE_MODE=RAPIDMINER, you need to provide the LICENSE variable containing your legacy RapidMiner license in the .env file and also provide your Panopticon license, using the filename PanopticonLicense.xml, in the panopticon/AppData folder.
In this case you don't need any manual setup; you can start up the deployment as usual.
Other options
If you want to use different licensing for Panopticon and Altair AI Hub, set the PANOPTICON_DETACHED_LICENSE variable to true.

In this case you can configure Panopticon separately, and there is no license-related mapping from the .env file. Although the Panopticon licensing configuration will be independent from Altair AI Hub, there are several settings that are automatically set up by deployment-init (for example, the Keycloak configuration).
To start up Panopticon:
- Run the deployment-init profile as you would normally.
- Edit the newly created panopticon/AppData/Panopticon_overide.properties file according to the official Panopticon documentation.
- Start the rest of the platform.
Post install steps
Docker containers may create lots of logs, which can fill up the host's disk. The Docker logging settings page explains how to configure the Docker Engine to rotate the logs or to use a remote log server.
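For reference, a minimal /etc/docker/daemon.json sketch that rotates container logs with the default json-file driver (restart the Docker daemon after editing; see the Docker logging settings page for the full set of options):

{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}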
The environment file (.env)
# ############################################ # # Global parameters # # ############################################ # Public domain of the deployment PUBLIC_PROTOCOL=https PUBLIC_DOMAIN=platform.rapidminer.com PUBLIC_PORT=443 # Public URL of the deployment that will be used for external access (Public domain + protocol + port) PUBLIC_URL=${PUBLIC_PROTOCOL}://${PUBLIC_DOMAIN} # If you run your deployment on a non-standard port, it should be added as well (HTTP_PORT and HTTPS_PORT shall be set too) # PUBLIC_URL=${PUBLIC_PROTOCOL}://${PUBLIC_DOMAIN}:${PUBLIC_PORT} # Public domain of the SSO endpoint that will be used for external access. In most cases it should be the same as the PUBLIC_DOMAIN SSO_PUBLIC_PROTOCOL=https SSO_PUBLIC_DOMAIN=platform.rapidminer.com SSO_PUBLIC_PORT=443 # Public URL of the SSO endpoint that will be used for external access. In most cases it should be the same as the PUBLIC_URL SSO_PUBLIC_URL=${SSO_PUBLIC_PROTOCOL}://${SSO_PUBLIC_DOMAIN} # If you run your deployment on a non-standard port, it should be added as well (HTTP_PORT and HTTPS_PORT shall be set too) # SSO_PUBLIC_URL=${SSO_PUBLIC_PROTOCOL}://${SSO_PUBLIC_DOMAIN}:${SSO_PUBLIC_PORT} # SSO default parameters SSO_IDP_REALM=master # Valid values are 'all', 'external' and 'none'. SSO_SSL_REQUIRED=none WEBMASTER_MAIL=operations@sampleorg.com # Enable/disable the service build into the RapidMiner cloud images, that updates the PUBLIC_URL and SSO_PUBLIC_URL variables to the new dynamic cloud hostname/IP address AUTOMATIC_PUBLIC_URL_UPDATE_FOR_CLOUD_IMAGES=false # Enable/disable the Legacy BASIC authentication support for REST endpoints, like webservices. (lowercase true/false) LEGACY_REST_BASIC_AUTH_ENABLED=false # Timezone setting TZ=UTC # License mode for the platform # Supported modes are 'ALTAIR_UNIT' and 'RAPIDMINER' LICENSE_MODE=ALTAIR_UNIT # Legacy RapidMiner License # Please provide the LICENSE variable only if you have a legacy license. 
LICENSE= # Profiles # A coma separated list of active profiles # For deployments with legacy licensing please disable altair-license in your profile # Maximum set # COMPOSE_PROFILES=deployment-init,proxy,keycloak,altair-license,landing-page,aihub-frontend,aihub-backend,aihub-activemq,aihub-job-agent,jupyter,grafana,scoring-agent,platform-admin,ces,token-tool,letsencrypt,panopticon,aihub-webapi-agent-1,aihub-webapi-agent-2,aihub-webapi-gateway # Minimum set # COMPOSE_PROFILES=deployment-init,proxy,keycloak,altair-license,landing-page,aihub-frontend,aihub-backend,aihub-activemq,aihub-job-agent # Default set COMPOSE_PROFILES=deployment-init,proxy,keycloak,altair-license,landing-page,aihub-frontend,aihub-backend,aihub-activemq,aihub-job-agent,jupyter,grafana,scoring-agent,platform-admin,ces,token-tool,letsencrypt,panopticon,aihub-webapi-agent-1,aihub-webapi-gateway # Docker-compose timeout setting COMPOSE_HTTP_TIMEOUT=600 # ############################################ # # Deployment parameters # # ############################################ # Prefix to use for docker registry REGISTRY=rapidminer/ # Version of the Init container INIT_VERSION=2025.0.0 # Enable configuring server settings for Python Scripting extension INIT_SHARED_CONDA_SETTINGS=true # ############################################ # # Proxy # # ############################################ PROXY_VERSION=2025.0.0 # Deprecated, please use HTTP_PORT and HTTPS_PORT UNPRIVILEGED_PORTS=false # Ports nginx service inside the container is listening # These ports shall match with the public ports mapped on the docker host HTTP_PORT=80 HTTPS_PORT=443 PROXY_DATA_UPLOAD_LIMIT=25GB # Backends AIHUB_FRONTEND=http://aihub-frontend AIHUB_BACKEND=http://aihub-backend:8080 WEBAPI_GATEWAY_BACKEND=http://webapi-gateway:8099 GRAFANA_BACKEND=http://grafana:3000/ JUPYTERHUB_BACKEND=http://jupyterhub:8000/ KEYCLOAK_BACKEND=http://keycloak:8080 KIBANA_BACKEND=http://rm-kibana:5601 LANDING_BACKEND=http://landing-page:1080/ LETSENCRYPT_BACKEND=http://letsencrypt:1084/ METRICS_BACKEND=http://prometheus:9090/ PLATFORM_ADMIN_BACKEND=http://platform-admin:1082/ SCORING_AGENT_BACKEND=http://scoring-agent:8090/ SCORING_AGENT_WEBUI_BACKEND=http://platform-admin:1082/ STANDPY_BACKEND=http://standpy-router/ TOKEN_BACKEND=http://token-tool:1080/ PANOPTICON_BACKEND=http://panopticon-vizapp:8080 # Backend suffixes GRAFANA_URL_SUFFIX=/grafana JUPYTERHUB_URL_SUFFIX=/jupyter/ KIBANA_URL_SUFFIX=/kibana METRICS_URL_SUFFIX=/metrics PLATFORM_ADMIN_URL_SUFFIX=/platform-admin SCORING_AGENT_URL_SUFFIX=/rts SCORING_AGENT_WEBUI_URL_SUFFIX=/rts-admin STANDPY_URL_SUFFIX=/standpy TOKEN_TOOL_URL_SUFFIX=/get-token # Default Basic Auth Accesses PLATFORM_ADMIN_ENVIRONMENT_EXPORT_AUTH_BASIC_USER=foobar PLATFORM_ADMIN_ENVIRONMENT_EXPORT_AUTH_BASIC_PASSWORD=secret METRICS_AUTH_BASIC_USER=admin METRICS_AUTH_BASIC_PASS=changeit LOGS_AUTH_BASIC_USER=admin LOGS_AUTH_BASIC_PASS=changeit SCORING_AGENT_BASIC_AUTH=true # Change these when you want to use non-default pair to login SCORING_AGENT_ADMIN_USER=admin SCORING_AGENT_ADMIN_PASSWORD=changeit # HTTPS settings ALLOW_LETSENCRYPT=true HTTPS_CRT_PATH=/etc/nginx/ssl/certificate.crt HTTPS_KEY_PATH=/etc/nginx/ssl/private.key HTTPS_KEY_PASSWORD_FILE_PATH=/etc/nginx/ssl/password.txt HTTPS_DH_PATH=/etc/nginx/ssl/dhparam.pem ACCESS_CONTROL_ALLOW_ORIGIN_GENERAL=${PUBLIC_URL} ACCESS_CONTROL_ALLOW_ORIGIN_WEBAPI= ACCESS_CONTROL_ALLOW_ORIGIN_RTS= ACCESS_CONTROL_ALLOW_ORIGIN_KEYCLOAK= # Improved security value #CONTENT_SECURITY_POLICY="default-src 
'self';script-src 'self' 'unsafe-inline' 'unsafe-eval';style-src 'self' 'unsafe-inline';img-src 'self' data:;connect-src 'self';frame-src 'self';font-src 'self';media-src 'self';object-src 'none';manifest-src 'self';worker-src blob: 'self';form-action 'self';frame-ancestors 'self';" # Backward compatible value CONTENT_SECURITY_POLICY="worker-src blob: 'self' 'unsafe-inline' 'unsafe-eval'; default-src https: data: 'self' 'unsafe-inline' 'unsafe-eval';" CUSTOM_CA_CERTS_FILE=placeholder.crt WAIT_FOR_DHPARAM=true DEBUG_CONF_INIT=false # ############################################ # # KeyCloak (SSO) # # ############################################ # Keycloak container version KEYCLOAK_VERSION=2025.0.0 # Keycloak database parameters KEYCLOAK_POSTGRES_VERSION=2025.0.0 KEYCLOAK_DBSCHEMA=kcdb KEYCLOAK_DBUSER=kcdbuser KEYCLOAK_DBPASS=changeit KEYCLOAK_POSTGRES_INITDB_ARGS="--encoding UTF8 --locale=C /var/lib/postgresql/data" # Default platform admin user credentials KEYCLOAK_USER=admin KEYCLOAK_PASSWORD=changeit KC_FEATURES=token-exchange KC_HOSTNAME_STRICT="false" KC_HOSTNAME_STRICT_BACKCHANNEL="false" KC_HOSTNAME_STRICT_HTTPS="false" KC_LOG_LEVEL=info KC_HOSTNAME_BACKCHANNEL_DYNAMIC="false" KC_PROXY_HEADERS=xforwarded KC_HTTP_ENABLED="true" KC_HEALTH_ENABLED="true" # ############################################ # # License Proxy # # ############################################ SKIP_LICENSE_CHECK=false LICENSE_PROXY_PROFILES_ACTIVE=default,prometheus LICENSE_PROXY_VERSION=2025.0.0 # License Proxy url with protocol and port LICENSE_PROXY_INTERNAL_URL=http://license-proxy:9898 # Unique machine id of the deployment # may be generated with the command: echo "$(openssl rand -hex 4)-$(openssl rand -hex 2)-$(openssl rand -hex 2)-$(openssl rand -hex 2)-$(openssl rand -hex 6)" LICENSE_AGENT_MACHINE_ID="00000000-0000-0000-0000-000000000000" # supported modes are 'on_prem' and 'altair_one' LICENSE_PROXY_MODE=on_prem # ## settings for 'on_prem' mode ### # Altair License Manager path for on-prem mode pointing to an Altair Lincense Manager endpoint in format of port@host # must be set if mode is 'on_prem' ALTAIR_LICENSE_PATH= # ## # ## settings for 'altair_one' mode ### # Authentication type while connecting to the license server # possible values are 'credentials', 'auth_code' and 'static_token' LICENSE_UNIT_MANAGER_AUTHENTICATION_TYPE=credentials # ##### settings for 'credentials' authentication type # Altair One username. Must be set if the authentication type is 'credentials' and License Proxy mode is 'altair_one' LICENSE_UNIT_MANAGER_USER_NAME= # Altair One password. Must be set if the authentication type is 'credentials' and License Proxy mode is 'altair_one' LICENSE_UNIT_MANAGER_PASSWORD= # When mode is 'altair_one', resets any stored auth code when 'credentials' already persisted a valid auth token LICENSE_UNIT_MANAGER_RESET_AUTH_TOKEN=false # ##### # ##### settings for 'static_token' authentication type # License Server access token. Must be set if the authentication type is 'static_token' and License Proxy mode is 'altair_one' LICENSE_UNIT_MANAGER_TOKEN= # ##### LICENSE_UNIT_MANAGER_AUTH_CODE= # ## # ############################################# # # CPU Constraints # # ############################################# # # By default, AI Hub and Panopticon will use all CPU cores of the host system # and draw Altair Units accordingly (if using Altair licensing). 
To limit the # CPU usage and unit draw, you can constrain AI Hub and Panopticon to a subset # of CPU cores, e.g., to the first 8 logical cores of the host system. # # For more information on this constraint, see: # https://docs.docker.com/engine/reference/run/#cpuset-constraint # # Please note, that by default these settings are commented in the docker-compose.yml # After you set the values here, please uncomment the relevant properties in the docker-compose.yml as well. # # Example: Constrain CPU usage to the first 8 logical cores (0-7): #AIHUB_BACKEND_CPUSET=0-7 #JOBAGENT_CPUSET=0-7 #WEBAPI_AGENT_CPUSET_1=0-7 #WEBAPI_AGENT_CPUSET_2=0-7 #SCORING_AGENT_CPUSET=0-7 #PANOPTICON_VIZAPP_CPUSET=0-7 #PANOPTICON_PYTHON_CPUSET=0-7 #PANOPTICON_RSERVE_CPUSET=0-7 # ############################################ # # Rapidminer AiHub # # ############################################ AIHUB_BACKEND_PROFILES_ACTIVE=default,prometheus AIHUB_FRONTEND_VERSION=2025.0.0 AIHUB_BACKEND_VERSION=2025.0.0 AIHUB_POSTGRES_VERSION=2025.0.0 AIHUB_DBHOST=aihub-postgresql AIHUB_DBSCHEMA=aihub-db AIHUB_DBUSER=aihub-db-user AIHUB_DBPASS=changeit AIHUB_POSTGRES_INITDB_ARGS="--encoding UTF8 --locale=C /var/lib/postgresql/data" AIHUB_FRONTEND_SSO_CLIENT_ID=aihub-frontend AIHUB_BACKEND_SSO_CLIENT_ID=aihub-backend AIHUB_BACKEND_SSO_CLIENT_SECRET= AIHUB_BACKEND_HOSTNAME=aihub-backend AIHUB_BACKEND_PORT=8080 AIHUB_BACKEND_INTERNAL_URL=http://aihub-backend:8080 # AiHub and JA authenticates using this shared secret, which shall be a random string in base64 encoded format # echo $RANDOM | md5sum | head -c 20; echo | base64; AUTH_SECRET="" RAPIDMINER_LOAD_USER_CERTIFICATES=true # SMTP settings # These settings are commented in docker-compose.yml #SPRING_MAIL_HOST= #SPRING_MAIL_PORT= #SPRING_MAIL_PROPERTIES_MAIL_SMTP_AUTH=false #SPRING_MAIL_PROPERTIES_MAIL_SMTP_STARTTLS_ENABLE=false #LOGIN-USER-TO-SMTP-SERVER #SPRING_MAIL_USERNAME= #LOGIN-PASSWORD-TO-SMTP-SERVER #SPRING_MAIL_PASSWORD= # Optional parameters # Address,where reports is send to #REPORTING_ERROR_MAIL_TO= # Email subject #REPORTING_ERROR_MAIL_SUBJECT_PREFIX= # Logical sender address #REPORTING_ERROR_MAIL_FROM_ADDRESS= # Logical sender name #REPORTING_ERROR_MAIL_FROM_NAME= # # Automatic Job Cleanup # https://docs.rapidminer.com/latest/hub/manage/job-execution-infrastructure/job-cleanup.html JOBSERVICE_SCHEDULED_ARCHIVE_JOB_CLEANUP_ENABLED=false JOBSERVICE_SCHEDULED_ARCHIVE_JOB_CLEANUP_JOB_CRON_EXPRESSION="0 0 * * * *" JOBSERVICE_SCHEDULED_ARCHIVE_JOB_CLEANUP_JOB_CONTEXT_CRON_EXPRESSION= JOBSERVICE_SCHEDULED_ARCHIVE_JOB_CLEANUP_MAX_AGE= JOBSERVICE_SCHEDULED_ARCHIVE_JOB_CLEANUP_JOB_BATCH_SIZE= JOBSERVICE_SCHEDULED_ARCHIVE_JOB_CLEANUP_JOB_CONTEXT_BATCH_SIZE= # # ############################################ # # Job Agent # # ############################################ JOBAGENT_VERSION=2025.0.0 JOBAGENT_SPRING_PROFILES_ACTIVE=default,prometheus JOBAGENT_QUEUE_ACTIVEMQ_URI=failover:(tcp://aihub-activemq:61616) JOBAGENT_CONTAINER_COUNT=2 JOBAGENT_SSO_CLIENT_ID=aihub-jobagent JOBAGENT_SSO_CLIENT_SECRET= JOBAGENT_QUEUE_JOB_REQUEST=DEFAULT JOBAGENT_CONTAINER_MEMORYLIMIT=2048 AIHUB_BACKEND_PROTOCOL=http JOBAGENT_NAME=JOBAGENT-1 JOBAGENT_CONTAINER_LOAD_USER_CERTIFICATES=true JOBAGENT_CONTAINER_JVM_CUSTOM_OPTIONS="-Drapidminer.general.timezone=${TZ}" # ############################################ # # ActiveMQ # # ############################################ ACTIVEMQ_VERSION=2025.0.0 BROKER_ACTIVEMQ_USERNAME=amq-user BROKER_ACTIVEMQ_PASSWORD=" " # ############################################ 
# # Jupyterhub # # ############################################ JUPYTERHUB_VERSION=2025.0.0 JUPYTERHUB_DBHOST=jupyterhub-db JUPYTERHUB_DBSCHEMA=jupyterhub JUPYTERHUB_DBUSER=jupyterhubdbuser JUPYTERHUB_DBPASS=changeit JUPYTERHUB_HOSTNAME=jupyterhub JUPYTERHUB_POSTGRES_INITDB_ARGS="--encoding UTF8 --locale=C /var/lib/postgresql/data" # Jupyterhub crypt key can be generated with the command: openssl rand -hex 32 JUPYTERHUB_CRYPT_KEY=" " JUPYTERHUB_DEBUG=False JUPYTERHUB_TOKEN_DEBUG=False JUPYTERHUB_PROXY_DEBUG=False JUPYTERHUB_DB_DEBUG=False JUPYTERHUB_SPAWNER_DEBUG=False JUPYTERHUB_STACK_NAME=default JUPYTERHUB_SSO_CLIENT_ID=jupyterhub JUPYTERHUB_SSO_CLIENT_SECRET= JUPYTERHUB_SPAWNER=dockerspawner JUPYTERHUB_API_PROTOCOL=http JUPYTERHUB_API_HOSTNAME=jupyterhub JUPYTERHUB_PROXY_PORT=8000 JUPYTERHUB_API_PORT=8001 JUPYTERHUB_APP_PORT=8081 # JUPYTERHUB_CUSTOM_CA_CERTS=${PWD}/ssl/deb_cacerts/ JUPYTERHUB_DOCKER_DISABLE_NOTEBOOK_IMAGE_PULL_AT_STARTUP=False # ############################################ # # Jupyter Notebook # # ############################################ JUPYTERHUB_NOTEBOOK_VERSION=2025.0.0 JUPYTERHUB_NOTEBOOK_SSO_NB_UID_KEY=X_NB_UID JUPYTERHUB_NOTEBOOK_SSO_NB_GID_KEY=X_NB_GID JUPYTERHUB_NOTEBOOK_SSO_CUSTOM_BIND_MOUNTS_KEY=X_NB_CUSTOM_BIND_MOUNTS # Content should be in json format, use quotes here instead of apostrophes # JUPYTERHUB_NOTEBOOK_CUSTOM_BIND_MOUNTS={"/usr/share/doc/apt":"/tmp/apt","/usr/share/doc/mount/":"/tmp/mount"} JUPYTERHUB_NOTEBOOK_CUSTOM_BIND_MOUNTS= JUPYTERHUB_NOTEBOOK_CPU_LIMIT=100 # Docker JUPYTERHUB_NOTEBOOK_MEM_LIMIT=3g #k8s # JUPYTERHUB_NOTEBOOK_MEM_LIMIT=3G JUPYTERHUB_NOTEBOOK_SHARED_ENV_VOLUME_NAME_DOCKERSPAWNER=coding-shared-vol # kubespawner # JUPYTERHUB_NOTEBOOK_KUBERNETES_CMD: '/entrypoint.sh' # JUPYTERHUB_NOTEBOOK_KUBERNETES_ARGS: '' # JUPYTERHUB_NOTEBOOK_KUBERNETES_NAMESPACE=rapidminer # JUPYTERHUB_NOTEBOOK_KUBERNETES_NODE_SELECTOR_NAME: 'rapidminer.node' # JUPYTERHUB_NOTEBOOK_KUBERNETES_NODE_SELECTOR_VALUE: 'notebook' # JUPYTERHUB_NOTEBOOK_HOME_KUBERNETES_STORAGE_ACCESS_MODE=ReadWriteOnce # JUPYTERHUB_NOTEBOOK_HOME_KUBERNETES_STORAGE_CAPACITY=5Gi # JUPYTERHUB_NOTEBOOK_HOME_KUBERNETES_STORAGE_CLASS=ms-ebs-us-west-2b # JUPYTERHUB_NOTEBOOK_IMAGE_PULL_SECRET=rm-docker-login-secret # JUPYTERHUB_NOTEBOOK_SHARED_ENV_VOLUME_NAME_KUBESPAWNER=python-envs-pvc # JUPYTERHUB_NOTEBOOK_SHARED_ENV_VOLUME_SUBPATH_KUBESPAWNER=coding-shared # ############################################ # # Platform admin # # ############################################ PLATFORM_ADMIN_VERSION=2025.0.0 PLATFORM_ADMIN_SSO_CLIENT_ID=platform-admin PLATFORM_ADMIN_SSO_CLIENT_SECRET= PLATFORM_ADMIN_DISABLE_PYTHON=false PLATFORM_ADMIN_DISABLE_RTS=false # ############################################ # # Coding Environment Storage # # ############################################ CES_VERSION=2025.0.0 DISABLE_DEFAULT_CHANNELS=True CONDA_CHANNEL_PRIORITY=strict # ############################################ # # Real-Time Scoring Agent # # ############################################ SCORING_AGENT_SPRING_PROFILES_ACTIVE=default,prometheus SCORING_AGENT_VERSION=2025.0.0 SCORING_AGENT_CACHE_REPOSITORY_CLEAR_ON_COLLECTION=false SCORING_AGENT_CACHE_REPOSITORY_MAXIMUM_SIZE=50 # Maximum age in milliseconds of entries held in the cache SCORING_AGENT_CACHE_REPOSITORY_ACCESS_EXPIRATION=900000 SCORING_AGENT_CACHE_REPOSITORY_COPY_CACHED_IOOBJECTS=true SCORING_AGENT_CORS_PATH_PATTERN="" SCORING_AGENT_CORS_ALLOWED_METHODS="*" SCORING_AGENT_CORS_ALLOWED_HEADERS="*" SCORING_AGENT_CORS_ALLOWED_ORIGINS="*" 
SCORING_AGENT_REST_CONTEXT_PATH=/api SCORING_AGENT_TASK_SCHEDULER_POOL_SIZE=10 SCORING_AGENT_TASK_SCHEDULER_THREAD_PRIORITY=5 SCORING_AGENT_EXECUTION_CLEANUP_ENABLED=false SCORING_AGENT_EXECUTION_CLEANUP_CRON_EXPRESSION="0 0 0-6 ? * * *" SCORING_AGENT_EXECUTION_CLEANUP_TIMEOUT=10000 SCORING_AGENT_EXECUTION_CLEANUP_WAIT_BETWEEN=1000 SCORING_AGENT_AUDIT_ENABLED=false # ############################################ # # WebApi Agent # # ############################################ WEBAPI_AGENT_CACHE_REPOSITORY_CLEAR_ON_COLLECTION=false WEBAPI_AGENT_CACHE_REPOSITORY_MAXIMUM_SIZE=50 WEBAPI_AGENT_CACHE_REPOSITORY_ACCESS_EXPIRATION=3600000 WEBAPI_AGENT_CACHE_REPOSITORY_COPY_CACHED_IOOBJECTS=true WEBAPI_AGENT_CORS_PATH_PATTERN="" WEBAPI_AGENT_CORS_ALLOWED_METHODS="*" WEBAPI_AGENT_CORS_ALLOWED_HEADERS="*" WEBAPI_AGENT_CORS_ALLOWED_ORIGINS="*" WEBAPI_AGENT_REST_CONTEXT_PATH=/api WEBAPI_AGENT_TASK_SCHEDULER_POOL_SIZE=10 WEBAPI_AGENT_TASK_SCHEDULER_THREAD_PRIORITY=5 WEBAPI_AGENT_EXECUTION_CLEANUP_ENABLED=false WEBAPI_AGENT_EXECUTION_CLEANUP_CRON_EXPRESSION="0 0 0-6 ? * * *" WEBAPI_AGENT_EXECUTION_CLEANUP_TIMEOUT=10000 WEBAPI_AGENT_EXECUTION_CLEANUP_WAIT_BETWEEN=1000 WEBAPI_AGENT_AUDIT_ENABLED=false REACT_APP_WEBAPI_GATEWAY_URL=http://webapi-gateway:8099 WEBAPIAGENT_OPTS="-Xmx2g" WEBAPI_REGISTRY_USERNAME=foobar WEBAPI_REGISTRY_PASSWORD=secret WEBAPI_AIHUB_CONNECTION_PROTOCOL=http WEBAPI_AIHUB_CONNECTION_HOST=aihub-backend WEBAPI_AIHUB_CONNECTION_PORT=8080 WEBAPI_AGENT_VERSION=2025.0.0 WEBAPI_GROUP_NAME=DEFAULT WAIT_FOR_LICENSES=1 SCORING_AGENT_ENABLE_SERVER_LICENSE=true WEBAPI_AGENT_SPRING_PROFILES_ACTIVE=webapi,prometheus SCORING_AGENT_SSO_CLIENT_ID=aihub-scoringagent SCORING_AGENT_SSO_CLIENT_SECRET= WEBAPI_AGENT_SSO_CLIENT_ID=aihub-webapiagent WEBAPI_AGENT_SSO_CLIENT_SECRET= # Supported modes are 'ALTAIR_UNIT', 'RAPIDMINER' and 'ALTAIR_STANDALONE' # Uncomment this to use 'SCORING_AGENT_LICENSE_MODE' in Scoring Agent instead of LICENSE_MODE variable declared above # SCORING_AGENT_LICENSE_MODE=ALTAIR_STANDALONE SCORING_AGENT_RAPIDMINER_LOAD_USER_CERTIFICATES=true # ############################################ # # WebApi Gateway # # ############################################ WEBAPI_GATEWAY_PROFILES_ACTIVE=default,prometheus WEBAPI_GATEWAY_VERSION=2025.0.0 # The connect timeout in milliseconds WEBAPI_GATEWAY_SPRING_CLOUD_GATEWAY_HTTPCLIENT_CONNECT_TIMEOUT=15000 WEBAPI_GATEWAY_SPRING_CLOUD_GATEWAY_HTTPCLIENT_RESPONSE_TIMEOUT=5m WEBAPI_GATEWAY_RETRY_BACKOFF_ENABLED=off WEBAPI_GATEWAY_RETRY_BACKOFF_INTERVAL=100ms WEBAPI_GATEWAY_RETRY_EXCEPTIONS=java.io.IOException, org.springframework.cloud.gateway.support.TimeoutException WEBAPI_GATEWAY_RETRY_METHODS=post WEBAPI_GATEWAY_RETRY_STATUS=not_found WEBAPI_GATEWAY_RETRY_SERIES=server_error WEBAPI_GATEWAY_RETRY_GROUP_RETRIES=3 WEBAPI_GATEWAY_RETRY_AGENT_RETRIES=3 WEBAPI_GATEWAY_LOADBALANCER_CLEAN_UP_INTERVAL=10s WEBAPI_GATEWAY_LOADBALANCER_REQUEST_TIMEOUT=10s WEBAPI_GATEWAY_LOADBALANCER_REQUEST_INTERVAL=10s WEBAPI_GATEWAY_LOADBALANCER_METRIC_STYLE=CPU_MEMORY # ############################################ # # Grafana # # ############################################ # Official grafana image from: https://hub.docker.com/r/grafana/grafana/ OFFICIAL_GRAFANA_IMAGE=grafana/grafana:11.4.0-ubuntu GF_SECURITY_ANGULAR_SUPPORT_ENABLED=true # Image tag used by grafana-proxy and grafana-init GRAFANA_UTILS_VERSION=2025.0.0 GF_AUTH_GENERIC_OAUTH_SCOPES=email,openid # # Grafana Proxy # GRAFANA_PROXY_THREAD_NUMBERS=16 # Possible values: NOTSET, DEBUG, INFO, WARNING, ERROR, CRITICAL 
GRAFANA_PROXY_LOGGING_LEVEL=INFO # Comma spearated list of Scoring Agent URLs (http://scoring-agent-1:8090,https://scoring-agent-2:8888) GRAFANA_SCORING_AGENT_BACKENDS=http://scoring-agent:8090/ # Set this to 'True' to log data (eg. result from webservice) returned from GF proxy GRAFANA_PROXY_LOG_RESPONSE_DATA=False # ############################################ # # Grafana Direct (these values injected directly to Grafana) # # ############################################ GF_AUTH_GENERIC_OAUTH_AUTH_URL= GF_AUTH_GENERIC_OAUTH_TOKEN_URL= GF_AUTH_GENERIC_OAUTH_API_URL= GF_AUTH_GENERIC_OAUTH_CLIENT_SECRET= GF_AUTH_SIGNOUT_REDIRECT_URL= GF_SERVER_ROOT_URL= # ############################################ # # LetsEncrypt Client # # ############################################ LETSENCRYPT_VERSION=2025.0.0 # ############################################ # # Docker Deployment Manager # # ############################################ DDM_VERSION=2025.0.0 # ############################################ # # Landing page # # ############################################ LANDING_PAGE_VERSION=2025.0.0 LANDING_PAGE_SSO_CLIENT_ID=landing-page LANDING_PAGE_SSO_CLIENT_SECRET= LANDING_PAGE_DEBUG=false # ############################################ # # Token Tool # # ############################################ TOKEN_TOOL_SSO_CLIENT_ID=token-tool TOKEN_TOOL_SSO_CLIENT_SECRET= TOKEN_TOOL_DEBUG=false # ############################################ # # Service overrides # - true/false - false means automatic detection # # ############################################ DEPLOYED_GRAFANA=false DEPLOYED_JUPYTERHUB=false DEPLOYED_LANDINGPAGE=false DEPLOYED_PLATFORMADMIN=false DEPLOYED_SERVER=false DEPLOYED_TOKENTOOL=false DEPLOYED_PANOPTICON=false # ############################################ # # Panopticon # # ############################################ PANOPTICON_VIZAPP_VERSION=2025.0.0 PANOPTICON_VIZAPP_PYTHON_VERSION=2025.0.0 PANOPTICON_MONETDB_IMAGE_VERSION=2025.0.0 PANOPTICON_RSERVE_IMAGE_VERSION=2025.0.0 PANOPTICON_SSO_CLIENT_ID=panopticon PANOPTICON_SSO_CLIENT_SECRET= # If set to false, platform license will be used as pano licensing. # If set to true, set your panopticon licensing in the 'Panopticon_overide.properties' and 'Panopticon_overide.properties.template' files. PANOPTICON_DETACHED_LICENSE=false PANOPTICON_MONETDB_ADMIN_PASS=changeit PANOPTICON_CATALINA_OPTS='-Xms900m -Xmx1900m --add-opens java.base/java.nio=ALL-UNNAMED' PANOPTICON_LMX_USE_EPOLL='1' # A random mac address can be generated using the following command # This is required for Altair One licensing # head -n80 /dev/urandom | tr -d -c '[:digit:]A-F' | fold -w 12 | sed -E -n -e '/^.[26AE]/s/(..)/\1-/gp' |sed -e 's/-$//g' -e 's/-/:/g' -e 's/^\S\S/66/g'| head -n10 PANOPTICON_VIZAPP_CONTAINER_MAC_ADDRESS=" " PANOPTICON_FILE_UPLOAD_SIZE_MAX_BYTES=30000000
The definition file (docker-compose.yml)
Notice that you can link directly to any of the services in the docker-compose file using the service name as an ID, for example #aihub-job-agent. You can also link to the #volumes and #networks.
- #proxy
- #letsencrypt
- #keycloak-db
- #keycloak
- #deployment-init
- #license-proxy
- #aihub-postgresql
- #aihub-frontend
- #aihub-activemq
- #aihub-backend
- #aihub-job-agent
- #platform-admin
- #webapi-gateway
- #webapi-agent-1
- #webapi-agent-2
- #scoring-agent
- #jupyterhub-db
- #jupyternotebook
- #jupyterhub
- #coding-environment-storage
- #grafana-init
- #grafana
- #grafana-proxy
- #landing-page
- #token-tool
- #panopticon-vizapp
- #panopticon-vizapp-python
- #panopticon-monetdb
- #panopticon-rserve
- #volumes
- #networks
services:
proxy:
image: "${REGISTRY}rapidminer-proxy:${PROXY_VERSION}"
hostname: proxy
restart: always
environment:
- PLATFORM_ADMIN_ENVIRONMENT_EXPORT_AUTH_BASIC_USER=${PLATFORM_ADMIN_ENVIRONMENT_EXPORT_AUTH_BASIC_USER}
- PLATFORM_ADMIN_ENVIRONMENT_EXPORT_AUTH_BASIC_PASSWORD=${PLATFORM_ADMIN_ENVIRONMENT_EXPORT_AUTH_BASIC_PASSWORD}
# Deprecated, please use HTTP_PORT and HTTPS_PORT
- UNPRIVILEGED_PORTS=${UNPRIVILEGED_PORTS}
- HTTP_PORT=${HTTP_PORT}
- HTTPS_PORT=${HTTPS_PORT}
- DEPLOYMENT_PORT=${PUBLIC_PORT}
- PROXY_DATA_UPLOAD_LIMIT=${PROXY_DATA_UPLOAD_LIMIT}
- SSO_PUBLIC_URL=${SSO_PUBLIC_URL}
- PUBLIC_URL=${PUBLIC_URL}
- SSO_IDP_REALM=${SSO_IDP_REALM}
# Backends
- AIHUB_BACKEND=${AIHUB_BACKEND}
- AIHUB_FRONTEND=${AIHUB_FRONTEND}
- GRAFANA_BACKEND=${GRAFANA_BACKEND}
- JUPYTERHUB_BACKEND=${JUPYTERHUB_BACKEND}
- KEYCLOAK_BACKEND=${KEYCLOAK_BACKEND}
- KIBANA_BACKEND=${KIBANA_BACKEND}
- LANDING_BACKEND=${LANDING_BACKEND}
- LETSENCRYPT_BACKEND=${LETSENCRYPT_BACKEND}
- METRICS_BACKEND=${METRICS_BACKEND}
- PLATFORM_ADMIN_BACKEND=${PLATFORM_ADMIN_BACKEND}
- SCORING_AGENT_BACKEND=${SCORING_AGENT_BACKEND}
- SCORING_AGENT_WEBUI_BACKEND=${SCORING_AGENT_WEBUI_BACKEND}
- STANDPY_BACKEND=${STANDPY_BACKEND}
- TOKEN_BACKEND=${TOKEN_BACKEND}
- PANOPTICON_BACKEND=${PANOPTICON_BACKEND}
# Backend suffixes
- GRAFANA_URL_SUFFIX=${GRAFANA_URL_SUFFIX}
- JUPYTERHUB_URL_SUFFIX=${JUPYTERHUB_URL_SUFFIX}
- KIBANA_URL_SUFFIX=${KIBANA_URL_SUFFIX}
- METRICS_URL_SUFFIX=${METRICS_URL_SUFFIX}
- PLATFORM_ADMIN_URL_SUFFIX=${PLATFORM_ADMIN_URL_SUFFIX}
- SCORING_AGENT_URL_SUFFIX=${SCORING_AGENT_URL_SUFFIX}
- SCORING_AGENT_WEBUI_URL_SUFFIX=${SCORING_AGENT_WEBUI_URL_SUFFIX}
- STANDPY_URL_SUFFIX=${STANDPY_URL_SUFFIX}
- TOKEN_TOOL_URL_SUFFIX=${TOKEN_TOOL_URL_SUFFIX}
# Default Basic Auth Accesses
- METRICS_AUTH_BASIC_USER=${METRICS_AUTH_BASIC_USER}
- METRICS_AUTH_BASIC_PASS=${METRICS_AUTH_BASIC_PASS}
- SCORING_AGENT_BASIC_AUTH=${SCORING_AGENT_BASIC_AUTH}
- SCORING_AGENT_ADMIN_USER=${SCORING_AGENT_ADMIN_USER}
- SCORING_AGENT_ADMIN_PASSWORD=${SCORING_AGENT_ADMIN_PASSWORD}
# HTTPS settings
- ALLOW_LETSENCRYPT=${ALLOW_LETSENCRYPT}
- HTTPS_CRT_PATH=${HTTPS_CRT_PATH}
- HTTPS_KEY_PATH=${HTTPS_KEY_PATH}
- HTTPS_KEY_PASSWORD_FILE_PATH=${HTTPS_KEY_PASSWORD_FILE_PATH}
- HTTPS_DH_PATH=${HTTPS_DH_PATH}
- WAIT_FOR_DHPARAM=${WAIT_FOR_DHPARAM}
- DEBUG_CONF_INIT=${DEBUG_CONF_INIT}
- TZ=${TZ}
- ACCESS_CONTROL_ALLOW_ORIGIN_WEBAPI=${ACCESS_CONTROL_ALLOW_ORIGIN_WEBAPI}
- ACCESS_CONTROL_ALLOW_ORIGIN_KEYCLOAK=${ACCESS_CONTROL_ALLOW_ORIGIN_KEYCLOAK}
- ACCESS_CONTROL_ALLOW_ORIGIN_RTS=${ACCESS_CONTROL_ALLOW_ORIGIN_RTS}
- ACCESS_CONTROL_ALLOW_ORIGIN_GENERAL=${ACCESS_CONTROL_ALLOW_ORIGIN_GENERAL}
- CONTENT_SECURITY_POLICY=${CONTENT_SECURITY_POLICY}
ports:
- "0.0.0.0:${HTTP_PORT}:${HTTP_PORT}"
- "0.0.0.0:${HTTPS_PORT}:${HTTPS_PORT}"
networks:
platform-int-net:
aliases:
- proxy
- ${PUBLIC_DOMAIN}
jupyterhub-user-net:
aliases:
- ${PUBLIC_DOMAIN}
panopticon-net:
aliases:
- proxy
- ${PUBLIC_DOMAIN}
volumes:
- ./ssl:/etc/nginx/ssl
- platform-admin-uploaded-vol:/rapidminer/platform-admin/uploaded/
profiles:
- proxy
- deployment-init
healthcheck:
test: service nginx status
interval: 60s
timeout: 30s
retries: 5
start_period: 5s
letsencrypt:
image: "${REGISTRY}rm-letsencrypt-client:${LETSENCRYPT_VERSION}"
hostname: letsencrypt
restart: always
environment:
- PUBLIC_URL=${PUBLIC_URL}
- LETSENCRYPT_HOME=/certificates/
- DOMAIN=${PUBLIC_DOMAIN}
- WEBMASTER_MAIL=${WEBMASTER_MAIL}
- TZ=${TZ}
networks:
platform-int-net:
aliases:
- letsencrypt
volumes:
- ./ssl:/etc/letsencrypt/
profiles:
- letsencrypt
healthcheck:
test: service apache2 status
interval: 60s
timeout: 30s
retries: 5
start_period: 5s
keycloak-db:
image: "${REGISTRY}postgres-14:${KEYCLOAK_POSTGRES_VERSION}"
restart: always
hostname: keycloak-db
environment:
- POSTGRES_DB=${KEYCLOAK_DBSCHEMA}
- POSTGRES_USER=${KEYCLOAK_DBUSER}
- POSTGRES_PASSWORD=${KEYCLOAK_DBPASS}
- POSTGRES_INITDB_ARGS=${KEYCLOAK_POSTGRES_INITDB_ARGS}
- TZ=${TZ}
- PGTZ=${TZ}
volumes:
- keycloak-db-vol:/var/lib/postgresql/data
networks:
idp-db-net:
aliases:
- keycloak-db
profiles:
- keycloak
- deployment-init
healthcheck:
test: pg_isready -d ${KEYCLOAK_DBSCHEMA} -U ${KEYCLOAK_DBUSER}
interval: 60s
timeout: 30s
retries: 5
start_period: 15s
keycloak:
image: "${REGISTRY}rapidminer-keycloak:${KEYCLOAK_VERSION}"
restart: always
hostname: keycloak
environment:
- KC_DB=postgres
- KC_DB_SCHEMA=public
- KC_DB_URL_HOST=keycloak-db
- KC_DB_URL_DATABASE=${KEYCLOAK_DBSCHEMA}
- KC_DB_USERNAME=${KEYCLOAK_DBUSER}
- KC_DB_PASSWORD=${KEYCLOAK_DBPASS}
- KC_FEATURES=${KC_FEATURES}
- KC_HOSTNAME=${PUBLIC_DOMAIN}
- KC_HTTP_RELATIVE_PATH=/auth
- KC_HOSTNAME_STRICT_BACKCHANNEL=${KC_HOSTNAME_STRICT_BACKCHANNEL}
- KC_HOSTNAME_STRICT=${KC_HOSTNAME_STRICT}
- KC_HOSTNAME_STRICT_HTTPS=${KC_HOSTNAME_STRICT_HTTPS}
- KEYCLOAK_ADMIN=${KEYCLOAK_USER}
- KEYCLOAK_ADMIN_PASSWORD=${KEYCLOAK_PASSWORD}
- KC_LOG_LEVEL=${KC_LOG_LEVEL}
- KC_HTTP_ENABLED=${KC_HTTP_ENABLED}
- TZ=${TZ}
- KC_HOSTNAME_BACKCHANNEL_DYNAMIC=${KC_HOSTNAME_BACKCHANNEL_DYNAMIC}
- KC_PROXY_HEADERS=${KC_PROXY_HEADERS}
- KC_HEALTH_ENABLED=${KC_HEALTH_ENABLED}
depends_on:
proxy:
condition: service_started
keycloak-db:
condition: service_healthy
healthcheck:
test: timeout 1 bash -c 'cat < /dev/null > /dev/tcp/localhost/8080'
interval: 60s
timeout: 30s
retries: 5
start_period: 15s
networks:
panopticon-net:
platform-int-net:
aliases:
- keycloak
idp-db-net:
aliases:
- keycloak
profiles:
- keycloak
- deployment-init
deployment-init:
image: "${REGISTRY}rapidminer-deployment-init:${INIT_VERSION}"
restart: "no"
hostname: deployment-init
depends_on:
keycloak:
condition: service_healthy
aihub-postgresql:
condition: service_healthy
environment:
- CUSTOM_CA_CERTS_FILE=${CUSTOM_CA_CERTS_FILE}
- DEBUG=false
- SSO_INTERNAL_URL=${KEYCLOAK_BACKEND}
- TZ=${TZ}
volumes:
- ./ssl:/tmp/ssl
# Deployment-init reads all variable directly from .env file
- ./.env:/rapidminer/.env
- ./docker-compose.yml:/docker-compose.yml:ro
- keycloak-kcadm-vol:/rapidminer/.keycloak/
- deployed-services-vol:/rapidminer/deployed-services/
networks:
platform-int-net:
aliases:
- deployment-init
aihub-db-net:
aliases:
- deployment-init
profiles:
- deployment-init
license-proxy:
image: "${REGISTRY}rapidminer-licenseproxy:${LICENSE_PROXY_VERSION}"
hostname: license-proxy
restart: always
environment:
- SPRING_PROFILES_ACTIVE=${LICENSE_PROXY_PROFILES_ACTIVE}
- KEYCLOAK_AUTH_SERVER_URL=${SSO_PUBLIC_URL}/auth/
- KEYCLOAK_REALM=${SSO_IDP_REALM}
- LICENSE_PROXY_MODE=${LICENSE_PROXY_MODE}
- LICENSE_UNIT_MANAGER_AUTHENTICATION_TYPE=${LICENSE_UNIT_MANAGER_AUTHENTICATION_TYPE}
- LICENSE_UNIT_MANAGER_AUTH_CODE=${LICENSE_UNIT_MANAGER_AUTH_CODE}
- LICENSE_UNIT_MANAGER_USER_NAME=${LICENSE_UNIT_MANAGER_USER_NAME}
- LICENSE_UNIT_MANAGER_PASSWORD=${LICENSE_UNIT_MANAGER_PASSWORD}
- LICENSE_UNIT_MANAGER_RESET_AUTH_TOKEN=${LICENSE_UNIT_MANAGER_RESET_AUTH_TOKEN}
- LICENSE_UNIT_MANAGER_TOKEN=${LICENSE_UNIT_MANAGER_TOKEN}
# - DEBUG=true
- TZ=${TZ}
- ALTAIR_LICENSE_PATH=${ALTAIR_LICENSE_PATH}
volumes:
- license-proxy-vol:/license-proxy/home
depends_on:
keycloak:
condition: service_healthy
networks:
platform-int-net:
aliases:
- license-proxy
profiles:
- altair-license
healthcheck:
test: curl -s http://localhost:9898/actuator/health
interval: 60s
timeout: 30s
retries: 5
start_period: 15s
aihub-postgresql:
image: "${REGISTRY}postgres-14:${AIHUB_POSTGRES_VERSION}"
hostname: aihub-postgresql
restart: always
environment:
- POSTGRES_DB=${AIHUB_DBSCHEMA}
- POSTGRES_USER=${AIHUB_DBUSER}
- POSTGRES_PASSWORD=${AIHUB_DBPASS}
- POSTGRES_INITDB_ARGS=${AIHUB_POSTGRES_INITDB_ARGS}
- TZ=${TZ}
- PGTZ=${TZ}
volumes:
- aihub-db-vol:/var/lib/postgresql/data
networks:
aihub-db-net:
aliases:
- aihub-postgresql
profiles:
- aihub-backend
- deployment-init
healthcheck:
test: pg_isready -d ${AIHUB_DBSCHEMA} -U ${AIHUB_DBUSER}
interval: 60s
timeout: 30s
retries: 5
start_period: 15s
aihub-frontend:
image: ${REGISTRY}rapidminer-aihub-ui:${AIHUB_FRONTEND_VERSION}
hostname: aihub-frontend
restart: always
environment:
- REACT_APP_API_URL=${PUBLIC_URL}/api/v1/
- REACT_APP_KEYCLOAK_BASE_URL=${SSO_PUBLIC_URL}/auth
- REACT_APP_KEYCLOAK_REALM=${SSO_IDP_REALM}
- REACT_APP_KEYCLOAK_CLIENT_ID=${AIHUB_FRONTEND_SSO_CLIENT_ID}
- REACT_APP_KEYCLOAK_ON_LOAD=login-required
- REACT_APP_KEYCLOAK_SSL_REQUIRED=${SSO_SSL_REQUIRED}
- REACT_APP_WEBAPI_GATEWAY_URL=${PUBLIC_URL}/webapi
- REACT_APP_GATEWAY=${PUBLIC_URL}/webapi
- TZ=${TZ}
depends_on:
proxy:
condition: service_healthy
aihub-backend:
condition: service_healthy
keycloak:
condition: service_healthy
networks:
platform-int-net:
aliases:
- aihub-frontend
profiles:
- aihub-frontend
healthcheck:
test: service nginx status
interval: 60s
timeout: 30s
retries: 5
start_period: 5s
aihub-activemq:
image: ${REGISTRY}rapidminer-activemq-artemis:${ACTIVEMQ_VERSION}
hostname: aihub-activemq
restart: always
environment:
- BROKER_ACTIVEMQ_HOST=aihub-activemq
- BROKER_ACTIVEMQ_PORT=61616
- BROKER_ACTIVEMQ_USERNAME=${BROKER_ACTIVEMQ_USERNAME}
- BROKER_ACTIVEMQ_PASSWORD=${BROKER_ACTIVEMQ_PASSWORD}
- ARTEMIS_USERNAME=${BROKER_ACTIVEMQ_USERNAME}
- ARTEMIS_PASSWORD=${BROKER_ACTIVEMQ_PASSWORD}
- TZ=${TZ}
networks:
platform-int-net:
aliases:
- aihub-activemq
profiles:
- aihub-activemq
volumes:
- activemq-artemis-vol:/var/lib/artemis/data
healthcheck:
test: bash -c "exec 6<> /dev/tcp/localhost/8161"
interval: 60s
timeout: 30s
retries: 5
start_period: 15s
aihub-backend:
image: ${REGISTRY}rapidminer-aihub:${AIHUB_BACKEND_VERSION}
hostname: aihub-backend
#cpuset: ${AIHUB_BACKEND_CPUSET}
restart: always
environment:
#- LOGGING_LEVEL_COM_RAPIDMINER=DEBUG
- SERVER_FORWARD_HEADERS_STRATEGY=framework
- DB_HOST=${AIHUB_DBHOST}
- DB_PORT=5432
- DB_NAME=${AIHUB_DBSCHEMA}
- DB_USER=${AIHUB_DBUSER}
- DB_PASSWORD=${AIHUB_DBPASS}
- KEYCLOAK_REALM=${SSO_IDP_REALM}
- AUTH_REALM=${SSO_IDP_REALM}
- KEYCLOAK_AUTH_SERVER_URL=${SSO_PUBLIC_URL}/auth/
- KEYCLOAK_RESOURCE=${AIHUB_BACKEND_SSO_CLIENT_ID}
- KEYCLOAK_SSL_REQUIRED=${SSO_SSL_REQUIRED}
# SMTP settings
# - SPRING_MAIL_HOST=${SPRING_MAIL_HOST}
# - SPRING_MAIL_PORT=${SPRING_MAIL_PORT}
# - SPRING_MAIL_USERNAME=${SPRING_MAIL_USERNAME}
# - SPRING_MAIL_PASSWORD=${SPRING_MAIL_PASSWORD}
# - SPRING_MAIL_PROPERTIES_MAIL_SMTP_AUTH=${SPRING_MAIL_PROPERTIES_MAIL_SMTP_AUTH}
# - SPRING_MAIL_PROPERTIES_MAIL_SMTP_STARTTLS_ENABLE=${SPRING_MAIL_PROPERTIES_MAIL_SMTP_STARTTLS_ENABLE}
# - REPORTING_ERROR_MAIL_TO=${REPORTING_ERROR_MAIL_TO}
# - REPORTING_ERROR_MAIL_SUBJECT_PREFIX=${REPORTING_ERROR_MAIL_SUBJECT_PREFIX}
# - REPORTING_ERROR_MAIL_FROM_ADDRESS=${REPORTING_ERROR_MAIL_FROM_ADDRESS}
# - REPORTING_ERROR_MAIL_FROM_NAME=${REPORTING_ERROR_MAIL_FROM_NAME}
#
# Automatic Job Cleanup
- JOBSERVICE_SCHEDULED_ARCHIVE_JOB_CLEANUP_ENABLED=${JOBSERVICE_SCHEDULED_ARCHIVE_JOB_CLEANUP_ENABLED}
- JOBSERVICE_SCHEDULED_ARCHIVE_JOB_CLEANUP_JOB_CRON_EXPRESSION=${JOBSERVICE_SCHEDULED_ARCHIVE_JOB_CLEANUP_JOB_CRON_EXPRESSION}
- JOBSERVICE_SCHEDULED_ARCHIVE_JOB_CLEANUP_JOB_CONTEXT_CRON_EXPRESSION=${JOBSERVICE_SCHEDULED_ARCHIVE_JOB_CLEANUP_JOB_CONTEXT_CRON_EXPRESSION}
- JOBSERVICE_SCHEDULED_ARCHIVE_JOB_CLEANUP_MAX_AGE=${JOBSERVICE_SCHEDULED_ARCHIVE_JOB_CLEANUP_MAX_AGE}
- JOBSERVICE_SCHEDULED_ARCHIVE_JOB_CLEANUP_JOB_BATCH_SIZE=${JOBSERVICE_SCHEDULED_ARCHIVE_JOB_CLEANUP_JOB_BATCH_SIZE}
- JOBSERVICE_SCHEDULED_ARCHIVE_JOB_CLEANUP_JOB_CONTEXT_BATCH_SIZE=${JOBSERVICE_SCHEDULED_ARCHIVE_JOB_CLEANUP_JOB_CONTEXT_BATCH_SIZE}
# Here AIHUB_CONNECTION_PROTOCOL, AIHUB_CONNECTION_HOST and AIHUB_CONNECTION_PORT shall be the public facing ones
- AIHUB_CONNECTION_PROTOCOL=${PUBLIC_PROTOCOL}
- AIHUB_CONNECTION_HOST=${PUBLIC_DOMAIN}
- AIHUB_CONNECTION_PORT=${PUBLIC_PORT}
- AUTH_SERVICE_CLIENT_ID=${AIHUB_BACKEND_SSO_CLIENT_ID}
- AUTH_SERVICE_CLIENT_SECRET=${AIHUB_BACKEND_SSO_CLIENT_SECRET}
- WEBAPI_REGISTRY_USERNAME=${WEBAPI_REGISTRY_USERNAME}
- WEBAPI_REGISTRY_PASSWORD=${WEBAPI_REGISTRY_PASSWORD}
- TZ=${TZ}
- BROKER_ACTIVEMQ_HOST=aihub-activemq
- BROKER_ACTIVEMQ_PORT=61616
- BROKER_ACTIVEMQ_USERNAME=${BROKER_ACTIVEMQ_USERNAME}
- BROKER_ACTIVEMQ_PASSWORD=${BROKER_ACTIVEMQ_PASSWORD}
- REPOSITORIES_MAX_UPLOAD_SIZE=${PROXY_DATA_UPLOAD_LIMIT}
- LICENSE_MODE=${LICENSE_MODE}
# RapidMiner licensing
- LICENSE_LICENSE=${LICENSE}
# Altair Unit Licensing
- LICENSE_AGENT_PROXY_URL=${LICENSE_PROXY_INTERNAL_URL}
- LICENSE_AGENT_MACHINE_ID=${LICENSE_AGENT_MACHINE_ID}
- RAPIDMINER_LOAD_USER_CERTIFICATES=${RAPIDMINER_LOAD_USER_CERTIFICATES}
- SPRING_PROFILES_ACTIVE=${AIHUB_BACKEND_PROFILES_ACTIVE}
volumes:
- aihub-home-vol:/aihub/home
depends_on:
aihub-postgresql:
condition: service_healthy
aihub-activemq:
condition: service_healthy
license-proxy:
condition: service_healthy
networks:
jupyterhub-user-net:
aliases:
- aihub-backend
platform-int-net:
aliases:
- aihub-backend
aihub-db-net:
aliases:
- aihub-backend
profiles:
- aihub-backend
healthcheck:
test: curl -s http://localhost:8080/api/v1/healthcheck
interval: 60s
timeout: 30s
retries: 5
start_period: 60s
aihub-job-agent:
image: ${REGISTRY}rapidminer-jobagent:${JOBAGENT_VERSION}
hostname: aihub-job-agent
#cpuset: ${JOBAGENT_CPUSET}
restart: always
environment:
#- LOGGING_LEVEL_COM_RAPIDMINER=DEBUG
- SPRING_PROFILES_ACTIVE=${JOBAGENT_SPRING_PROFILES_ACTIVE}
- JOBAGENT_NAME=${JOBAGENT_NAME}
- JOBAGENT_AUTH_AUTH_SERVER_URL=${SSO_PUBLIC_URL}/auth
- JOBAGENT_AUTH_REALM=${SSO_IDP_REALM}
- JOBAGENT_AUTH_SERVICE_CLIENT_ID=${JOBAGENT_SSO_CLIENT_ID}
- JOBAGENT_AUTH_SERVICE_CLIENT_SECRET=${JOBAGENT_SSO_CLIENT_SECRET}
- JOBAGENT_QUEUE_ACTIVEMQ_USERNAME=${BROKER_ACTIVEMQ_USERNAME}
- JOBAGENT_QUEUE_ACTIVEMQ_PASSWORD=${BROKER_ACTIVEMQ_PASSWORD}
- BROKER_ACTIVEMQ_HOST=aihub-activemq
- BROKER_ACTIVEMQ_PORT=61616
- BROKER_ACTIVEMQ_USERNAME=${BROKER_ACTIVEMQ_USERNAME}
- BROKER_ACTIVEMQ_PASSWORD=${BROKER_ACTIVEMQ_PASSWORD}
- AIHUB_CONNECTION_PROTOCOL=http
- AIHUB_CONNECTION_HOST=aihub-backend
- AIHUB_CONNECTION_PORT=8080
- JOBAGENT_CONTAINER_COUNT=${JOBAGENT_CONTAINER_COUNT}
- JOBAGENT_QUEUE_JOB_REQUEST=${JOBAGENT_QUEUE_JOB_REQUEST}
- JOBAGENT_CONTAINER_MEMORYLIMIT=${JOBAGENT_CONTAINER_MEMORYLIMIT}
- INIT_SHARED_CONDA_SETTINGS=${INIT_SHARED_CONDA_SETTINGS}
- TZ=${TZ}
- LICENSE_MODE=${LICENSE_MODE}
# Altair Unit Licensing
- JOBAGENT_LICENSE_AGENT_PROXY_URL=${LICENSE_PROXY_INTERNAL_URL}
- JOBAGENT_LICENSE_AGENT_MACHINE_ID=${LICENSE_AGENT_MACHINE_ID}
- JOBAGENT_CONTAINER_LOAD_USER_CERTIFICATES=${JOBAGENT_CONTAINER_LOAD_USER_CERTIFICATES}
- JOBAGENT_CONTAINER_JVM_CUSTOM_OPTIONS=${JOBAGENT_CONTAINER_JVM_CUSTOM_OPTIONS}
volumes:
- coding-shared-vol:/opt/coding-shared/:ro
- job-agent-vol:/jobagent/home
- job-agent-huggingface-vol:/home/rapidminer/.cache/huggingface
depends_on:
aihub-backend:
condition: service_healthy
aihub-activemq:
condition: service_healthy
license-proxy:
condition: service_healthy
networks:
platform-int-net:
aliases:
- aihub-job-agent
profiles:
- aihub-job-agent
healthcheck:
test: curl -s http://localhost:8066/system/health
interval: 60s
timeout: 30s
retries: 5
start_period: 30s
platform-admin:
image: "${REGISTRY}rapidminer-platform-admin-webui:${PLATFORM_ADMIN_VERSION}"
hostname: platform-admin
restart: always
environment:
- PLATFORM_ADMIN_URL_SUFFIX=${PLATFORM_ADMIN_URL_SUFFIX}
- PLATFORM_ADMIN_DATA_UPLOAD_LIMIT=${PROXY_DATA_UPLOAD_LIMIT}
- SCORING_AGENT_URL_SUFFIX=${SCORING_AGENT_URL_SUFFIX}
- SCORING_AGENT_BACKEND=${SCORING_AGENT_BACKEND}
- SSO_PUBLIC_URL=${SSO_PUBLIC_URL}
- SSO_IDP_REALM=${SSO_IDP_REALM}
- SSO_CLIENT_ID=${PLATFORM_ADMIN_SSO_CLIENT_ID}
- SSO_CLIENT_SECRET=${PLATFORM_ADMIN_SSO_CLIENT_SECRET}
- PLATFORM_ADMIN_DISABLE_PYTHON=${PLATFORM_ADMIN_DISABLE_PYTHON}
- PLATFORM_ADMIN_DISABLE_RTS=${PLATFORM_ADMIN_DISABLE_RTS}
- DEBUG=false
- CES_VERSION=${CES_VERSION}
- TZ=${TZ}
volumes:
- platform-admin-uploaded-vol:/var/www/html/uploaded/
networks:
jupyterhub-user-net:
aliases:
- platform-admin
platform-int-net:
aliases:
- platform-admin
coding-environment-storage-net:
aliases:
- platform-admin
profiles:
- platform-admin
healthcheck:
test: service apache2 status
interval: 60s
timeout: 30s
retries: 5
start_period: 5s
webapi-gateway:
image: ${REGISTRY}rapidminer-webapi-gateway:${WEBAPI_GATEWAY_VERSION}
hostname: webapi-gateway
container_name: webapi-gateway
restart: always
environment:
- WEBAPI_REGISTRY_HOST=${PUBLIC_DOMAIN}
- WEBAPI_REGISTRY_PROTOCOL=${PUBLIC_PROTOCOL}
- WEBAPI_REGISTRY_PORT=${PUBLIC_PORT}
- WEBAPI_REGISTRY_USERNAME=${WEBAPI_REGISTRY_USERNAME}
- WEBAPI_REGISTRY_PASSWORD=${WEBAPI_REGISTRY_PASSWORD}
- SPRING_CLOUD_GATEWAY_HTTPCLIENT_CONNECT_TIMEOUT=${WEBAPI_GATEWAY_SPRING_CLOUD_GATEWAY_HTTPCLIENT_CONNECT_TIMEOUT}
- SPRING_CLOUD_GATEWAY_HTTPCLIENT_RESPONSE_TIMEOUT=${WEBAPI_GATEWAY_SPRING_CLOUD_GATEWAY_HTTPCLIENT_RESPONSE_TIMEOUT}
- RETRY_BACKOFF_ENABLED=${WEBAPI_GATEWAY_RETRY_BACKOFF_ENABLED}
- RETRY_BACKOFF_INTERVAL=${WEBAPI_GATEWAY_RETRY_BACKOFF_INTERVAL}
- RETRY_EXCEPTIONS=${WEBAPI_GATEWAY_RETRY_EXCEPTIONS}
- RETRY_METHODS=${WEBAPI_GATEWAY_RETRY_METHODS}
- RETRY_STATUS=${WEBAPI_GATEWAY_RETRY_STATUS}
- RETRY_SERIES=${WEBAPI_GATEWAY_RETRY_SERIES}
- RETRY_GROUP_RETRIES=${WEBAPI_GATEWAY_RETRY_GROUP_RETRIES}
- RETRY_AGENT_RETRIES=${WEBAPI_GATEWAY_RETRY_AGENT_RETRIES}
- LOADBALANCER_CLEAN_UP_INTERVAL=${WEBAPI_GATEWAY_LOADBALANCER_CLEAN_UP_INTERVAL}
- LOADBALANCER_REQUEST_TIMEOUT=${WEBAPI_GATEWAY_LOADBALANCER_REQUEST_TIMEOUT}
- LOADBALANCER_REQUEST_INTERVAL=${WEBAPI_GATEWAY_LOADBALANCER_REQUEST_INTERVAL}
- LOADBALANCER_METRIC_STYLE=${WEBAPI_GATEWAY_LOADBALANCER_METRIC_STYLE}
- TZ=${TZ}
- SPRING_PROFILES_ACTIVE=${WEBAPI_GATEWAY_PROFILES_ACTIVE}
depends_on:
license-proxy:
condition: service_healthy
aihub-backend:
condition: service_healthy
profiles:
- aihub-webapi-gateway
networks:
panopticon-net:
platform-int-net:
aliases:
- aihub-webapi-gateway
healthcheck:
test: curl -s http://localhost:8099/system/health
interval: 60s
timeout: 30s
retries: 5
start_period: 30s
webapi-agent-1:
image: ${REGISTRY}rapidminer-scoringagent:${WEBAPI_AGENT_VERSION}
hostname: webapi-agent-1
container_name: webapi-agent-1
#cpuset: ${WEBAPI_AGENT_CPUSET_1}
restart: always
environment:
- TZ=${TZ}
- CES_VERSION=${CES_VERSION}
- INIT_SHARED_CONDA_SETTINGS=true
- SPRING_PROFILES_ACTIVE=${WEBAPI_AGENT_SPRING_PROFILES_ACTIVE}
- SCORING_AGENT_MAX_UPLOAD_SIZE=${PROXY_DATA_UPLOAD_LIMIT}
- RAPIDMINER_SCORING_AGENT_OPTS=${WEBAPIAGENT_OPTS}
- SCORING_AGENT_CACHE_REPOSITORY_CLEAR_ON_COLLECTION=${WEBAPI_AGENT_CACHE_REPOSITORY_CLEAR_ON_COLLECTION}
- SCORING_AGENT_CACHE_REPOSITORY_MAXIMUM_SIZE=${WEBAPI_AGENT_CACHE_REPOSITORY_MAXIMUM_SIZE}
- SCORING_AGENT_CACHE_REPOSITORY_ACCESS_EXPIRATION=${WEBAPI_AGENT_CACHE_REPOSITORY_ACCESS_EXPIRATION}
- SCORING_AGENT_CACHE_REPOSITORY_COPY_CACHED_IOOBJECTS=${WEBAPI_AGENT_CACHE_REPOSITORY_COPY_CACHED_IOOBJECTS}
- SCORING_AGENT_CORS_PATH_PATTERN=${WEBAPI_AGENT_CORS_PATH_PATTERN}
- SCORING_AGENT_CORS_ALLOWED_METHODS=${WEBAPI_AGENT_CORS_ALLOWED_METHODS}
- SCORING_AGENT_CORS_ALLOWED_HEADERS=${WEBAPI_AGENT_CORS_ALLOWED_HEADERS}
- SCORING_AGENT_CORS_ALLOWED_ORIGINS=${WEBAPI_AGENT_CORS_ALLOWED_ORIGINS}
- SCORING_AGENT_REST_CONTEXT_PATH=${WEBAPI_AGENT_REST_CONTEXT_PATH}
- SCORING_AGENT_TASK_SCHEDULER_POOL_SIZE=${WEBAPI_AGENT_TASK_SCHEDULER_POOL_SIZE}
- SCORING_AGENT_TASK_SCHEDULER_THREAD_PRIORITY=${WEBAPI_AGENT_TASK_SCHEDULER_THREAD_PRIORITY}
- SCORING_AGENT_EXECUTION_CLEANUP_ENABLED=${WEBAPI_AGENT_EXECUTION_CLEANUP_ENABLED}
- SCORING_AGENT_EXECUTION_CLEANUP_CRON_EXPRESSION=${WEBAPI_AGENT_EXECUTION_CLEANUP_CRON_EXPRESSION}
- SCORING_AGENT_EXECUTION_CLEANUP_TIMEOUT=${WEBAPI_AGENT_EXECUTION_CLEANUP_TIMEOUT}
- SCORING_AGENT_EXECUTION_CLEANUP_WAIT_BETWEEN=${WEBAPI_AGENT_EXECUTION_CLEANUP_WAIT_BETWEEN}
- SCORING_AGENT_AUDIT_ENABLED=${WEBAPI_AGENT_AUDIT_ENABLED}
#- LOGGING_LEVEL_ROOT=DEBUG
- SCORING_AGENT_AUTH_REALM=${SSO_IDP_REALM}
- SCORING_AGENT_AUTH_AUTH_SERVER_URL=${SSO_PUBLIC_URL}/auth
- SCORING_AGENT_AUTH_SERVICE_CLIENT_ID=${WEBAPI_AGENT_SSO_CLIENT_ID}
- SCORING_AGENT_AUTH_SERVICE_CLIENT_SECRET=${WEBAPI_AGENT_SSO_CLIENT_SECRET}
- LICENSE_MODE=${SCORING_AGENT_LICENSE_MODE:-$LICENSE_MODE}
# RapidMiner licensing
- WAIT_FOR_LICENSES=${WAIT_FOR_LICENSES}
- SCORING_AGENT_ENABLE_SERVER_LICENSE=${SCORING_AGENT_ENABLE_SERVER_LICENSE}
- LICENSE_LICENSE=${LICENSE}
# Altair Unit Licensing
- SCORING_AGENT_LICENSE_AGENT_PROXY_URL=${LICENSE_PROXY_INTERNAL_URL}
- SCORING_AGENT_LICENSE_AGENT_MACHINE_ID=${LICENSE_AGENT_MACHINE_ID}
- SPRING_SECURITY_OAUTH2_RESOURCESERVER_JWT_ISSUER_URI=${SSO_PUBLIC_URL}/auth/realms/${SSO_IDP_REALM}
- WEBAPI_REGISTRY_HOST=${PUBLIC_DOMAIN}
- WEBAPI_REGISTRY_PROTOCOL=${PUBLIC_PROTOCOL}
- WEBAPI_REGISTRY_PORT=${PUBLIC_PORT}
- WEBAPI_REGISTRY_USERNAME=${WEBAPI_REGISTRY_USERNAME}
- WEBAPI_REGISTRY_PASSWORD=${WEBAPI_REGISTRY_PASSWORD}
- AIHUB_CONNECTION_PROTOCOL=${PUBLIC_PROTOCOL}
- AIHUB_CONNECTION_HOST=${PUBLIC_DOMAIN}
- AIHUB_CONNECTION_PORT=${PUBLIC_PORT}
- EUREKA_INSTANCE_HOSTNAME=webapi-agent-1
- EUREKA_INSTANCE_PREFER_IP_ADDRESS=false
- SCORING_AGENT_RAPIDMINER_LOAD_USER_CERTIFICATES=${SCORING_AGENT_RAPIDMINER_LOAD_USER_CERTIFICATES}
networks:
panopticon-net:
platform-int-net:
aliases:
- aihub-webapi-agent-1
profiles:
- aihub-webapi-agent-1
volumes:
      # Uncomment the line below to specify the Altair license file for the Scoring Agent in case of LICENSE_MODE=ALTAIR_STANDALONE
# - ${PWD}/altair_standalone.dat:/scoring-agent/home/resources/licenses/altair_standalone.dat
- webapi-agent-1:/scoring-agent/home
- coding-shared-vol:/opt/coding-shared/:ro
depends_on:
webapi-gateway:
condition: service_healthy
license-proxy:
condition: service_healthy
healthcheck:
test: curl -s http://localhost:8090/system/health
interval: 60s
timeout: 30s
retries: 5
start_period: 30s
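  # webapi-agent-2 is a second, identically configured Web API agent instance. To scale out further,
  # you can copy one of these blocks and adjust the hostname, container_name, alias, volume and profile suffix.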
webapi-agent-2:
image: ${REGISTRY}rapidminer-scoringagent:${WEBAPI_AGENT_VERSION}
hostname: webapi-agent-2
container_name: webapi-agent-2
#cpuset: ${WEBAPI_AGENT_CPUSET_2}
restart: always
environment:
- TZ=${TZ}
- CES_VERSION=${CES_VERSION}
- INIT_SHARED_CONDA_SETTINGS=true
- SPRING_PROFILES_ACTIVE=${WEBAPI_AGENT_SPRING_PROFILES_ACTIVE}
- SCORING_AGENT_MAX_UPLOAD_SIZE=${PROXY_DATA_UPLOAD_LIMIT}
- RAPIDMINER_SCORING_AGENT_OPTS=${WEBAPIAGENT_OPTS}
#- LOGGING_LEVEL_ROOT=DEBUG
- SCORING_AGENT_CACHE_REPOSITORY_CLEAR_ON_COLLECTION=${WEBAPI_AGENT_CACHE_REPOSITORY_CLEAR_ON_COLLECTION}
- SCORING_AGENT_CACHE_REPOSITORY_MAXIMUM_SIZE=${WEBAPI_AGENT_CACHE_REPOSITORY_MAXIMUM_SIZE}
- SCORING_AGENT_CACHE_REPOSITORY_ACCESS_EXPIRATION=${WEBAPI_AGENT_CACHE_REPOSITORY_ACCESS_EXPIRATION}
- SCORING_AGENT_CACHE_REPOSITORY_COPY_CACHED_IOOBJECTS=${WEBAPI_AGENT_CACHE_REPOSITORY_COPY_CACHED_IOOBJECTS}
- SCORING_AGENT_CORS_PATH_PATTERN=${WEBAPI_AGENT_CORS_PATH_PATTERN}
- SCORING_AGENT_CORS_ALLOWED_METHODS=${WEBAPI_AGENT_CORS_ALLOWED_METHODS}
- SCORING_AGENT_CORS_ALLOWED_HEADERS=${WEBAPI_AGENT_CORS_ALLOWED_HEADERS}
- SCORING_AGENT_CORS_ALLOWED_ORIGINS=${WEBAPI_AGENT_CORS_ALLOWED_ORIGINS}
- SCORING_AGENT_REST_CONTEXT_PATH=${WEBAPI_AGENT_REST_CONTEXT_PATH}
- SCORING_AGENT_TASK_SCHEDULER_POOL_SIZE=${WEBAPI_AGENT_TASK_SCHEDULER_POOL_SIZE}
- SCORING_AGENT_TASK_SCHEDULER_THREAD_PRIORITY=${WEBAPI_AGENT_TASK_SCHEDULER_THREAD_PRIORITY}
- SCORING_AGENT_EXECUTION_CLEANUP_ENABLED=${WEBAPI_AGENT_EXECUTION_CLEANUP_ENABLED}
- SCORING_AGENT_EXECUTION_CLEANUP_CRON_EXPRESSION=${WEBAPI_AGENT_EXECUTION_CLEANUP_CRON_EXPRESSION}
- SCORING_AGENT_EXECUTION_CLEANUP_TIMEOUT=${WEBAPI_AGENT_EXECUTION_CLEANUP_TIMEOUT}
- SCORING_AGENT_EXECUTION_CLEANUP_WAIT_BETWEEN=${WEBAPI_AGENT_EXECUTION_CLEANUP_WAIT_BETWEEN}
- SCORING_AGENT_AUDIT_ENABLED=${WEBAPI_AGENT_AUDIT_ENABLED}
- SCORING_AGENT_AUTH_REALM=${SSO_IDP_REALM}
- SCORING_AGENT_AUTH_AUTH_SERVER_URL=${SSO_PUBLIC_URL}/auth
- SCORING_AGENT_AUTH_SERVICE_CLIENT_ID=${WEBAPI_AGENT_SSO_CLIENT_ID}
- SCORING_AGENT_AUTH_SERVICE_CLIENT_SECRET=${WEBAPI_AGENT_SSO_CLIENT_SECRET}
- LICENSE_MODE=${SCORING_AGENT_LICENSE_MODE:-$LICENSE_MODE}
# RapidMiner licensing
- WAIT_FOR_LICENSES=${WAIT_FOR_LICENSES}
- SCORING_AGENT_ENABLE_SERVER_LICENSE=${SCORING_AGENT_ENABLE_SERVER_LICENSE}
- LICENSE_LICENSE=${LICENSE}
# Altair Unit Licensing
- SCORING_AGENT_LICENSE_AGENT_PROXY_URL=${LICENSE_PROXY_INTERNAL_URL}
- SCORING_AGENT_LICENSE_AGENT_MACHINE_ID=${LICENSE_AGENT_MACHINE_ID}
- SPRING_SECURITY_OAUTH2_RESOURCESERVER_JWT_ISSUER_URI=${SSO_PUBLIC_URL}/auth/realms/${SSO_IDP_REALM}
- WEBAPI_REGISTRY_HOST=${PUBLIC_DOMAIN}
- WEBAPI_REGISTRY_PROTOCOL=${PUBLIC_PROTOCOL}
- WEBAPI_REGISTRY_PORT=${PUBLIC_PORT}
- WEBAPI_REGISTRY_USERNAME=${WEBAPI_REGISTRY_USERNAME}
- WEBAPI_REGISTRY_PASSWORD=${WEBAPI_REGISTRY_PASSWORD}
- AIHUB_CONNECTION_PROTOCOL=${PUBLIC_PROTOCOL}
- AIHUB_CONNECTION_HOST=${PUBLIC_DOMAIN}
- AIHUB_CONNECTION_PORT=${PUBLIC_PORT}
- EUREKA_INSTANCE_HOSTNAME=webapi-agent-2
- EUREKA_INSTANCE_PREFER_IP_ADDRESS=false
- SCORING_AGENT_RAPIDMINER_LOAD_USER_CERTIFICATES=${SCORING_AGENT_RAPIDMINER_LOAD_USER_CERTIFICATES}
networks:
panopticon-net:
platform-int-net:
aliases:
- aihub-webapi-agent-2
profiles:
- aihub-webapi-agent-2
volumes:
- webapi-agent-2:/scoring-agent/home
- coding-shared-vol:/opt/coding-shared/:ro
depends_on:
webapi-gateway:
condition: service_healthy
license-proxy:
condition: service_healthy
healthcheck:
test: curl -s http://localhost:8090/system/health
interval: 60s
timeout: 30s
retries: 5
start_period: 30s
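  # Standalone Real-Time Scoring Agent (scoring-agent profile). Unlike the Web API agents above,
  # it does not register with the webapi-gateway and only depends on the license proxy.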
scoring-agent:
image: "${REGISTRY}rapidminer-scoringagent:${SCORING_AGENT_VERSION}"
hostname: scoring-agent
restart: always
#cpuset: ${SCORING_AGENT_CPUSET}
environment:
- TZ=${TZ}
- CES_VERSION=${CES_VERSION}
- INIT_SHARED_CONDA_SETTINGS=true
- SPRING_PROFILES_ACTIVE=${SCORING_AGENT_SPRING_PROFILES_ACTIVE}
- SCORING_AGENT_MAX_UPLOAD_SIZE=${PROXY_DATA_UPLOAD_LIMIT}
- SCORING_AGENT_CACHE_REPOSITORY_CLEAR_ON_COLLECTION=${SCORING_AGENT_CACHE_REPOSITORY_CLEAR_ON_COLLECTION}
- SCORING_AGENT_CACHE_REPOSITORY_MAXIMUM_SIZE=${SCORING_AGENT_CACHE_REPOSITORY_MAXIMUM_SIZE}
- SCORING_AGENT_CACHE_REPOSITORY_ACCESS_EXPIRATION=${SCORING_AGENT_CACHE_REPOSITORY_ACCESS_EXPIRATION}
- SCORING_AGENT_CACHE_REPOSITORY_COPY_CACHED_IOOBJECTS=${SCORING_AGENT_CACHE_REPOSITORY_COPY_CACHED_IOOBJECTS}
- SCORING_AGENT_CORS_PATH_PATTERN=${SCORING_AGENT_CORS_PATH_PATTERN}
- SCORING_AGENT_CORS_ALLOWED_METHODS=${SCORING_AGENT_CORS_ALLOWED_METHODS}
- SCORING_AGENT_CORS_ALLOWED_HEADERS=${SCORING_AGENT_CORS_ALLOWED_HEADERS}
- SCORING_AGENT_CORS_ALLOWED_ORIGINS=${SCORING_AGENT_CORS_ALLOWED_ORIGINS}
- SCORING_AGENT_REST_CONTEXT_PATH=${SCORING_AGENT_REST_CONTEXT_PATH}
- SCORING_AGENT_TASK_SCHEDULER_POOL_SIZE=${SCORING_AGENT_TASK_SCHEDULER_POOL_SIZE}
- SCORING_AGENT_TASK_SCHEDULER_THREAD_PRIORITY=${SCORING_AGENT_TASK_SCHEDULER_THREAD_PRIORITY}
- SCORING_AGENT_EXECUTION_CLEANUP_ENABLED=${SCORING_AGENT_EXECUTION_CLEANUP_ENABLED}
- SCORING_AGENT_EXECUTION_CLEANUP_CRON_EXPRESSION=${SCORING_AGENT_EXECUTION_CLEANUP_CRON_EXPRESSION}
- SCORING_AGENT_EXECUTION_CLEANUP_TIMEOUT=${SCORING_AGENT_EXECUTION_CLEANUP_TIMEOUT}
- SCORING_AGENT_EXECUTION_CLEANUP_WAIT_BETWEEN=${SCORING_AGENT_EXECUTION_CLEANUP_WAIT_BETWEEN}
- SCORING_AGENT_AUDIT_ENABLED=${SCORING_AGENT_AUDIT_ENABLED}
# - DEBUG=true
- SCORING_AGENT_AUTH_REALM=${SSO_IDP_REALM}
- SCORING_AGENT_AUTH_AUTH_SERVER_URL=${SSO_PUBLIC_URL}/auth
- SCORING_AGENT_AUTH_SERVICE_CLIENT_ID=${SCORING_AGENT_SSO_CLIENT_ID}
- SCORING_AGENT_AUTH_SERVICE_CLIENT_SECRET=${SCORING_AGENT_SSO_CLIENT_SECRET}
- LICENSE_MODE=${SCORING_AGENT_LICENSE_MODE:-$LICENSE_MODE}
# RapidMiner licensing
- WAIT_FOR_LICENSES=${WAIT_FOR_LICENSES}
- SCORING_AGENT_ENABLE_SERVER_LICENSE=${SCORING_AGENT_ENABLE_SERVER_LICENSE}
- LICENSE_LICENSE=${LICENSE}
# Altair Unit Licensing
- SCORING_AGENT_LICENSE_AGENT_PROXY_URL=${LICENSE_PROXY_INTERNAL_URL}
- SCORING_AGENT_LICENSE_AGENT_MACHINE_ID=${LICENSE_AGENT_MACHINE_ID}
- SCORING_AGENT_RAPIDMINER_LOAD_USER_CERTIFICATES=${SCORING_AGENT_RAPIDMINER_LOAD_USER_CERTIFICATES}
volumes:
      # Uncomment the line below to specify the Altair license file for the Scoring Agent in case of LICENSE_MODE=ALTAIR_STANDALONE
# - ${PWD}/altair_standalone.dat:/scoring-agent/home/resources/licenses/altair_standalone.dat
- coding-shared-vol:/opt/coding-shared/:ro
- scoring-agent-vol:/scoring-agent/home
    # In case of LICENSE_MODE=ALTAIR_STANDALONE, set the service-level mac_address key below to the Ethernet
    # address of the docker host. It must match the MAC address in your license.
    # mac_address: 00:00:00:00:00:00
    depends_on:
license-proxy:
condition: service_healthy
networks:
platform-int-net:
aliases:
- scoring-agent
profiles:
- scoring-agent
healthcheck:
test: bash -c "exec 6<> /dev/tcp/localhost/8090"
interval: 60s
timeout: 30s
retries: 5
start_period: 30s
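  # PostgreSQL database backing JupyterHub (jupyter profile).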
jupyterhub-db:
image: ${REGISTRY}rapidminer-jupyterhub-postgres:${JUPYTERHUB_VERSION}
hostname: jupyterhub-db
restart: always
environment:
- POSTGRES_DB=${JUPYTERHUB_DBSCHEMA}
- POSTGRES_USER=${JUPYTERHUB_DBUSER}
- POSTGRES_PASSWORD=${JUPYTERHUB_DBPASS}
- POSTGRES_INITDB_ARGS=${JUPYTERHUB_POSTGRES_INITDB_ARGS}
- TZ=${TZ}
- PGTZ=${TZ}
volumes:
- jupyterhub-db-vol:/var/lib/postgresql/data
networks:
jupyterhub-user-net:
aliases:
- jupyterhub-db
profiles:
- jupyter
healthcheck:
test: pg_isready -d ${JUPYTERHUB_DBSCHEMA} -U ${JUPYTERHUB_DBUSER}
interval: 60s
timeout: 30s
retries: 5
start_period: 15s
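  # Helper service whose entrypoint exits immediately; listing it here ensures the notebook image
  # is pulled together with the rest of the stack rather than on first user login.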
jupyternotebook:
image: ${REGISTRY}rapidminer-jupyter_notebook:${JUPYTERHUB_NOTEBOOK_VERSION}
restart: "no"
entrypoint:
- /bin/bash
- -c
- "exit 0"
jupyterhub:
image: "${REGISTRY}rapidminer-jupyterhub-jupyterhub:${JUPYTERHUB_VERSION}"
hostname: jupyterhub
restart: always
environment:
- JUPYTERHUB_VERSION=${JUPYTERHUB_VERSION}
- AIHUB_BACKEND=${AIHUB_BACKEND}
- JUPYTERHUB_DBHOST=${JUPYTERHUB_DBHOST}
- JUPYTERHUB_DBSCHEMA=${JUPYTERHUB_DBSCHEMA}
- JUPYTERHUB_DBUSER=${JUPYTERHUB_DBUSER}
- JUPYTERHUB_DBPASS=${JUPYTERHUB_DBPASS}
- JUPYTERHUB_HOSTNAME=${JUPYTERHUB_HOSTNAME}
- JUPYTERHUB_CRYPT_KEY=${JUPYTERHUB_CRYPT_KEY}
- JUPYTERHUB_DEBUG=${JUPYTERHUB_DEBUG}
- JUPYTERHUB_TOKEN_DEBUG=${JUPYTERHUB_TOKEN_DEBUG}
- JUPYTERHUB_PROXY_DEBUG=${JUPYTERHUB_PROXY_DEBUG}
- JUPYTERHUB_DB_DEBUG=${JUPYTERHUB_DB_DEBUG}
- JUPYTERHUB_SPAWNER_DEBUG=${JUPYTERHUB_SPAWNER_DEBUG}
- JUPYTERHUB_STACK_NAME=${JUPYTERHUB_STACK_NAME}
- PUBLIC_URL=${PUBLIC_URL}
- JUPYTERHUB_URL_SUFFIX=${JUPYTERHUB_URL_SUFFIX}
- SSO_PUBLIC_URL=${SSO_PUBLIC_URL}
- SSO_IDP_REALM=${SSO_IDP_REALM}
- SSO_CLIENT_ID=${JUPYTERHUB_SSO_CLIENT_ID}
- SSO_CLIENT_SECRET=${JUPYTERHUB_SSO_CLIENT_SECRET}
- JUPYTERHUB_SPAWNER=${JUPYTERHUB_SPAWNER}
- JUPYTERHUB_API_PROTOCOL=${JUPYTERHUB_API_PROTOCOL}
- JUPYTERHUB_API_HOSTNAME=${JUPYTERHUB_API_HOSTNAME}
- JUPYTERHUB_PROXY_PORT=${JUPYTERHUB_PROXY_PORT}
- JUPYTERHUB_API_PORT=${JUPYTERHUB_API_PORT}
- JUPYTERHUB_APP_PORT=${JUPYTERHUB_APP_PORT}
# - JUPYTERHUB_CUSTOM_CA_CERTS=${JUPYTERHUB_CUSTOM_CA_CERTS}
- JUPYTERHUB_DOCKER_DISABLE_NOTEBOOK_IMAGE_PULL_AT_STARTUP=${JUPYTERHUB_DOCKER_DISABLE_NOTEBOOK_IMAGE_PULL_AT_STARTUP}
- SSO_USERNAME_KEY=preferred_username
- SSO_RESOURCE_ACCESS_KEY=resource_access
- JUPYTERHUB_DEFAULT_ENV_NAME=aihub-${JUPYTERHUB_VERSION}-python
# Notebook
# kubespawner
# - JUPYTERHUB_NOTEBOOK_KUBERNETES_NAMESPACE=${JUPYTERHUB_NOTEBOOK_KUBERNETES_NAMESPACE}
# - JUPYTERHUB_NOTEBOOK_KUBERNETES_NODE_SELECTOR_NAME=${JUPYTERHUB_NOTEBOOK_KUBERNETES_NODE_SELECTOR_NAME}
# - JUPYTERHUB_NOTEBOOK_KUBERNETES_NODE_SELECTOR_VALUE=${JUPYTERHUB_NOTEBOOK_KUBERNETES_NODE_SELECTOR_VALUE}
# - JUPYTERHUB_NOTEBOOK_HOME_KUBERNETES_STORAGE_ACCESS_MODE=${JUPYTERHUB_NOTEBOOK_HOME_KUBERNETES_STORAGE_ACCESS_MODE}
# - JUPYTERHUB_NOTEBOOK_HOME_KUBERNETES_STORAGE_CAPACITY=${JUPYTERHUB_NOTEBOOK_HOME_KUBERNETES_STORAGE_CAPACITY}
# - JUPYTERHUB_NOTEBOOK_HOME_KUBERNETES_STORAGE_CLASS=${JUPYTERHUB_NOTEBOOK_HOME_KUBERNETES_STORAGE_CLASS}
# - JUPYTERHUB_NOTEBOOK_IMAGE_PULL_SECRET=${JUPYTERHUB_NOTEBOOK_IMAGE_PULL_SECRET}
# - JUPYTERHUB_NOTEBOOK_SHARED_ENV_VOLUME_NAME_KUBESPAWNER=${JUPYTERHUB_NOTEBOOK_SHARED_ENV_VOLUME_NAME_KUBESPAWNER}
# - JUPYTERHUB_NOTEBOOK_SHARED_ENV_VOLUME_SUBPATH_KUBESPAWNER=${JUPYTERHUB_NOTEBOOK_SHARED_ENV_VOLUME_SUBPATH_KUBESPAWNER}
- DOCKER_NOTEBOOK_IMAGE=${REGISTRY}rapidminer-jupyter_notebook:${JUPYTERHUB_NOTEBOOK_VERSION}
- JUPYTERHUB_NOTEBOOK_VERSION=${JUPYTERHUB_NOTEBOOK_VERSION}
- JUPYTERHUB_NOTEBOOK_SSO_NB_UID_KEY=${JUPYTERHUB_NOTEBOOK_SSO_NB_UID_KEY}
- JUPYTERHUB_NOTEBOOK_SSO_NB_GID_KEY=${JUPYTERHUB_NOTEBOOK_SSO_NB_GID_KEY}
- JUPYTERHUB_NOTEBOOK_SSO_CUSTOM_BIND_MOUNTS_KEY=${JUPYTERHUB_NOTEBOOK_SSO_CUSTOM_BIND_MOUNTS_KEY}
- JUPYTERHUB_NOTEBOOK_CUSTOM_BIND_MOUNTS=${JUPYTERHUB_NOTEBOOK_CUSTOM_BIND_MOUNTS}
- JUPYTERHUB_NOTEBOOK_CPU_LIMIT=${JUPYTERHUB_NOTEBOOK_CPU_LIMIT}
- JUPYTERHUB_NOTEBOOK_MEM_LIMIT=${JUPYTERHUB_NOTEBOOK_MEM_LIMIT}
- JUPYTERHUB_NOTEBOOK_SHARED_ENV_VOLUME_NAME_DOCKERSPAWNER=${JUPYTERHUB_NOTEBOOK_SHARED_ENV_VOLUME_NAME_DOCKERSPAWNER}
- TZ=${TZ}
volumes:
- /var/run/docker.sock:/var/run/docker.sock:rw
depends_on:
jupyterhub-db:
condition: service_healthy
networks:
platform-int-net:
aliases:
- jupyterhub
jupyterhub-user-net:
aliases:
- jupyterhub
profiles:
- jupyter
healthcheck:
test: bash -c "exec 6<> /dev/tcp/localhost/8000"
interval: 60s
timeout: 30s
retries: 5
start_period: 15s
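  # Coding environment storage (ces profile): manages the shared conda environments on
  # coding-shared-vol, which the Web API agents and the scoring agent mount read-only.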
coding-environment-storage:
image: "${REGISTRY}rapidminer-coding-environment-storage:${CES_VERSION}"
hostname: coding-environment-storage
restart: always
environment:
- PLATFORM_ADMIN_BACKEND=${PLATFORM_ADMIN_BACKEND}
- PLATFORM_ADMIN_SYNC_DEBUG=False
- DISABLE_DEFAULT_CHANNELS=${DISABLE_DEFAULT_CHANNELS}
- CONDA_CHANNEL_PRIORITY=${CONDA_CHANNEL_PRIORITY}
- TZ=${TZ}
networks:
coding-environment-storage-net:
aliases:
- coding-environment-storage
volumes:
- coding-shared-vol:/opt/coding-shared/
profiles:
- ces
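  # One-shot init container that prepares the Grafana data and provisioning volumes;
  # the grafana service waits for it via the service_completed_successfully condition.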
grafana-init:
image: "${REGISTRY}rapidminer-grafana-init:${GRAFANA_UTILS_VERSION}"
restart: "no"
environment:
- TZ=${TZ}
volumes:
- grafana-home-vol:/var/lib/grafana
- grafana-provisioning:/etc/grafana/provisioning
profiles:
- grafana
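  # Grafana dashboards, based on the official Grafana image and authenticating against Keycloak
  # as a generic OAuth provider. The ${VAR:-default} values can be overridden in the environment file.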
grafana:
image: "${OFFICIAL_GRAFANA_IMAGE}"
hostname: grafana
restart: always
environment:
- GF_SECURITY_ANGULAR_SUPPORT_ENABLED=${GF_SECURITY_ANGULAR_SUPPORT_ENABLED:-true}
- GF_PATHS_DATA=${GF_PATHS_DATA:-/var/lib/grafana/aihub}
- GF_PATHS_PLUGINS=${GF_PATHS_PLUGINS:-/var/lib/grafana/aihub/plugins}
- GF_AUTH_GENERIC_OAUTH_AUTH_URL=${GF_AUTH_GENERIC_OAUTH_AUTH_URL}
- GF_AUTH_GENERIC_OAUTH_TOKEN_URL=${GF_AUTH_GENERIC_OAUTH_TOKEN_URL}
- GF_AUTH_GENERIC_OAUTH_API_URL=${GF_AUTH_GENERIC_OAUTH_API_URL}
- GF_AUTH_GENERIC_OAUTH_CLIENT_SECRET=${GF_AUTH_GENERIC_OAUTH_CLIENT_SECRET}
- GF_AUTH_SIGNOUT_REDIRECT_URL=${GF_AUTH_SIGNOUT_REDIRECT_URL}
- GF_SERVER_ROOT_URL=${GF_SERVER_ROOT_URL}
- GF_USERS_DEFAULT_THEME=${GF_USERS_DEFAULT_THEME:-light}
- GF_PANELS_DISABLE_SANITIZE_HTML=${GF_PANELS_DISABLE_SANITIZE_HTML:-true}
- GF_SERVER_SERVE_FROM_SUB_PATH=${GF_SERVER_SERVE_FROM_SUB_PATH:-true}
- GF_AUTH_DISABLE_LOGIN_FORM=${GF_AUTH_DISABLE_LOGIN_FORM:-true}
- GF_AUTH_OAUTH_AUTO_LOGIN=${GF_AUTH_OAUTH_AUTO_LOGIN:-true}
- GF_AUTH_BASIC_ENABLED=${GF_AUTH_BASIC_ENABLED:-false}
- GF_AUTH_GENERIC_OAUTH_ENABLED=${GF_AUTH_GENERIC_OAUTH_ENABLED:-true}
- GF_AUTH_GENERIC_OAUTH_ALLOW_SIGN_UP=${GF_AUTH_GENERIC_OAUTH_ALLOW_SIGN_UP:-true}
- GF_AUTH_GENERIC_OAUTH_CLIENT_ID=${GF_AUTH_GENERIC_OAUTH_CLIENT_ID:-grafana}
- GF_USERS_EXTERNAL_MANAGE_LINK_NAME=${GF_USERS_EXTERNAL_MANAGE_LINK_NAME:-false}
- GF_AUTH_GENERIC_OAUTH_ROLE_ATTRIBUTE_PATH=${GF_AUTH_GENERIC_OAUTH_ROLE_ATTRIBUTE_PATH:-contains(grafana_roles[*], 'admin') && 'Admin' || contains(grafana_roles[*], 'editor') && 'Editor' || 'Viewer'}
- GF_PLUGINS_ALLOW_LOADING_UNSIGNED_PLUGINS=${GF_PLUGINS_ALLOW_LOADING_UNSIGNED_PLUGINS:-rapidminer-aihub-datasource}
- GF_AUTH_GENERIC_OAUTH_SCOPES=${GF_AUTH_GENERIC_OAUTH_SCOPES:-email}
- TZ=${TZ}
volumes:
- grafana-home-vol:/var/lib/grafana
- grafana-provisioning:/etc/grafana/provisioning
depends_on:
grafana-proxy:
condition: service_started
grafana-init:
condition: service_completed_successfully
networks:
platform-int-net:
aliases:
- grafana
profiles:
- grafana
healthcheck:
test: curl -s localhost:3000/api/health
interval: 60s
timeout: 30s
retries: 5
start_period: 15s
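  # Small proxy that lets Grafana dashboards query the AI Hub backend, the Web API gateway
  # and the scoring agents listed in GRAFANA_SCORING_AGENT_BACKENDS.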
grafana-proxy:
image: "${REGISTRY}rapidminer-grafana-proxy:${GRAFANA_UTILS_VERSION}"
hostname: grafana-proxy
restart: always
environment:
- THREAD_NUMBERS=${GRAFANA_PROXY_THREAD_NUMBERS}
- AIHUB_BACKEND_INTERNAL_URL=${AIHUB_BACKEND_INTERNAL_URL}
      # Comma-separated list of Scoring Agent (RTSA) URLs (http://scoring-agent:8090,https://scoring-agent-2:8888)
- GRAFANA_SCORING_AGENT_BACKENDS=${GRAFANA_SCORING_AGENT_BACKENDS}
- ENPOINTS_GATEWAY_INTERNAL_URL=${REACT_APP_WEBAPI_GATEWAY_URL}
- GRAFANA_PROXY_LOGGING_LEVEL=${GRAFANA_PROXY_LOGGING_LEVEL}
- LOG_RESPONSE_DATA=${GRAFANA_PROXY_LOG_RESPONSE_DATA}
- TZ=${TZ}
depends_on:
aihub-backend:
condition: service_healthy
networks:
platform-int-net:
aliases:
- grafana-proxy
profiles:
- grafana
healthcheck:
test: exec 6<> >(nc localhost 5000)
interval: 60s
timeout: 30s
retries: 5
start_period: 5s
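  # Landing page of the deployment; uploaded content and the list of deployed services
  # are persisted in the volumes below.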
landing-page:
image: "${REGISTRY}rapidminer-deployment-landing-page:${LANDING_PAGE_VERSION}"
restart: always
hostname: landing-page
environment:
- SSO_PUBLIC_URL=${SSO_PUBLIC_URL}
- SSO_IDP_REALM=${SSO_IDP_REALM}
- SSO_CLIENT_ID=${LANDING_PAGE_SSO_CLIENT_ID}
- SSO_CLIENT_SECRET=${LANDING_PAGE_SSO_CLIENT_SECRET}
- DEBUG=${LANDING_PAGE_DEBUG}
- TZ=${TZ}
volumes:
- landing-page-vol:/var/www/html/uploaded/
- deployed-services-vol:/rapidminer/deployed-services/
networks:
platform-int-net:
aliases:
- landing-page
profiles:
- landing-page
healthcheck:
test: service apache2 status
interval: 60s
timeout: 30s
retries: 5
start_period: 5s
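  # Token generator: reuses the landing-page image to serve a "get-token" page that issues
  # long-lived tokens (note the openid offline_access scope).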
token-tool:
image: "${REGISTRY}rapidminer-deployment-landing-page:${LANDING_PAGE_VERSION}"
restart: always
hostname: token-tool
environment:
- PUBLIC_URL=${PUBLIC_URL}
- SSO_PUBLIC_URL=${SSO_PUBLIC_URL}
- SSO_IDP_REALM=${SSO_IDP_REALM}
- SSO_CLIENT_ID=${TOKEN_TOOL_SSO_CLIENT_ID}
- SSO_CLIENT_SECRET=${TOKEN_TOOL_SSO_CLIENT_SECRET}
- DEBUG=${TOKEN_TOOL_DEBUG}
- SSO_CUSTOM_SCOPE=openid offline_access
- CUSTOM_URL_SUFFIX=${TOKEN_TOOL_URL_SUFFIX}
- CUSTOM_CONTENT=get-token
- TZ=${TZ}
volumes:
- token-tool-vol:/var/www/html/uploaded/
networks:
platform-int-net:
aliases:
- token-tool
profiles:
- token-tool
healthcheck:
test: service apache2 status
interval: 60s
timeout: 30s
retries: 5
start_period: 5s
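  # Panopticon visualization server (panopticon profile), served under the /panopticon context
  # path and licensed through Altair Units (see the LICENSE_HWU_* settings).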
panopticon-vizapp:
image: "${REGISTRY}panopticonviz:${PANOPTICON_VIZAPP_VERSION}"
hostname: panopticon-vizapp
#cpuset: ${PANOPTICON_VIZAPP_CPUSET}
restart: always
    # A fixed MAC address is required for Altair One licensing.
    # A random one can be generated using the following command:
    # openssl rand -hex 6 | sed 's/\(..\)\(..\)\(..\)\(..\)\(..\)\(..\)/\1:\2:\3:\4:\5:\6/'
mac_address: ${PANOPTICON_VIZAPP_CONTAINER_MAC_ADDRESS}
environment:
- LOGGER_LEVEL_FILE=INFO # To turn on more verbose logging, change this to "FINE"
- PANOPTICON_DETACHED_LICENSE=${PANOPTICON_DETACHED_LICENSE}
- RM_LICENSE_MODE=${LICENSE_MODE}
- LICENSE_PROXY_MODE=${LICENSE_PROXY_MODE}
- PANO_CONTEXT_PATH=/panopticon
- PANOPTICON_ADMIN_GROUPS=admin
- PANOPTICON_DESIGNER_GROUPS=designer
- PANOPTICON_VIEWER_GROUPS=viewer
- PANOPTICON_DEFAULT_ROLES=VIEWER
- DATASTORE_CONNECTION_PASSWORD=${PANOPTICON_MONETDB_ADMIN_PASS}
- LICENSE_HWU_HOSTED_AUTHORIZATION_PASSWORD=${LICENSE_UNIT_MANAGER_PASSWORD}
- LICENSE_HWU_HOSTED_AUTHORIZATION_TOKEN=${LICENSE_UNIT_MANAGER_AUTH_CODE}
- LICENSE_HWU_HOSTED_AUTHORIZATION_USERNAME=${LICENSE_UNIT_MANAGER_USER_NAME}
- LICENSE_HWU_URI=${ALTAIR_LICENSE_PATH}
- CONNECTOR_PYTHON_MODE=fast_api
- CONNECTOR_PYTHON_PORT=80
- PANOPTICON_SSO_CLIENT_ID=${PANOPTICON_SSO_CLIENT_ID}
- PANOPTICON_SSO_CLIENT_SECRET=${PANOPTICON_SSO_CLIENT_SECRET}
- PUBLIC_URL=${PUBLIC_URL}
- SSO_PUBLIC_URL=${SSO_PUBLIC_URL}
- SSO_IDP_REALM=${SSO_IDP_REALM}
- FILE_UPLOAD_SIZE_MAX_BYTES=${PANOPTICON_FILE_UPLOAD_SIZE_MAX_BYTES}
- LMX_USE_EPOLL=${PANOPTICON_LMX_USE_EPOLL}
- CATALINA_OPTS=${PANOPTICON_CATALINA_OPTS}
- SERVER_ID=panopticon-vizapp
- TZ=${TZ}
volumes:
- panopticon-data-panoviz:/etc/panopticon/appdata
- panopticon-shared-vol:/etc/panopticon/sharedata
- panopticon-license-token-vol:/home/rapidminer/.altair_licensing/
- panopticon-logs-vol:/usr/local/tomcat/logs/
- ./panopticon/vizapp/Panopticon.properties.template:/etc/panopticon/appdata_default/Panopticon.properties.template
- ./panopticon/vizapp/logging.properties:/usr/local/tomcat/conf/logging.properties
networks:
panopticon-net:
aliases:
- panopticon-vizapp
profiles:
- panopticon
depends_on:
proxy:
condition: service_healthy
keycloak:
condition: service_healthy
healthcheck:
test: curl -s localhost:8080/server/rest/server/javascript/config?applicationName=admin
interval: 60s
timeout: 30s
retries: 5
start_period: 30s
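  # Python connector for Panopticon; additional packages can be provided via the mounted
  # requirements.txt and the panopticon-python-ext volume.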
panopticon-vizapp-python:
image: "${REGISTRY}panopticon-pyserve:${PANOPTICON_VIZAPP_PYTHON_VERSION}"
hostname: panopticon-vizapp-python
#cpuset: ${PANOPTICON_PYTHON_CPUSET}
restart: always
environment:
- MY_POD_NAME="panopticon-vizapp-python"
- TZ=${TZ}
volumes:
- ./panopticon/python/requirements.txt:/etc/panopticon/python_default/requirements.txt
- panopticon-python-ext:/etc/panopticon/python_extensions
networks:
panopticon-net:
aliases:
- panopticon-vizapp-python
profiles:
- panopticon
depends_on:
panopticon-vizapp:
condition: service_healthy
# healthcheck:
# test: bash -c "exec 6<> /dev/tcp/localhost/80"
# interval: 60s
# timeout: 30s
# retries: 5
# start_period: 15s
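  # MonetDB instance used as Panopticon's data store (see DATASTORE_CONNECTION_PASSWORD above).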
panopticon-monetdb:
image: "${REGISTRY}panopticon-monetdb:${PANOPTICON_MONETDB_IMAGE_VERSION}"
hostname: panopticon-monetdb
restart: always
environment:
- MDB_DB_ADMIN_PASS=${PANOPTICON_MONETDB_ADMIN_PASS}
- TZ=${TZ}
- PGTZ=${TZ}
volumes:
- panopticon-data-monetdb:/var/monetdb5/dbfarm
networks:
panopticon-net:
aliases:
- panopticon-monetdb
profiles:
- panopticon
healthcheck:
test: bash -c "exec 6<> /dev/tcp/localhost/50000"
interval: 60s
timeout: 30s
retries: 5
start_period: 15s
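  # Rserve connector for Panopticon; the R environment can be customized via the mounted
  # init_R_env_overide.R and the panopticon-rserve-ext volume.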
panopticon-rserve:
image: "${REGISTRY}panopticon-rserve:${PANOPTICON_RSERVE_IMAGE_VERSION}"
hostname: panopticon-rserve
#cpuset: ${PANOPTICON_RSERVE_CPUSET}
restart: always
environment:
- TZ=${TZ}
volumes:
- ./panopticon/rserve/init_R_env_overide.R:/etc/panopticon/rserve_default/init_R_env_overide.R
- panopticon-rserve-ext:/etc/panopticon/rserve_extensions
networks:
panopticon-net:
aliases:
- panopticon-rserve
profiles:
- panopticon
healthcheck:
test: bash -c "exec 6<> /dev/tcp/localhost/6311"
interval: 60s
timeout: 30s
retries: 5
start_period: 15s
depends_on:
panopticon-vizapp:
condition: service_healthy
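# Named volumes keep service state across container restarts and re-creation.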
volumes:
webapi-agent-1:
webapi-agent-2:
panopticon-data-panoviz:
panopticon-shared-vol:
panopticon-license-token-vol:
panopticon-logs-vol:
panopticon-python-ext:
panopticon-data-monetdb:
panopticon-rserve-ext:
license-proxy-vol:
aihub-db-vol:
aihub-home-vol:
job-agent-vol:
job-agent-huggingface-vol:
platform-admin-uploaded-vol:
scoring-agent-vol:
jupyterhub-db-vol:
grafana-home-vol:
grafana-provisioning:
keycloak-db-vol:
keycloak-kcadm-vol:
landing-page-vol:
token-tool-vol:
deployed-services-vol:
coding-shared-vol:
name: ${JUPYTERHUB_NOTEBOOK_SHARED_ENV_VOLUME_NAME_DOCKERSPAWNER}
activemq-artemis-vol:
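# Internal networks separating database, Panopticon, coding-environment and JupyterHub traffic
# from the rest of the platform services.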
networks:
panopticon-net:
platform-int-net:
idp-db-net:
aihub-db-net:
coding-environment-storage-net:
jupyterhub-user-net:
name: jupyterhub-user-net-${JUPYTERHUB_STACK_NAME}