You are viewing the RapidMiner Hub documentation for version 2024.0.
Tune Memory
When managing larger deployments with multiple concurrent users and/or large workloads, resource allocation needs to be considered to avoid disruptions in daily operations.
This page describes how to fine-tune the resources used by various components of your Altair AI Hub, and the typical scenarios in which you need to do so.
To avoid resource starvation, double-check that the total amount of memory configured or needed for the platform does not exceed the physical memory limits of the host machine.
Tuning Single Machine Deployments
Adjusting memory settings for Job Agents
By default, Job Agents are configured to spawn 2 Job Containers, each using a maximum of 2 GB of memory.
If you wish to increase the number of parallel process executions, you should either scale up the number of Job Agents, or configure a higher number of Job Containers in your Job Agent.
If you are running processes which need a large amount of memory, you need to increase the amount of memory allocated for a Job Container.
Both settings are environment variables of the Job Agent container: JOBAGENT_CONTAINER_COUNT controls the number of Job Containers, and JOBAGENT_CONTAINER_MEMORYLIMIT controls the maximum amount of memory each Job Container can use.
To change these settings, either use Docker Deployment Manager, or connect with a terminal to the host machine where the deployment is running, edit the .env file, and then run docker-compose up -d to apply your changes.
This action will restart your Job Agent, so make sure it doesn't affect any critical operation.
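As a sketch, assuming a Docker Compose deployment that reads the Job Agent settings from the .env file: the variable names below come from this page, while the values are illustrative examples only (check your deployment's documentation for the expected memory unit).

```
# .env — example values, not recommendations
JOBAGENT_CONTAINER_COUNT=4            # run up to 4 processes in parallel
JOBAGENT_CONTAINER_MEMORYLIMIT=4096   # memory per Job Container (verify the expected unit)
```

Then apply the change from the deployment directory with `docker-compose up -d`, keeping in mind that this restarts the Job Agent.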
Provisioning Job Agents with varying shapes and sizes
By default, all Job Agents connect to the same queue, named DEFAULT. A typical use case is to have a separate queue for large jobs, where a Job Agent with large Job Container memory limits is listening for jobs to run.
To do this, you need to "clone" the definition of the Job Agent service in your platform definition file. Take the rm-server-job-agent-svc definition and duplicate it, then change the JOBAGENT_CONTAINER_COUNT, JOB_QUEUE, and JOBAGENT_CONTAINER_MEMORYLIMIT environment variables as needed.
Again, either use Docker Deployment Manager, or connect with a terminal to the host machine where the deployment is running, edit the .env and docker-compose.yml files, and then run docker-compose up -d to apply your changes.
Afterwards you can scale this new Job Agent "flavor" as needed, separately from the other flavors.
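For illustration, a cloned Job Agent service in docker-compose.yml might look like the fragment below. The service name rm-server-job-agent-svc and the variable names JOBAGENT_CONTAINER_COUNT, JOB_QUEUE, and JOBAGENT_CONTAINER_MEMORYLIMIT come from this page; the new service name, queue name, and values are placeholders for your actual configuration.

```
# docker-compose.yml (fragment) — hypothetical clone of the default Job Agent
rm-server-job-agent-large-svc:
  # copy the remaining settings (image, networks, volumes, ...)
  # from your existing rm-server-job-agent-svc definition
  environment:
    JOB_QUEUE: LARGE                        # placeholder queue name
    JOBAGENT_CONTAINER_COUNT: 1             # fewer, larger containers
    JOBAGENT_CONTAINER_MEMORYLIMIT: 16384   # example value; verify the expected unit
```

You could then scale this flavor independently, for example with `docker-compose up -d --scale rm-server-job-agent-large-svc=2`.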
Limiting CPU and memory for JupyterHub users
When using the JupyterHub that is shipped with Altair AI Hub, each user is configured to be limited to a single notebook container, where they can run their notebook kernels. To ensure a reasonable resource allocation, we have implemented a default maximum for used CPU cores and used memory that each notebook container can leverage.
To change the resource limits for notebook containers, change the relevant Docker environment variables, either manually in your docker-compose.yml file or by using Docker Deployment Manager. Make sure to restart the JupyterHub backend container after changing these settings.
The set memory limits will be applied to all users’ notebook containers.
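As a rough sketch only: this page does not name the specific variables, so both the service name and the variable names below are hypothetical placeholders. Locate the actual names in the JupyterHub section of your shipped docker-compose.yml before editing.

```
# docker-compose.yml (fragment) — all names and values are placeholders;
# the real variable names depend on your AI Hub version
jupyterhub-backend:
  environment:
    NOTEBOOK_CPU_LIMIT: "2"       # hypothetical: max CPU cores per notebook container
    NOTEBOOK_MEMORY_LIMIT: "4G"   # hypothetical: max memory per notebook container
```

After editing, restart the JupyterHub backend container (for example with `docker-compose up -d`) so the new limits apply to newly spawned notebook containers.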