Server & Container Environments
Deploying Cleo in a server or containerized environment is an ideal choice for scaling, long-running tasks, or production-level use cases. This guide walks through setting up Cleo on a server, whether on a dedicated machine, a virtual private server (VPS), or within a Docker container.
Prerequisites
Before deploying Cleo in a server or containerized environment, ensure the following:
A Linux-based environment (Ubuntu, CentOS, or another Unix-like system)
SSH access to the server (or cloud instance)
A working Python 3.10+ installation (a virtual environment is recommended)
A Docker environment (if opting for containerization)
Optional but recommended:
Supervisor for process management
Nginx or Traefik for reverse proxy and load balancing
SSL certificates for securing agent APIs
Server Setup
1. SSH Access
If you're deploying to a remote server, ensure you have SSH access:
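For example, assuming key-based authentication is already configured (replace the user and host with your own values):

```bash
ssh user@your-server-ip
```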
2. Clone Cleo on the Server
Just like local deployment:
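A typical clone looks like the following; the repository URL shown here is a placeholder, so substitute the actual Cleo repository:

```bash
git clone https://github.com/your-org/cleo.git
cd cleo
```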
3. Create Virtual Environment
For a clean setup:
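A minimal venv setup, assuming `python3` points at Python 3.10+:

```bash
python3 -m venv venv
source venv/bin/activate
```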
Or use a pre-configured tool like Poetry for dependency management:
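If the project ships a pyproject.toml (an assumption here), Poetry can resolve and install everything in one step:

```bash
pip install poetry
poetry install
```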
4. Install Dependencies
Install required Python libraries:
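Assuming the repository provides a requirements.txt at its root, a standard install looks like:

```bash
pip install -r requirements.txt
```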
Ensure additional dependencies, such as PyTorch or FAISS, are installed if your server configuration requires them.
Containerized Deployment (Docker)
Running Cleo in Docker offers an isolated, portable, and reproducible environment that can be easily deployed across multiple systems. This method also simplifies scaling Cleo to handle more agents or concurrent tasks.
1. Create a Dockerfile
To containerize Cleo, create a Dockerfile in the root of the project:
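A minimal sketch of such a Dockerfile might look like the following; the main.py entrypoint and the requirements.txt file are assumptions, so adjust them to match the actual project layout:

```dockerfile
# Lightweight Python base image
FROM python:3.10-slim

WORKDIR /app

# Install dependencies first to take advantage of layer caching
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the rest of the project
COPY . .

# Entrypoint is an assumption; point this at Cleo's actual launch script
CMD ["python", "main.py"]
```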
2. Build the Docker Image
Run the following command to build the image:
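For example, tagging the image as cleo-agent (the tag name is arbitrary):

```bash
docker build -t cleo-agent .
```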
3. Run the Container
To run Cleo in a container:
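A typical invocation runs the container detached; the container name and the .env file are assumptions based on a common setup:

```bash
docker run -d --name cleo-agent --env-file .env cleo-agent
```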
This will start the agent in the background. Use the following to check logs:
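Again assuming the container was named cleo-agent:

```bash
docker logs -f cleo-agent
```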
Running in Cloud Environments
If you're deploying Cleo to a cloud provider (AWS, Azure, GCP), you’ll typically set up a virtual machine (VM) or a container service (like Kubernetes). Here's how to do it:
1. AWS EC2 Example
Launch an EC2 instance with Ubuntu (or any Linux distro).
SSH into the instance:
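For example, with an Ubuntu AMI and the key pair downloaded at launch (the key file name and address are placeholders):

```bash
ssh -i your-key.pem ubuntu@your-ec2-public-ip
```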
Follow the server setup instructions above to install Python, clone the repository, and configure the environment.
2. Kubernetes Example
You can deploy Cleo on Kubernetes to manage multiple agents across a cluster. This involves:
Creating a Deployment YAML file for Cleo.
Creating a Service to expose Cleo externally or internally.
Managing scaling via Horizontal Pod Autoscaling if you need to deploy multiple agent instances.
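A minimal sketch of a combined Deployment and Service manifest is shown below; the image location, container port, and replica count are assumptions and should be adapted to your registry and Cleo's actual listening port:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cleo-agent
spec:
  replicas: 2
  selector:
    matchLabels:
      app: cleo-agent
  template:
    metadata:
      labels:
        app: cleo-agent
    spec:
      containers:
        - name: cleo-agent
          image: your-registry/cleo-agent:latest  # assumed image location
          ports:
            - containerPort: 8000                 # assumed agent port
---
apiVersion: v1
kind: Service
metadata:
  name: cleo-agent
spec:
  selector:
    app: cleo-agent
  ports:
    - port: 80
      targetPort: 8000
```

Apply it with kubectl apply -f, then attach a HorizontalPodAutoscaler if you need automatic scaling of agent replicas.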
Persistent Agent Loops
In production systems, you likely want Cleo to run as a persistent background service. You can manage this using systemd or supervisor.
Using Supervisor
Install supervisor on the server:
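On Debian/Ubuntu systems, for example:

```bash
sudo apt-get update
sudo apt-get install -y supervisor
```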
Create a cleo-agent.conf file under /etc/supervisor/conf.d/:
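A minimal sketch of the config; the project path, virtual environment, and entrypoint are assumptions, so point them at your actual install locations:

```ini
[program:cleo-agent]
; Paths and entrypoint below are assumptions; adjust to your setup
command=/home/ubuntu/cleo/venv/bin/python main.py
directory=/home/ubuntu/cleo
autostart=true
autorestart=true
stdout_logfile=/var/log/cleo-agent.out.log
stderr_logfile=/var/log/cleo-agent.err.log
```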
Update supervisor:
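Assuming the program name cleo-agent from the config above:

```bash
sudo supervisorctl reread
sudo supervisorctl update
sudo supervisorctl status cleo-agent
```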
Configuring for High Availability
For high availability (HA), consider the following:
Load Balancing: Use Nginx or Traefik as a reverse proxy to distribute incoming requests across multiple Cleo agent instances (see the sketch after this list).
Multiple Instances: Deploy multiple Cleo instances across different machines or containers to manage load.
Database Storage: Consider integrating a more robust database backend like PostgreSQL or MongoDB to handle large-scale memory storage for multiple agents.
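As a sketch of the load-balancing idea, an Nginx site config could distribute traffic across two local agent instances; the ports and server name are assumptions:

```nginx
upstream cleo_agents {
    # Assumed ports for two locally running Cleo instances
    server 127.0.0.1:8001;
    server 127.0.0.1:8002;
}

server {
    listen 80;
    server_name cleo.example.com;  # placeholder domain

    location / {
        proxy_pass http://cleo_agents;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```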
Summary
Server and container-based deployments offer scalability, persistence, and production readiness for Cleo. Once set up on your infrastructure, you can manage agent workloads, control updates, and scale your agents based on task complexity.