# Self Hosting
Our product is distributed as a set of Docker images that can be deployed in your infrastructure.
## Docker Registry Access

Contact our support team to receive access credentials to our Docker registry (`harbor.integration.app`). You'll receive a username in the format `robot$core+<your-company-name>` and a password.

```shell
docker login harbor.integration.app
```
## Image Versioning

Images are tagged with `:latest` and with date-based immutable tags (e.g., `2025-05-16`). For production deployments, we recommend using the immutable tags, e.g. `harbor.integration.app/core/api:2025-05-16`.
## Infrastructure Requirements

Integration.app requires:
- S3-compatible storage
- MongoDB server
- Redis server
- Auth0 for authentication (free tier sufficient)
### Auth0 Configuration
When configuring your Auth0 application:
- Application Type: Single-page Application
- Allowed Callback URLs: Base URL of your console service
- Allowed Web Origins: Base URL of your console service
## Core Services
Integration.app consists of four essential services:
### API Service

The primary engine API that stores and executes integrations.

Docker image: `harbor.integration.app/core/api`

The API service operates in four distinct modes, each activated by specific environment variables.
#### 1. API Mode

Main backend service handling incoming traffic and processing HTTP requests.

Mode-specific environment variables:

| Variable | Example | Description |
|---|---|---|
| `IS_API` | `1` | Enables API mode |
| `HEADERS_TIMEOUT_MS` | `61000` | Maximum time (ms) to receive request headers |
| `KEEPALIVE_TIMEOUT_MS` | `61000` | Maximum time (ms) to keep idle connections alive |
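As a hedged sketch of how an API-mode container might be launched (every hostname, secret, and credential below is a placeholder, not a default; the common variables required by all modes are covered later in this section):

```shell
# Illustrative only: run the engine API in API mode.
# All values (domains, secrets, connection strings) are placeholders.
docker run -d --name integration-api \
  -p 5000:5000 \
  -e NODE_ENV=production \
  -e IS_API=1 \
  -e HEADERS_TIMEOUT_MS=61000 \
  -e KEEPALIVE_TIMEOUT_MS=61000 \
  -e BASE_URI=https://api.yourdomain.com \
  -e MONGO_URI='mongodb+srv://login:password@mongo.yourdomain.com/integration-api' \
  -e REDIS_URI='redis://user:password@redis.yourdomain.com:6379/' \
  -e SECRET=generate-a-random-string \
  -e ENCRYPTION_SECRET=generate-another-random-string \
  harbor.integration.app/core/api:2025-05-16
```

The other modes follow the same pattern, swapping `IS_API=1` for the relevant mode flag (e.g. `IS_ORCHESTRATOR=1`).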
#### 2. Instant Tasks Worker Mode

Designed for executing semi-instant asynchronous tasks. This mode should be scaled to prevent task queuing. Each worker processes one background job at a time.

Mode-specific environment variables:

| Variable | Example | Description |
|---|---|---|
| `IS_INSTANT_TASKS_WORKER` | `1` | Enables instant tasks worker mode |
#### 3. Queued Tasks Worker Mode

Handles long-running tasks such as flow runs, event pulls, and external events. Each worker processes one background job at a time. When limits are enabled, tasks are queued and executed to ensure fair resource distribution among customers.

Mode-specific environment variables:

| Variable | Example | Description |
|---|---|---|
| `IS_QUEUED_TASKS_WORKER` | `1` | Enables queued tasks worker mode |
| `MAX_QUEUED_TASKS_MEMORY_MB` | `1024` | Memory limit (MB) for task execution (default: `1024`) |
| `MAX_QUEUED_TASKS_PROCESS_TIME_SECONDS` | `3000` | Time limit (seconds) for task execution (default: `3000`) |
#### 4. Orchestrator Mode

Manages schedule triggers, handles data source syncs, and performs cleanup tasks.

Mode-specific environment variables:

| Variable | Example | Description |
|---|---|---|
| `IS_ORCHESTRATOR` | `1` | Enables orchestrator mode |
All services scale horizontally without additional configuration (aside from load balancing for API services).
#### Common Environment Variables

The following variables are required for all operation modes:

| Variable | Description | Example |
|---|---|---|
| `NODE_ENV` | Environment mode | `production` |
| `BASE_URI` | Service deployment URL | `https://api.yourdomain.com` |
| `CUSTOM_CODE_RUNNER_URI` | Custom Code Runner service URL | `https://custom-code-runner.yourdomain.com` |
| `AUTH0_DOMAIN` | Auth0 domain for authentication | `login.integration.yourdomain.com` |
| `AUTH0_CLIENT_ID` | Auth0 client ID | `clientId` |
| `AUTH0_CLIENT_SECRET` | Auth0 client secret | `clientSecret` |
| `TMP_S3_BUCKET` | Temporary storage bucket (an auto-expiration policy is recommended) | `integration-app-tmp` |
| `CONNECTORS_S3_BUCKET` | Connectors storage bucket | `integration-app-connectors` |
| `STATIC_S3_BUCKET` | Static files storage bucket | `integration-app-static` |
| `BASE_STATIC_URI` | Static content base URL (files uploaded to the static bucket should be served from this URL) | `https://static.integration.app` |
| `REDIS_URI` or `REDIS_CLUSTER_URI_X` | Redis server URL. For a Redis cluster, provide multiple variables: `REDIS_CLUSTER_URI_1`, `REDIS_CLUSTER_URI_2`, ... | `redis://user:password@redis.yourdomain.com:6379/` |
| `SECRET` | JWT token signing secret | `s3cr3tString` |
| `ENCRYPTION_SECRET` | Credentials encryption secret | `v3rys3cr3tstring` |
| `MONGO_URI` | MongoDB connection string | `mongodb+srv://login:password@mongo.yourdomain.com/integration-api` |
| `PORT` | Container listening port (default: `5000`) | `5000` |
| `AWS_REGION` | S3 region | `eu-central-1` |
| `AWS_ACCESS_KEY_ID` | S3 access key | |
| `AWS_SECRET_ACCESS_KEY` | S3 secret key | |
| `ENABLE_LIMITS` | Optional: enable workspace resource limits | `true` |
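Since every mode needs the same set of variables, one convenient approach (a sketch; every value below is a placeholder) is to keep them in a shared env file and pass it to each container with Docker's `--env-file` option:

```shell
# common.env -- shared variables for all API-mode containers.
# Every value below is a placeholder; supply your own endpoints and secrets.
NODE_ENV=production
BASE_URI=https://api.yourdomain.com
CUSTOM_CODE_RUNNER_URI=https://custom-code-runner.yourdomain.com
AUTH0_DOMAIN=login.integration.yourdomain.com
AUTH0_CLIENT_ID=clientId
AUTH0_CLIENT_SECRET=clientSecret
TMP_S3_BUCKET=integration-app-tmp
CONNECTORS_S3_BUCKET=integration-app-connectors
STATIC_S3_BUCKET=integration-app-static
BASE_STATIC_URI=https://static.yourdomain.com
REDIS_URI=redis://user:password@redis.yourdomain.com:6379/
SECRET=generate-a-random-string
ENCRYPTION_SECRET=generate-another-random-string
MONGO_URI=mongodb+srv://login:password@mongo.yourdomain.com/integration-api
PORT=5000
AWS_REGION=eu-central-1
AWS_ACCESS_KEY_ID=yourAccessKey
AWS_SECRET_ACCESS_KEY=yourSecretKey
```

Each container then only needs its mode flag set individually, e.g. `docker run --env-file common.env -e IS_API=1 ...`.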
### UI Service

Provides pre-built integration user interfaces.

Docker image: `harbor.integration.app/core/ui`

Environment variables:

| Variable | Description | Example |
|---|---|---|
| `NEXT_PUBLIC_ENGINE_URI` | API service URL | `https://api.yourdomain.com` |
| `PORT` | Container listening port | `5000` |
### Console Service

Administration interface for managing integrations.

Docker image: `harbor.integration.app/core/console`

Environment variables:

| Variable | Description | Example |
|---|---|---|
| `NEXT_PUBLIC_BASE_URI` | Console access URL | `https://console.integration.yourdomain.com` |
| `NEXT_PUBLIC_AUTH0_DOMAIN` | Auth0 domain | `login.integration.yourdomain.com` |
| `NEXT_PUBLIC_ENGINE_API_URI` | API service URL | `https://api.integrations.yourdomain.com` |
| `NEXT_PUBLIC_ENGINE_UI_URI` | UI service URL | `https://ui.integrations.yourdomain.com` |
| `NEXT_PUBLIC_AUTH0_CLIENT_ID` | Auth0 client ID | `clientId` |
| `PORT` | Container listening port | `5000` |
| `NEXT_PUBLIC_ENABLE_LIMITS` | Optional: enable limits management UI | `true` |
### Custom Code Runner

Provides an isolated environment for executing custom code in connectors or middleware. This service should only be accessible internally from other services.

Docker image: `harbor.integration.app/core/custom-code-runner`

Note: On AMD64 (x86-64) architecture (not ARM), set the `CUSTOM_CODE_MEMORY_LIMIT` environment variable for the API service to at least `21474836480` (20 GB) to ensure sufficient virtual memory for WebAssembly. A physical memory allocation of 2 GB is typically sufficient.
## Scaling Recommendations

Backend services emit custom metrics that help determine scaling conditions. The following table outlines these metrics:

| Metric | Emitted by | Endpoint | Description |
|---|---|---|---|
| `instant_tasks_jobs_active` | api | `/prometheus` | Number of jobs currently being processed by instant-tasks-workers |
| `instant_tasks_jobs_waiting` | api | `/prometheus` | Number of jobs waiting to be processed by instant-tasks-workers |
| `custom_code_runner_total_job_spaces` | custom-code-runner | `/api/v2/prometheus` | Total number of job spaces supported by a custom-code-runner pod |
| `custom_code_runner_remaining_job_spaces` | custom-code-runner | `/api/v2/prometheus` | Available number of job spaces in a custom-code-runner pod |
| `queued_tasks_workers` | api | `/prometheus` | Current number of running queued-tasks-worker pods |
| `queued_tasks_workers_required` | api | `/prometheus` | Maximum number of workers required to process all tasks from the queued-tasks queue, respecting workspace/connection limits |
| `queued_tasks_worker_busy` | queued-tasks-worker | `/prometheus/queued-tasks` | Whether a worker is processing a task from the queued-tasks queue: reports `1` if busy, `0` if free |
All services scale horizontally. The following table outlines the scaling recommendations for each service for production workloads:

| Container | Scaling Approach | Recommended Values |
|---|---|---|
| Console | Fixed number of instances to ensure availability and zero-downtime updates. | Instances: 2 |
| UI | Fixed number of instances to ensure availability and zero-downtime updates. | Instances: 2 |
| Orchestrator | Fixed number of instances to ensure availability and zero-downtime updates. | Instances: 2 |
| API | Dynamic scaling based on resource usage. Monitor memory and CPU usage to determine scaling needs. | CPU usage threshold: 50%; Min instances: 2 |
| Instant Tasks Worker | Dynamic scaling based on the number of active and waiting tasks, using a modifier to adjust scaling sensitivity. | Modifier: 5 |
| Queued Tasks Worker | Dynamic scaling based on the number of required pods. More advanced scaling solutions use an imperative scaling strategy and calculate the workers' busy rate. | Modifier: 0.3 |
| Custom Code Runner | Dynamic scaling based on job space availability. Monitor the number of total and remaining job spaces and autoscale to maintain a target ratio. | Threshold: 0.45 |
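As a rough sketch of the kind of calculation these dynamic-scaling rules involve (our interpretation of the described approach, not the vendor's exact formulas), the worker scalers might look like:

```shell
# Sketch of the dynamic-scaling rules above. The formulas are our
# interpretation of the described approach, not an official algorithm.

# Instant Tasks Worker: scale on active + waiting jobs, divided by a
# sensitivity modifier (e.g. 5), rounded up.
instant_workers_desired() {              # $1=active $2=waiting $3=modifier
  echo $(( ($1 + $2 + $3 - 1) / $3 ))    # ceiling division
}

# Queued Tasks Worker: follow queued_tasks_workers_required, scaled by a
# fractional modifier (e.g. 0.3), rounded up.
queued_workers_desired() {               # $1=required $2=modifier
  awk -v r="$1" -v m="$2" \
    'BEGIN { d = r * m; if (d > int(d)) d = int(d) + 1; print int(d) }'
}

# Custom Code Runner: add a pod when the remaining/total job-space ratio
# drops below a threshold (e.g. 0.45). Returns success when scale-up is due.
code_runner_scale_up() {                 # $1=remaining $2=total $3=threshold
  awk -v r="$1" -v t="$2" -v th="$3" 'BEGIN { exit !(r / t < th) }'
}
```

For example, with 7 active and 4 waiting instant jobs and a modifier of 5, `instant_workers_desired 7 4 5` yields 3 workers.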
## Connector Management

### Automated Deployment

Use Membrane CLI to migrate connectors from cloud to self-hosted environments.

### Manual Deployment

Upload connector `.zip` archives through the Console UI via Integrations > Apps > Upload Connector.
## Troubleshooting

For enhanced debugging output, add `DEBUG_ALL=1` to any container's environment variables.
## Helm
## Cloud-specific Guides

- GCP Self-Hosting: Guide for deploying Integration.app on Google Cloud Platform.
- AWS Self-Hosting: Guide for deploying Integration.app on AWS.
## Resource Limiting

To enable resource limits by workspace and customer:

- Add `ENABLE_LIMITS=1` to the API container
- Add `NEXT_PUBLIC_ENABLE_LIMITS=1` to the Console container
This enables workspace managers to set customer-level limits and platform administrators to configure workspace-level resource restrictions.
## FAQ and Advanced Configuration

### Resource Requirements

- Minimum requirements: 500 millicores (0.5 CPU) and 2 GB of memory per container are sufficient for most deployments.
### Data Persistence and Backups
- MongoDB: Requires regular backups
- S3 Storage: All buckets should be backed up regularly
- Redis: Used only as a cache; can be safely rebooted or erased
### Health Monitoring

- HTTP health checks: The root endpoint of each service (e.g., `https://api.yourdomain.com/`) serves as a health check endpoint
- Worker health checks: Workers and custom code runners also expose an HTTP server at their root endpoint for health monitoring
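For example, an orchestrator's liveness probe (an illustrative sketch; the in-cluster hostname and port are assumptions for your deployment) can simply request the root endpoint and treat any 2xx response as healthy:

```shell
# Illustrative liveness probe: a 2xx from the root endpoint means healthy.
# "api.internal" and port 5000 are placeholders for your deployment.
curl --fail --silent --max-time 5 http://api.internal:5000/ > /dev/null \
  && echo healthy \
  || echo unhealthy
```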
### Security

- We monitor containers daily with the following SLAs:
  - Critical issues: 1 business day
  - High severity issues: 3 business days
  - Other issues: 2 weeks
### Logging and Error Handling

- Log format: Services log to stdout/stderr in plain text