Self Hosting

Our product is distributed as a set of Docker images that can be deployed in your infrastructure.

Docker Registry Access

Contact our support team to receive access credentials to our Docker registry (harbor.integration.app).
You'll receive a username in the format robot$core+<your-company-name> and a password.

```shell
docker login harbor.integration.app
```

Image Versioning

Images are tagged with :latest and date-based immutable tags (e.g., 2025-05-16).
For production deployments, we recommend using the immutable tags: harbor.integration.app/core/api:2025-05-16.
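As a sketch, pinning a production pull to an immutable tag might look like this (the tag value is the example from above; substitute the current date-based tag):

```shell
# Pin a date-based immutable tag rather than :latest (tag value is an example).
IMAGE="harbor.integration.app/core/api:2025-05-16"

# Pull only when Docker is available (requires a prior `docker login`).
if command -v docker >/dev/null 2>&1; then
  docker pull "$IMAGE"
else
  echo "docker not found; would pull $IMAGE"
fi
```

Pinning the tag makes rollbacks and reproducible deploys straightforward, since `:latest` can move between deployments.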

Infrastructure Requirements

Integration.app requires:

  • S3-compatible storage
  • MongoDB server
  • Redis server
  • Auth0 for authentication (free tier sufficient)

Auth0 Configuration

When configuring your Auth0 application:

  • Application Type: Single-page Application
  • Allowed Callback URLs: Base URL of your console service
  • Allowed Web Origins: Base URL of your console service

Core Services

Integration.app consists of four essential services:

API Service

The primary engine API that stores and executes integrations.

Docker Image: harbor.integration.app/core/api

The API service operates in four distinct modes, each activated by specific environment variables:


1. API Mode

Main backend service handling incoming traffic and processing HTTP requests.

Mode-specific Environment Variables:

| Variable | Example | Description |
|---|---|---|
| IS_API | 1 | Enables API mode |
| HEADERS_TIMEOUT_MS | 61000 | Maximum time to receive request headers |
| KEEPALIVE_TIMEOUT_MS | 61000 | Maximum time to keep idle connections alive |
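A minimal sketch of launching the container in API mode. The `run_api` helper is hypothetical, the values are placeholders, and a real deployment must also supply all of the common environment variables listed later on this page:

```shell
# Image tag is an example; pin to the current immutable tag.
IMAGE="harbor.integration.app/core/api:2025-05-16"

run_api() {
  # Mode-specific variables for API mode, plus a couple of placeholders;
  # the full set of common variables (MONGO_URI, REDIS_URI, ...) is required too.
  docker run -d \
    -e IS_API=1 \
    -e HEADERS_TIMEOUT_MS=61000 \
    -e KEEPALIVE_TIMEOUT_MS=61000 \
    -e NODE_ENV=production \
    -e BASE_URI="https://api.yourdomain.com" \
    -p 5000:5000 \
    "$IMAGE"
}

# Only attempt the run when Docker is present.
command -v docker >/dev/null 2>&1 && run_api || echo "docker not found; skipping"
```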

2. Instant Tasks Worker Mode

Designed for executing semi-instant asynchronous tasks. This mode should be scaled to prevent task queuing. Each worker processes one background job at a time.

Mode-specific Environment Variables:

| Variable | Example | Description |
|---|---|---|
| IS_INSTANT_TASKS_WORKER | 1 | Enables instant tasks worker mode |

3. Queued Tasks Worker Mode

Handles long-running tasks such as flow runs, event pulls, and external events. Each worker processes one background job at a time. When limits are enabled, tasks are queued and executed to ensure fair resource distribution among customers.

Mode-specific Environment Variables:

| Variable | Example | Description |
|---|---|---|
| IS_QUEUED_TASKS_WORKER | 1 | Enables queued tasks worker mode |
| MAX_QUEUED_TASKS_MEMORY_MB | 1024 | Memory limit for task execution (default: 1024) |
| MAX_QUEUED_TASKS_PROCESS_TIME_SECONDS | 3000 | Time limit for task execution (default: 3000) |

4. Orchestrator Mode

Manages schedule triggers, handles data source syncs, and performs cleanup tasks.

Mode-specific Environment Variables:

| Variable | Example | Description |
|---|---|---|
| IS_ORCHESTRATOR | 1 | Enables orchestrator mode |

All services scale horizontally without additional configuration (aside from load balancing for API services).


Common Environment Variables

The following variables are required for all operation modes:

| Variable | Description | Example |
|---|---|---|
| NODE_ENV | Environment mode | production |
| BASE_URI | Service deployment URL | https://api.yourdomain.com |
| CUSTOM_CODE_RUNNER_URI | Custom Code Runner service URL | https://custom-code-runner.yourdomain.com |
| AUTH0_DOMAIN | Auth0 domain for authentication | login.integration.yourdomain.com |
| AUTH0_CLIENT_ID | Auth0 client ID | clientId |
| AUTH0_CLIENT_SECRET | Auth0 client secret | clientSecret |
| TMP_S3_BUCKET | Temporary storage bucket (an auto-expiration policy is recommended) | integration-app-tmp |
| CONNECTORS_S3_BUCKET | Connectors storage bucket | integration-app-connectors |
| STATIC_S3_BUCKET | Static files storage bucket | integration-app-static |
| BASE_STATIC_URI | Static content base URL (files uploaded to the static bucket should be served from this URL) | https://static.integration.app |
| REDIS_URI or REDIS_CLUSTER_URI_X | Redis server URL. For a Redis cluster, provide multiple: REDIS_CLUSTER_URI_1, REDIS_CLUSTER_URI_2, ... | redis://user:password@<host>:6379/ |
| SECRET | JWT token signing secret | s3cr3tString |
| ENCRYPTION_SECRET | Credentials encryption secret | v3rys3cr3tstring |
| MONGO_URI | MongoDB connection string | mongodb+srv://login:password@<host>/integration-api |
| PORT | Container listening port (default: 5000) | 5000 |
| AWS_REGION | S3 region | eu-central-1 |
| AWS_ACCESS_KEY_ID | S3 access key | |
| AWS_SECRET_ACCESS_KEY | S3 secret key | |
| ENABLE_LIMITS | Optional: enable workspace resource limits | true |
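One convenient way to wire these up is an env file passed to the container via `docker run --env-file`. A partial sketch with placeholder values, covering only a subset of the variables above:

```shell
# Write a partial env file for the API container (placeholder values only).
cat > api.env <<'EOF'
NODE_ENV=production
BASE_URI=https://api.yourdomain.com
AUTH0_DOMAIN=login.integration.yourdomain.com
SECRET=s3cr3tString
ENCRYPTION_SECRET=v3rys3cr3tstring
MONGO_URI=mongodb+srv://login:password@mongodb.yourdomain.com/integration-api
REDIS_URI=redis://user:password@redis.yourdomain.com:6379/
PORT=5000
EOF

# Then, for example:
#   docker run --env-file api.env -e IS_API=1 harbor.integration.app/core/api:<tag>
```

Keeping secrets in an env file (or a secret manager) avoids baking them into images or shell history.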

UI Service

Provides pre-built integration user interfaces.

Docker Image: harbor.integration.app/core/ui

Environment Variables:

| Variable | Description | Example |
|---|---|---|
| NEXT_PUBLIC_ENGINE_URI | API service URL | https://api.yourdomain.com |
| PORT | Container listening port | 5000 |

Console Service

Administration interface for managing integrations.

Docker Image: harbor.integration.app/core/console

Environment Variables:

| Variable | Description | Example |
|---|---|---|
| NEXT_PUBLIC_BASE_URI | Console access URL | https://console.integration.yourdomain.com |
| NEXT_PUBLIC_AUTH0_DOMAIN | Auth0 domain | login.integration.yourdomain.com |
| NEXT_PUBLIC_ENGINE_API_URI | API service URL | https://api.integrations.yourdomain.com |
| NEXT_PUBLIC_ENGINE_UI_URI | UI service URL | https://ui.integrations.yourdomain.com |
| NEXT_PUBLIC_AUTH0_CLIENT_ID | Auth0 client ID | clientId |
| PORT | Container listening port | 5000 |
| NEXT_PUBLIC_ENABLE_LIMITS | Optional: enable limits management UI | true |

Custom Code Runner

Provides an isolated environment for executing custom code in connectors or middleware. This service should only be accessible internally from other services.

Docker Image: harbor.integration.app/core/custom-code-runner

Note: On AMD64 (x86_64) architecture (not ARM), set the CUSTOM_CODE_MEMORY_LIMIT environment variable on the API service to at least 21474836480 (20 GB) to ensure sufficient virtual memory for WebAssembly. A physical memory allocation of 2 GB is typically sufficient.

Scaling Recommendations

Backend services emit custom metrics that help determine scaling conditions. The following table outlines these metrics:

| Metric | Emitted by | Endpoint | Description |
|---|---|---|---|
| instant_tasks_jobs_active | api | /prometheus | Number of jobs currently processed by instant-tasks-workers |
| instant_tasks_jobs_waiting | api | /prometheus | Number of jobs waiting for processing by instant-tasks-workers |
| custom_code_runner_total_job_spaces | custom-code-runner | /api/v2/prometheus | Total number of job spaces supported by a custom-code-runner pod |
| custom_code_runner_remaining_job_spaces | custom-code-runner | /api/v2/prometheus | Available number of job spaces in a custom-code-runner pod |
| queued_tasks_workers | api | /prometheus | Current number of running queued-tasks-workers pods |
| queued_tasks_workers_required | api | /prometheus | Maximum number of workers required to process all tasks from the queued-tasks queue, respecting workspace/connection limits |
| queued_tasks_worker_busy | queued-tasks-worker | /prometheus/queued-tasks | Indicates whether a worker is processing a task from the queued-tasks queue: reports 1 if busy and 0 if free |
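As a sketch, assuming the API and Custom Code Runner services are reachable at the placeholder hostnames used elsewhere on this page, the metrics above can be scraped like this (`fetch_metric` is a hypothetical helper):

```shell
# Metrics endpoint paths from the table above; hostnames are placeholders.
API_METRICS="https://api.yourdomain.com/prometheus"
CCR_METRICS="https://custom-code-runner.yourdomain.com/api/v2/prometheus"

fetch_metric() {
  # Print the value of a single unlabeled metric from a Prometheus text endpoint.
  curl -fsS "$1" | awk -v m="$2" '$1 == m { print $2 }'
}

# Example (requires network access to your deployment):
#   fetch_metric "$API_METRICS" instant_tasks_jobs_waiting
```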

All services scale horizontally. The following table outlines the recommended scaling approach for each service under production workloads:

| Container | Scaling Approach | Recommended Values |
|---|---|---|
| Console | Fixed number of instances to ensure availability and zero-downtime updates. | Instances: 2 |
| UI | Fixed number of instances to ensure availability and zero-downtime updates. | Instances: 2 |
| Orchestrator | Fixed number of instances to ensure availability and zero-downtime updates. | Instances: 2 |
| API | Dynamic scaling based on resource usage. Monitor memory and CPU usage to determine scaling needs. | CPU usage threshold: 50%; Min instances: 2 |
| Instant Tasks Worker | Dynamic scaling based on the number of active and waiting tasks, using a modifier to adjust the scaling sensitivity. Pseudocode: `(instant_tasks_jobs_active + instant_tasks_jobs_waiting) / modifier` | Modifier: 5; Min instances: 2 |
| Queued Tasks Worker | Dynamic scaling based on the number of required pods. Pseudocode: `queued_tasks_workers_required + (queued_tasks_workers_required_max * queued_tasks_workers_required)`. More advanced scaling solutions use an imperative scaling strategy and calculate the workers' busy rate. | Modifier: 0.3; Min instances: 2 |
| Custom Code Runner | Dynamic scaling based on job-space availability. Monitor the number of total and remaining job spaces and autoscale to maintain a target utilization rate. Pseudocode: `(custom_code_runner_total_job_spaces - custom_code_runner_remaining_job_spaces) / custom_code_runner_total_job_spaces` | Threshold: 0.45; Min instances: 2 |
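The instant-tasks formula can be exercised with plain shell arithmetic. A sketch using the recommended modifier of 5 and a floor of 2 instances; `desired_instant_workers` is a hypothetical helper and the sample metric values are made up:

```shell
# Desired instant-tasks-worker replicas: (active + waiting) / modifier,
# never dropping below the recommended minimum of 2 instances.
desired_instant_workers() {
  active=$1; waiting=$2; modifier=$3
  n=$(( (active + waiting) / modifier ))
  [ "$n" -lt 2 ] && n=2
  echo "$n"
}

desired_instant_workers 3 4 5    # (3+4)/5 = 1 -> clamped to the minimum of 2
desired_instant_workers 40 20 5  # (40+20)/5 = 12
```

An autoscaler (e.g. a Kubernetes HPA on the custom metrics) would apply the same calculation continuously rather than as a one-off script.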


Connector Management

Automated Deployment

Use the Membrane CLI to migrate connectors from cloud environments to your self-hosted environment.

Manual Deployment

Upload connector .zip archives through the Console UI via Integrations > Apps > Upload Connector.

Troubleshooting

For enhanced debugging output, add DEBUG_ALL=1 to any container's environment variables.


Resource Limiting

To enable resource limits by workspace and customer:

  1. Add ENABLE_LIMITS=1 to the API container
  2. Add NEXT_PUBLIC_ENABLE_LIMITS=1 to the Console container

This enables workspace managers to set customer-level limits and platform administrators to configure workspace-level resource restrictions.

FAQ and Advanced Configuration

Resource Requirements

  • Minimum Requirements: 500 millicores (0.5 CPU) and 2 GB of memory per container are sufficient for most deployments.

Data Persistence and Backups

  • MongoDB: Requires regular backups
  • S3 Storage: All buckets should be backed up regularly
  • Redis: Used only as a cache; can be safely rebooted or erased

Health Monitoring

  • HTTP Health Checks: The root endpoint of each service (e.g., https://api.yourdomain.com/) serves as a health check endpoint
  • Worker Health Checks: Workers and custom code runners also expose an HTTP server at their root endpoint for health monitoring
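A liveness probe can therefore be a plain HTTP GET against each service root. A sketch (`healthy` is a hypothetical helper; hostnames are placeholders):

```shell
# Return 0 if the service root answers with a successful HTTP status.
healthy() {
  curl -fsS -o /dev/null --max-time 5 "$1"
}

# Example (requires network access to your deployment):
#   healthy "https://api.yourdomain.com/" && echo "api ok"
```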

Security

  • We monitor containers daily with the following SLAs:
    • Critical issues: 1 business day
    • High severity issues: 3 business days
    • Other issues: 2 weeks

Logging and Error Handling

  • Log Format: Services log to stdout/stderr in plain text