Shared Database with Automatic Replication

Deploy a persistent, replicated database across Flux's decentralized network with automatic clustering and failover

Why Use Flux Shared DB?

Automatic Replication

Data syncs across all instances with master-replica architecture

High Availability

Automatic failover and cluster self-healing

Zero Configuration

Nodes discover each other and auto-cluster via FluxOS API

What You'll Learn

    Deploy MySQL with Flux Shared DB operator
    Configure automatic database replication
    Understand master-replica architecture
    Connect applications to the shared database

Before You Start

  • ✓ Completed the Multi-Component Applications tutorial
  • ✓ Understanding of databases (MySQL, PostgreSQL, or MongoDB)
  • ✓ Familiar with environment variables and Docker networking
  • ✓ Have FLUX tokens to cover the deployment cost (the exact amount is shown during registration)
Step 1: Understanding Shared DB Architecture

Flux Shared DB uses a distributed operator pattern with three key interfaces:

DB Interface (Port 3307)

Acts as a database proxy - your application connects here instead of directly to the database

Internal API

Handles inter-node communication for clustering and replication

UI API

Provides cluster management and monitoring capabilities

How It Works:

  1. Operator nodes discover each other using the FluxOS API
  2. Nodes automatically form a cluster with master-replica architecture
  3. Read queries → routed directly to the local database instance (fast!)
  4. Write queries → funneled to the master node, which timestamps and sequences them
  5. Master forwards writes to all replica nodes for synchronization
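The routing rule above can be sketched in a few lines. This is an illustrative model, not the actual `runonflux/shared-db` source: `isWriteQuery` and `routeQuery` are hypothetical helpers showing how reads stay local while writes go to the master.

```javascript
// Statements that mutate state and must be funneled to the master.
const WRITE_PREFIXES = ['INSERT', 'UPDATE', 'DELETE', 'REPLACE',
  'CREATE', 'ALTER', 'DROP', 'TRUNCATE'];

// Classify a SQL statement by its leading keyword.
function isWriteQuery(sql) {
  const keyword = sql.trim().split(/\s+/)[0].toUpperCase();
  return WRITE_PREFIXES.includes(keyword);
}

// Route a query: writes target the master, reads the local replica.
function routeQuery(sql, cluster) {
  return isWriteQuery(sql) ? cluster.master : cluster.localReplica;
}
```

For example, with `{ master: '10.0.0.1:3307', localReplica: '127.0.0.1:3307' }`, a `SELECT` resolves to the local replica while an `UPDATE` resolves to the master.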
Step 2: Create the Shared DB Deployment Specification

Create a deployment spec with three components: MySQL database, Shared DB operator, and your application.

shared-db-spec.json
{
  "version": 8,
  "name": "myshareddb",
  "description": "Application with shared replicated database",
  "owner": "YOUR_ZELID_HERE",
  "instances": 3,
  "staticip": false,
  "enterprise": "",
  "compose": [
    {
      "name": "mysql",
      "description": "MySQL database engine",
      "repotag": "mysql:8.0",
      "ports": [3306],
      "containerPorts": [3306],
      "domains": [""],
      "environmentParameters": [
        "MYSQL_ROOT_PASSWORD=myRootPassword123",
        "MYSQL_DATABASE=myapp"
      ],
      "commands": [],
      "containerData": "s:/var/lib/mysql",
      "cpu": 1.0,
      "ram": 2000,
      "hdd": 10,
      "tiered": false
    },
    {
      "name": "operator",
      "description": "Flux Shared DB operator",
      "repotag": "runonflux/shared-db:latest",
      "ports": [3307, 8080],
      "containerPorts": [3307, 8080],
      "domains": [""],
      "environmentParameters": [
        "DB_COMPONENT_NAME=fluxmysql_myshareddb",
        "DB_APPNAME=myshareddb",
        "CLIENT_APPNAME=myapp",
        "DB_INIT_PASS=myRootPassword123",
        "INIT_DB_NAME=myapp",
        "DB_USER=root",
        "DB_PORT=3307",
        "API_PORT=8080"
      ],
      "commands": [],
      "containerData": "s:/app/dumps",
      "cpu": 1.0,
      "ram": 2000,
      "hdd": 10,
      "tiered": false
    },
    {
      "name": "app",
      "description": "Application using shared database",
      "repotag": "yourusername/yourapp:latest",
      "ports": [3000],
      "containerPorts": [3000],
      "domains": [""],
      "environmentParameters": [
        "DB_HOST=fluxoperator_myshareddb",
        "DB_PORT=3307",
        "DB_NAME=myapp",
        "DB_USER=root",
        "DB_PASSWORD=myRootPassword123"
      ],
      "commands": [],
      "containerData": "/appdata",
      "cpu": 2.0,
      "ram": 4000,
      "hdd": 20,
      "tiered": false
    }
  ]
}

Critical Configuration Notes:

  • DB_COMPONENT_NAME: Must match Flux internal DNS pattern flux{componentname}_{appname}
  • DB_APPNAME: Must match your Flux app name exactly
  • CLIENT_APPNAME: Identifies which app can access this database
  • containerData: Use s:/ prefix for Syncthing replication
  • Instances: Use 3+ for high availability (one master, two+ replicas)
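Because the `flux{componentname}_{appname}` naming rule trips up many first deployments, it can help to derive those hostnames from the spec instead of typing them by hand. A minimal sketch (the helper names are illustrative, not part of any Flux tooling):

```javascript
// Build a Flux internal DNS hostname: flux{componentname}_{appname}.
function fluxHostname(componentName, appName) {
  return `flux${componentName}_${appName}`;
}

// Derive the operator's hostname-related env vars from the spec,
// so DB_COMPONENT_NAME can never drift out of sync with the app name.
function operatorEnvFor(spec) {
  return {
    DB_COMPONENT_NAME: fluxHostname('mysql', spec.name),
    DB_APPNAME: spec.name,
  };
}
```

Running `operatorEnvFor({ name: 'myshareddb' })` yields `DB_COMPONENT_NAME=fluxmysql_myshareddb`, matching the spec above.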
Step 3: Configure Environment Variables

Understanding the operator's environment variables is crucial for proper setup:

  • DB_COMPONENT_NAME: MySQL component hostname on the Flux network
  • DB_APPNAME: Your Flux application name
  • CLIENT_APPNAME: Client application identifier
  • DB_INIT_PASS: Root database password (must match the MySQL password)
  • INIT_DB_NAME: Initial database to create on first run
  • DB_USER: Authentication username (default: root)
  • DB_PORT: External DB proxy port (default: 3307)
  • API_PORT: External API port for the management UI
  • WHITELIST: Comma-separated IP whitelist for security (optional)
  • authMasterOnly: Restrict authentication to the master node only (optional)
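A misspelled or missing variable usually surfaces only as a failed cluster later, so validating the environment up front is worthwhile. A minimal sketch, assuming the required/default split described above (`loadConfig` is an illustrative helper, not part of the operator):

```javascript
// Variables the operator cannot run without.
const REQUIRED = ['DB_COMPONENT_NAME', 'DB_APPNAME', 'DB_INIT_PASS', 'INIT_DB_NAME'];

// Documented defaults applied when a variable is unset.
const DEFAULTS = { DB_USER: 'root', DB_PORT: '3307' };

// Fail fast on missing required values; fill in defaults otherwise.
function loadConfig(env) {
  const missing = REQUIRED.filter((name) => !env[name]);
  if (missing.length > 0) {
    throw new Error(`Missing required variables: ${missing.join(', ')}`);
  }
  return { ...DEFAULTS, ...env };
}
```

Calling `loadConfig(process.env)` at startup turns a silent misconfiguration into an immediate, readable error.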
Step 4: Deploy to the Flux Network

Deploy your shared database application to Flux:

  1. Go to home.runonflux.io and log in
  2. Navigate to Applications → Management → Register New App
  3. Paste your JSON spec and review the configuration
  4. Review the total cost for all three components
  5. Deploy and wait 10-15 minutes for cluster formation

Initial Cluster Formation

The first deployment may take 10-15 minutes as the operator nodes discover each other via FluxOS API and elect a master. Monitor the operator logs for "Cluster formed" messages.
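Rather than watching logs by hand, you can poll the operator's cluster endpoint until a master is elected. A sketch, assuming the `/cluster` response shape shown in the monitoring step; `waitForCluster` is a hypothetical helper, and `getStatus` is injected so the logic can run without a live cluster:

```javascript
// Poll until the cluster reports a healthy state with an elected master.
// getStatus: async function returning the parsed /cluster response,
// e.g. () => fetch(url).then((r) => r.json()).
async function waitForCluster(getStatus, { retries = 30, delayMs = 30000 } = {}) {
  for (let i = 0; i < retries; i++) {
    try {
      const status = await getStatus();
      if (status.status === 'healthy' && status.master) {
        return status; // Cluster formed; safe to send queries.
      }
    } catch (err) {
      // Operator not reachable yet; keep waiting.
    }
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
  throw new Error('Cluster did not form in time');
}
```

With the defaults above this waits up to 15 minutes, matching the expected formation window.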

Step 5: Connect Your Application to Shared DB

Connect your application to the database through the operator proxy:

app/db.js
const mysql = require('mysql2/promise');

// Connect to Flux Shared DB operator, NOT directly to MySQL
const pool = mysql.createPool({
  host: process.env.DB_HOST,        // fluxoperator_myshareddb
  port: process.env.DB_PORT || 3307, // Operator proxy port
  database: process.env.DB_NAME,
  user: process.env.DB_USER,
  password: process.env.DB_PASSWORD,
  waitForConnections: true,
  connectionLimit: 10,
  queueLimit: 0
});

// Test connection
async function testConnection() {
  try {
    const connection = await pool.getConnection();
    console.log('✓ Connected to Flux Shared DB');

    // This query will be routed to local replica (fast!)
    const [rows] = await connection.query('SELECT 1 + 1 AS result');
    console.log('✓ Query test successful:', rows);

    connection.release();
  } catch (error) {
    console.error('✗ Database connection failed:', error);
  }
}

// Write operation (routed to master, then replicated)
async function createUser(name, email) {
  const [result] = await pool.execute(
    'INSERT INTO users (name, email) VALUES (?, ?)',
    [name, email]
  );
  return result.insertId;
}

// Read operation (routed to local replica)
async function getUsers() {
  const [rows] = await pool.query('SELECT * FROM users');
  return rows;
}

module.exports = { testConnection, createUser, getUsers };

Key Points:

  • Always connect to the operator component, never to MySQL directly
  • Use port 3307 (operator proxy), not 3306 (MySQL)
  • Read queries are fast: served from the local replica
  • Write queries are coordinated through the master for consistency
  • Connection pooling is recommended for best performance
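During a master election, in-flight writes can fail with transient connection errors even though the cluster recovers seconds later. A retry wrapper absorbs that window. This is a sketch under the assumption that failover surfaces as standard connection error codes; `withRetry` is an illustrative helper, not part of mysql2 or the operator:

```javascript
// Error codes that typically indicate a transient failover, not a bug.
const TRANSIENT = ['ECONNREFUSED', 'ECONNRESET', 'PROTOCOL_CONNECTION_LOST'];

// Retry an async operation with linear backoff on transient errors;
// rethrow anything else (e.g. SQL syntax errors) immediately.
async function withRetry(operation, { retries = 5, delayMs = 2000 } = {}) {
  let lastError;
  for (let i = 0; i < retries; i++) {
    try {
      return await operation();
    } catch (err) {
      if (!TRANSIENT.includes(err.code)) throw err;
      lastError = err;
      await new Promise((resolve) => setTimeout(resolve, delayMs * (i + 1)));
    }
  }
  throw lastError;
}
```

Usage with the pool above: `await withRetry(() => createUser('Ada', 'ada@example.com'))`.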
Step 6: Monitor Cluster Health

Use the operator's management API to monitor cluster status:

Check cluster status (bash)
# Get cluster information
curl https://myshareddb-operator-abc123.app.runonflux.io:8080/cluster

Example response, showing the master node and all replicas:
{
  "master": "192.168.1.10:3307",
  "replicas": [
    "192.168.1.11:3307",
    "192.168.1.12:3307"
  ],
  "status": "healthy"
}

Cluster Operations:

  • Master Election: Automatic when master node fails
  • Node Discovery: New instances automatically join cluster
  • Data Sync: Replicas catch up automatically after downtime
  • Failover: Client connections redirect to new master seamlessly

Monitoring Best Practices:

  • Monitor operator logs for replication lag warnings
  • Check the cluster status API regularly for node health
  • Watch for master election events in logs
  • Monitor database size to ensure adequate storage

Best Practices

🔒 Security

  • Use strong database passwords (min 16 characters)
  • Consider WHITELIST to restrict access by IP
  • Enable authMasterOnly for stricter authentication
  • Never expose MySQL port 3306 directly - only use the operator proxy
  • Use Enterprise mode if storing sensitive data

⚡ Performance

  • Deploy at least 3 instances for optimal read distribution
  • Use connection pooling in your application
  • Batch write operations when possible to reduce master load
  • Monitor replication lag - scale if consistently high
  • Allocate sufficient RAM for MySQL query caching
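Batching writes matters more here than on a single-node database, because every statement is sequenced through the master. One common pattern is collapsing many inserts into a single multi-row statement; a sketch (`buildBatchInsert` is an illustrative helper, and the resulting `sql`/`params` pair is intended for mysql2's `pool.execute`):

```javascript
// Build one multi-row INSERT from N rows, so the master coordinates
// a single write instead of N separate ones.
// columns: ['name', 'email'], rows: [['Ada', 'a@x.com'], ['Bob', 'b@x.com']]
function buildBatchInsert(table, columns, rows) {
  const rowPlaceholder = `(${columns.map(() => '?').join(', ')})`;
  const sql = `INSERT INTO ${table} (${columns.join(', ')}) VALUES ` +
    rows.map(() => rowPlaceholder).join(', ');
  const params = rows.flat(); // One flat parameter list for all rows.
  return { sql, params };
}
```

Note that `table` and `columns` are interpolated into the SQL string, so they must come from your own code, never from user input; only the values go through placeholders.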

💾 Data Management

  • Always use the s:/ volume prefix for Syncthing replication
  • Allocate 2-3x your expected data size for growth
  • Implement a regular backup strategy outside of Flux replication
  • Test restore procedures periodically
  • Monitor disk usage across all instances

🔧 Deployment

  • Test with 1 instance first, then scale to 3+
  • Verify all environment variables before deployment
  • Wait for cluster formation before sending queries
  • Use semantic versioning for operator image updates
  • Keep operator and MySQL versions in sync across instances

Troubleshooting

Cluster won't form / nodes can't discover each other

Check that:

  • DB_APPNAME exactly matches your Flux app name
  • All instances are in the "Running" state on home.runonflux.io
  • The FluxOS API is accessible from the operator containers
  • You have waited 10-15 minutes for initial cluster formation

Application can't connect to database

Verify:

  • You are connecting to the operator hostname: fluxoperator_{appname}
  • You are using port 3307, not 3306
  • DB_INIT_PASS matches the MySQL root password
  • The database name exists (check INIT_DB_NAME)

Replication lag is high

Increase operator CPU/RAM allocation, or reduce write frequency. Consider scaling to more instances to distribute read load.

Data not persisting after restart

Ensure MySQL containerData uses s:/var/lib/mysql and operator uses s:/app/dumps. Check storage allocation is sufficient.

Current Limitations & Roadmap

Supported:

  • ✓ MySQL 5.7 and 8.0 (full support)
  • ✓ Master-replica replication
  • ✓ Automatic cluster formation
  • ✓ Read/write query routing

Coming Soon:

  • ○ PostgreSQL support
  • ○ MongoDB support
  • ○ Enhanced date/time function handling
  • ○ Multi-master configurations