Features

Everything you need for production data pipelines

From development to production, Interlace provides a complete toolkit for building, testing, and monitoring data transformations.

Unified @model

One decorator for Python and SQL models

Polyglot

Python via ibis + native SQL support

Orchestration

Built-in scheduling and DAG execution

Lineage

Column-level data lineage tracking

Change Detection

Smart change detection and skip logic

Multi-backend

DuckDB, PostgreSQL, and more via Ibis

Environments

Virtual environments with shared sources

Scheduling

Cron, intervals, and API triggers

Reliable

Retry policies and dead-letter queues

Python and SQL, unified

Write models in Python with ibis or plain SQL. Mix and match freely — they share the same dependency graph, materialization, and execution engine.

Python: models/enriched_orders.py
@model("enriched_orders", materialise="table")
def enriched_orders(orders: ibis.Table, users: ibis.Table) -> ibis.Table:
    return orders.join(users, orders.user_id == users.id).select(
        orders.id, orders.amount, users.name, users.email
    )
SQL: models/daily_revenue.sql
-- @materialise: table
SELECT
    date_trunc('day', created_at) AS day,
    SUM(amount) AS revenue,
    COUNT(*) AS order_count
FROM enriched_orders
GROUP BY 1

Write Python, execute SQL

Interlace uses ibis for all data transformations. Write expressive Python that compiles to optimized SQL and runs in your database — not in Python memory.

Python (ibis), compiled to SQL at runtime
@model("active_users", materialise="table")
def active_users(users: ibis.Table, events: ibis.Table) -> ibis.Table:
    recent = events.filter(events.timestamp > ibis.now() - ibis.interval(days=30))
    return (
        users.join(recent, users.id == recent.user_id)
        .group_by(users.id, users.name, users.email)
        .agg(event_count=recent.id.count())
        .filter(lambda t: t.event_count >= 5)
    )

Built-in pipeline orchestration

No external scheduler needed. Interlace builds your DAG automatically, executes in parallel, and tracks state across runs.

Automatic DAG resolution

Dependencies are inferred from function parameters and SQL references. No manual wiring.
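
The Python side of this idea can be sketched with the standard library alone: a model's upstream dependencies are simply the names of its function parameters. Interlace's actual registry is more involved; this is a minimal illustration, not its implementation:

```python
import inspect

def upstream_models(fn):
    """Infer a model's dependencies from its parameter names,
    mirroring how a @model decorator could wire up the DAG."""
    return list(inspect.signature(fn).parameters)

# The enriched_orders model from above (type annotations omitted):
def enriched_orders(orders, users):
    ...

print(upstream_models(enriched_orders))  # ['orders', 'users']
```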

Parallel execution

Independent models run concurrently. Downstream models wait only for their direct dependencies.

Change detection

Smart change detection skips models whose inputs have not changed, reducing unnecessary work.
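
One common way to implement this kind of skip logic (a generic sketch, not necessarily Interlace's exact mechanism) is to fingerprint a model's inputs and compare against the previous run:

```python
import hashlib

def fingerprint(*inputs: bytes) -> str:
    """Stable digest over a model's inputs (data snapshots, code, config)."""
    h = hashlib.sha256()
    for chunk in inputs:
        h.update(chunk)
    return h.hexdigest()

last_run = {}  # model name -> fingerprint from the previous run

def should_run(model: str, *inputs: bytes) -> bool:
    """Run the model only when its inputs differ from last time."""
    fp = fingerprint(*inputs)
    changed = last_run.get(model) != fp
    last_run[model] = fp
    return changed

print(should_run("daily_revenue", b"rows-v1"))  # True: first run
print(should_run("daily_revenue", b"rows-v1"))  # False: unchanged, skipped
print(should_run("daily_revenue", b"rows-v2"))  # True: input changed
```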

See everything, miss nothing

Built-in lineage tracking, real-time monitoring, and execution history give you full visibility into your data pipelines.

Column-level lineage

Track data flow from source to destination at the column level. Understand the impact of changes before you make them.

Real-time monitoring

Watch pipeline execution in real-time via the web UI. See model status, timing, and row counts as they happen.

Run history

Full execution history with timing, row counts, and error details. Debug issues quickly with complete context.

Connect to your data

Read from files, APIs, and databases. Write to any ibis-supported backend. Interlace handles the plumbing.

Database backends

Native support, with more available via ibis

DuckDB, PostgreSQL

File formats

Read and write natively via ibis

CSV, Parquet, JSON

Data sources

Ingest from any source with Python models

REST APIs, SFTP, Databases

Virtual environments, real isolation

Develop locally with DuckDB, deploy to production with PostgreSQL. Share source data across environments without re-fetching.

Shared source layer

Fetch source data once in production, share it across dev and staging with zero re-fetching.

Environment isolation

Each environment has its own transformations and outputs while safely reading shared sources.

Config overlays

Override connections, schedules, and settings per environment with simple YAML overlays.
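
As an illustration, a per-environment overlay could look like this (the file path and keys here are hypothetical, not Interlace's documented schema):

```yaml
# environments/production.yaml (hypothetical overlay, layered over a base config)
connection:
  backend: postgresql          # a dev overlay would point at duckdb instead
  host: db.internal
schedules:
  daily_revenue: "0 6 * * *"   # run at 06:00 every day
```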

One codebase, any backend

Write your models once. Interlace compiles them to the right SQL dialect for each backend via ibis — DuckDB for development, PostgreSQL for production.

DuckDB

Local development, embedded analytics

PostgreSQL

Production workloads, transactional data

Snowflake

Cloud data warehouse (via ibis)

BigQuery

Google Cloud analytics (via ibis)

MySQL

Web application databases (via ibis)

SQLite

Lightweight local storage (via ibis)

Run on your schedule

Built-in scheduling, API triggers, and incremental execution. Keep your data fresh without the overhead.

Cron scheduling

Run pipelines on a schedule with built-in cron support. No external scheduler required.

API triggers

Trigger pipeline runs via the REST API. Integrate with webhooks and CI/CD pipelines.

Incremental execution

Process only new or changed data with built-in incremental strategies and state tracking.
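
A typical incremental strategy, shown here as a generic sketch rather than Interlace's actual API, tracks a high-water mark per model and processes only rows beyond it:

```python
state = {"daily_revenue": 0}  # persisted high-water mark per model (e.g. max id)

def incremental_run(model: str, rows: list[dict]) -> list[dict]:
    """Process only rows past the stored watermark, then advance it."""
    watermark = state[model]
    new_rows = [r for r in rows if r["id"] > watermark]
    if new_rows:
        state[model] = max(r["id"] for r in new_rows)
    return new_rows

batch = [{"id": 1}, {"id": 2}, {"id": 3}]
print(len(incremental_run("daily_revenue", batch)))  # 3: all rows are new
print(len(incremental_run("daily_revenue", batch)))  # 0: nothing new, skipped
```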

How Interlace compares

An honest look at how Interlace fits alongside the tools you already know. Every product has strengths — here's where each one shines and where Interlace takes a different approach.

Ready to simplify your data pipelines?

Get started with Interlace in minutes. Install, define your first model, and run.