Now in public beta

The unified data pipeline framework

Define, orchestrate, and monitor data transformations with a single @model abstraction. Python and SQL, one framework.

$ pipx install interlace

or pip install interlace / uv add interlace in a project

How Interlace compares

An honest look at how Interlace fits alongside the tools you already know. Every product has strengths — here's where each one shines and where Interlace takes a different approach.

Built-in pipeline orchestration

No external scheduler needed. Interlace builds your DAG automatically, executes in parallel, and tracks state across runs.

Automatic DAG resolution

Dependencies are inferred from function parameters and SQL references. No manual wiring.
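The mechanism can be illustrated in plain Python: a model's dependencies can be read directly off its function signature. This is a conceptual sketch of parameter-based inference, not Interlace's actual internals:

```python
import inspect

def infer_dependencies(fn):
    """Return the upstream model names a function depends on,
    read from its parameter names."""
    return list(inspect.signature(fn).parameters)

# Toy models: enriched_orders declares its inputs as parameters.
def orders():
    ...

def users():
    ...

def enriched_orders(orders, users):
    ...

print(infer_dependencies(enriched_orders))  # ['orders', 'users']
```

Because the wiring lives in the signature itself, renaming a parameter is enough to repoint a model at a different upstream table.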

Parallel execution

Independent models run concurrently. Downstream models wait only for their direct dependencies.
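The scheduling idea can be sketched with the standard library: launch every model whose dependencies are done, and let independent models run at the same time. A conceptual sketch only, not Interlace's scheduler:

```python
from concurrent.futures import ThreadPoolExecutor

# Toy DAG: a and b are independent; c waits only on its
# direct dependencies a and b.
deps = {"a": [], "b": [], "c": ["a", "b"]}

def run_model(name):
    return f"{name} done"

def execute(deps):
    results, futures = {}, {}
    remaining = dict(deps)
    with ThreadPoolExecutor() as pool:
        while remaining:
            # Launch every model whose dependencies have all finished.
            for m, d in list(remaining.items()):
                if m not in futures and all(x in results for x in d):
                    futures[m] = pool.submit(run_model, m)
            # Collect finished work before re-checking readiness.
            for m in list(remaining):
                if m in futures:
                    results[m] = futures[m].result()
                    del remaining[m]
    return results

execute(deps)
```

Here a and b are submitted to the pool together, while c is only submitted once both of its results exist.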

Change detection

Smart change detection skips models whose inputs have not changed, reducing unnecessary work.
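One common way to implement this kind of skipping is to fingerprint a model's inputs and compare against the previous run. A hedged sketch of the idea (Interlace's real change detection may differ):

```python
import hashlib
import json

def fingerprint(inputs):
    """Stable hash of a model's inputs; if it matches the
    previous run's hash, the model can be skipped."""
    blob = json.dumps(inputs, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

# State remembered from the last run (illustrative).
last_run = {"daily_revenue": fingerprint({"source": "orders", "rows": 100})}

def should_run(model, inputs):
    return last_run.get(model) != fingerprint(inputs)

print(should_run("daily_revenue", {"source": "orders", "rows": 100}))  # False
print(should_run("daily_revenue", {"source": "orders", "rows": 120}))  # True
```

Unchanged inputs produce the same hash, so the model is skipped; any change in the inputs produces a new hash and triggers a re-run.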

See everything, miss nothing

Built-in lineage tracking, real-time monitoring, and execution history give you full visibility into your data pipelines.

Column-level lineage

Track data flow from source to destination at the column level. Understand the impact of changes before you make them.

Real-time monitoring

Watch pipeline execution in real-time via the web UI. See model status, timing, and row counts as they happen.

Run history

Full execution history with timing, row counts, and error details. Debug issues quickly with complete context.


Python and SQL, unified

Write models in Python with ibis or plain SQL. Mix and match freely — they share the same dependency graph, materialization, and execution engine.

Python: models/enriched_orders.py

import ibis
from interlace import model  # import path assumed; see the Interlace docs

@model("enriched_orders", materialise="table")
def enriched_orders(orders: ibis.Table, users: ibis.Table) -> ibis.Table:
    return orders.join(users, orders.user_id == users.id).select(
        orders.id, orders.amount, users.name, users.email
    )
SQL: models/daily_revenue.sql
-- @materialise: table
SELECT
    date_trunc('day', created_at) AS day,
    SUM(amount) AS revenue,
    COUNT(*) AS order_count
FROM enriched_orders
GROUP BY 1

Connect to your data

Read from files, APIs, and databases. Write to any ibis-supported backend. Interlace handles the plumbing.

Database backends

Native support with more via ibis

DuckDB · PostgreSQL

File formats

Read and write natively via ibis

CSV · Parquet · JSON

Data sources

Ingest from any source with Python models

REST APIs · SFTP · Databases
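A source is just a Python model that returns records. The sketch below parses a JSON payload held in a string; in a real pipeline that payload would come from an HTTP client, an SFTP download, or a database query (names here are illustrative, not an Interlace API):

```python
import json

# Stand-in for an API response; in practice this would come from
# an HTTP client such as requests or httpx.
payload = '[{"id": 1, "amount": 9.5}, {"id": 2, "amount": 12.0}]'

def raw_orders():
    """Source model: ingest records from an external system."""
    return json.loads(payload)

rows = raw_orders()
```

Downstream models can then depend on raw_orders like any other table in the graph.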

Run on your schedule

Built-in scheduling, API triggers, and incremental execution. Keep your data fresh without the overhead.

Cron scheduling

Run pipelines on a schedule with built-in cron support. No external scheduler required.
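Where exactly the schedule is declared is framework-specific, but the expressions themselves follow the standard five-field cron syntax (minute, hour, day of month, month, day of week):

```
0 6 * * *      # every day at 06:00
*/15 * * * *   # every 15 minutes
0 0 * * 1      # Mondays at midnight
```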

API triggers

Trigger pipeline runs via the REST API. Integrate with webhooks and CI/CD pipelines.

Incremental execution

Process only new or changed data with built-in incremental strategies and state tracking.
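A common incremental strategy is a high-water mark: remember the largest key processed so far and handle only rows beyond it. A conceptual sketch of the idea, not Interlace's actual state API:

```python
# Persisted between runs; here just a dict for illustration.
state = {"orders_watermark": 0}

def run_incremental(rows):
    """Process only rows past the stored high-water mark,
    then advance the mark."""
    new = [r for r in rows if r["id"] > state["orders_watermark"]]
    if new:
        state["orders_watermark"] = max(r["id"] for r in new)
    return new

rows = [{"id": 1}, {"id": 2}, {"id": 3}]
first = run_incremental(rows)   # all three rows are new
second = run_incremental(rows)  # nothing past the mark, so empty
```

On the first run everything is new; on the second, the same input produces no work, which is exactly the behaviour that keeps scheduled runs cheap.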

Ready to simplify your data pipelines?

Get started with Interlace in minutes. Install, define your first model, and run.