
Airflow to Prefect: Why Modern Teams Choose Prefect
When LiveEO migrated to Prefect, they cut AWS costs by 63%. Endpoint reduced infrastructure costs by 73% and launched 78 new pipelines in their first quarter. Cash App transformed their ML operations while maintaining strict security. These teams aren't outliers - they represent what's possible when you modernize your data orchestration pipelines. While migration might seem daunting, the reality is far simpler than most teams expect.
Having worked with teams transitioning their orchestration layer, we've seen that the key to a successful migration isn't a "big bang" rewrite - it's an incremental approach that delivers immediate benefits while minimizing risk. A gradual rollout allowed teams like Cash App, Rent the Runway, and Endpoint to gain outsized benefits for their engineers, data programs, and bottom lines by adopting more of what Prefect has to offer over time.
Want to see how you can save 60%+ on your cloud infrastructure bill? Join our March webinar, where we'll demonstrate how to observe your existing Airflow runs in Prefect - a great first step in modernizing your orchestration. Let’s see why teams have moved to Prefect for their orchestration needs.
Why Consider Migration?
Developer Experience: Write Python, Not Framework Code
Perhaps the biggest shift teams experience when moving to Prefect is returning to writing natural Python code. Rather than conforming your code to a framework's view of the world, Prefect works with your code as it exists.
This means you can test locally exactly as the code will run in production, reducing development cycle time.
Here's a simple example. In Airflow, even a basic workflow requires understanding operators, hooks, and the DAG framework:
from airflow import DAG
from airflow.operators.python import BranchPythonOperator
from airflow.operators.dummy import DummyOperator
from airflow.operators.bash import BashOperator
from datetime import datetime

def choose_branch():
    from random import choice
    return choice(["task_a", "task_b"])  # Randomly chooses a branch

default_args = {
    "owner": "airflow",
    "start_date": datetime(2024, 2, 1),
    "retries": 1,
}

# Define the DAG
with DAG(
    dag_id="branching_example",
    default_args=default_args,
    schedule_interval="@daily",
    catchup=False,
) as dag:

    start = DummyOperator(task_id="start")

    branching = BranchPythonOperator(
        task_id="branching_logic",
        python_callable=choose_branch,
    )

    task_a = BashOperator(
        task_id="task_a",
        bash_command="echo 'Task A is running'",
    )

    task_b = BashOperator(
        task_id="task_b",
        bash_command="echo 'Task B is running'",
    )

    join = DummyOperator(
        task_id="join",
        trigger_rule="one_success",
    )

    # DAG dependencies
    start >> branching
    branching >> [task_a, task_b] >> join
With Prefect, your code stays clean and natural:
from prefect import flow, task
import random

@task
def choose_branch():
    return random.choice(["task_a", "task_b"])  # Randomly chooses a branch

@task
def task_a():
    print("Task A is running")

@task
def task_b():
    print("Task B is running")

@flow
def branching_flow():
    branch = choose_branch()

    if branch == "task_a":
        task_a()
    else:
        task_b()

    print("Joining after branching logic")

if __name__ == "__main__":
    branching_flow()
Developer Experience: Development Environment
This isn't just about aesthetics - it fundamentally changes how teams work. You can run this code directly in your IDE, unit test it normally, and deploy it to production without modification. No more maintaining separate development environments that try (and often fail) to mirror production Airflow setups. No more spending days debugging why a workflow works locally but fails in production. No more training new team members on framework-specific concepts before they can contribute.
Engineers at Endpoint became three times more productive once they made this switch and no longer had to fight the framework to do what they needed. And reclaiming the time your team spends wrestling with infrastructure has an outsized impact beyond just being more Pythonic. This is what we mean when we say Prefect lets engineering teams do more engineering.
And this impact compounds over time. Teams onboard new members faster, iterate on workflows more quickly, and dramatically reduce time spent on overhead and maintenance. As Tony Rahloff from LiveEO said: "After implementing Prefect, we quickly saw improved developer experience, velocity, and resilience. We were able to triple our development speed with Prefect – resulting in faster development, bug fixes, and product iterations – which also improved our time-to-customer value."
Understanding the Shift: Framework to Library
What accounts for this change? The core difference between Airflow and Prefect is simple: Airflow is a framework that forces your code to follow its rules, while Prefect is a library that adapts to your Python code. This is why migrating to Prefect is far less disruptive than teams expect.

This difference goes far beyond style. Airflow demands external storage and static resources, while Prefect enables native data passing and dynamic infrastructure. Data flows directly between tasks in memory, and each workflow can be configured for exactly the resources it needs. With Prefect, workflows can spin infrastructure up and down dynamically - choosing between CPUs and GPUs as the work requires. Even better, these decisions happen at runtime rather than being locked into a predefined graph. Your workflows adapt to real conditions, making better use of resources while keeping code clean and efficient.
Infrastructure & Operations
The most immediate impact teams see when moving to Prefect is dramatic infrastructure savings. Endpoint saw a 73% reduction in orchestration costs after migration, but this isn't just about doing the same things cheaper - it's about fundamentally smarter resource usage.
How do these savings happen? Through three key capabilities:
- First: Prefect enables dynamic infrastructure creation. Instead of maintaining always-on resources sized for peak loads, your infrastructure scales with actual demand. Each task can request exactly what it needs - from lightweight CPU instances for data validation to powerful GPU clusters for ML training - and release those resources immediately after use.
- Second: Prefect's built-in monitoring helps identify and eliminate idle infrastructure. The platform provides clear visibility into resource utilization patterns, making it easy to spot opportunities for optimization. Teams find that resources which sat idle in Airflow can be scaled down automatically when not needed.
- Third: Prefect's native task communication eliminates the need for intermediate storage infrastructure. Where Airflow requires external storage systems to pass data between tasks, Prefect handles this in memory, reducing both complexity and cost.
But cost savings are just the start. Prefect includes comprehensive observability features in its core platform - no additional tools or subscriptions needed. You get real-time monitoring, detailed logging, and performance tracking by default. This built-in visibility helps teams optimize costs, performance, reliability, and development efficiency from day one.
Incremental Adoption
We know that the thought of migrating your orchestration system can feel daunting. That’s why our next post in this series will dive into how to plan and execute your migration step-by-step—from adding Prefect’s observability to your existing Airflow setup, all the way to transitioning your most critical workflows. We’ll walk you through practical strategies, share real-world case studies, and even include code snippets that show you exactly how to validate and scale your flows in Prefect.
Ready to take the first step today? Join our upcoming webinar, Observing Airflow DAGs in Prefect, where our experts will demonstrate how to monitor your current Airflow runs using Prefect. It’s the perfect opportunity to see firsthand how this incremental approach can help you test the waters and set the stage for a full migration—without disrupting your existing operations.
Stay tuned for Part 2, where we’ll build on these ideas and show you how to transform your workflows for a smoother, more efficient future. Meanwhile, join our webinar and start envisioning a more agile, cost-effective way to orchestrate your data pipelines.
Register now to secure your spot!