
Airflow Local Development Sucks

July 28, 2025
Chris White
CTO

How to solve Airflow local development problems, testing challenges, and complex setup requirements with a simpler alternative

Setting up Apache Airflow 3 for local development requires at least 4 separate services, a minimum of 4GB RAM (8GB recommended), and setup times that can stretch from hours to days - before writing a single line of workflow code. If you're searching for "Airflow local development setup," "Apache Airflow testing problems," or "Airflow development environment issues," you're not alone.

Local development should be straightforward: write code, test it, iterate quickly. However, setting up Airflow 3 locally reveals significant architectural complexity that affects every developer who works with the framework. Despite being the biggest release in Airflow's history, Airflow 3 maintains design decisions that prioritize production deployment over development experience.

The challenges are systematic rather than incidental. Airflow was built for ops teams managing static workflows on dedicated servers, not for developers who want to iterate quickly. Prefect, on the other hand, was built with modern expectations in mind: teams routinely report 3x improvements in development speed and over 50% reduction in cost when migrating from Airflow to Prefect.

Airflow Local Setup Requirements: The Multi-Component Infrastructure and Configuration Problem

Airflow 3 requires multiple services for even basic local development, each demanding careful configuration. The official Airflow documentation lists the required components:

  • Webserver: For the UI interface
  • Scheduler: Responsible for triggering DAG runs and submitting tasks to executors
  • DAG Processor: A separate process for parsing DAG files
  • Database: PostgreSQL or similar for metadata storage

The quickstart documentation attempts to simplify this with airflow standalone, but this approach has limitations for realistic development scenarios. When developers need to test concurrent runs, external dependencies, or realistic configurations, they must configure the full multi-component setup through extensive airflow.cfg files, environment variables, and command-line parameters.

The official Docker Compose setup demonstrates the true scope of this complexity. The docker-compose.yaml file requires multiple separate containers with careful orchestration of startup dependencies, plus configuration management for database connections, executor selection (LocalExecutor, CeleryExecutor, or KubernetesExecutor), resource limits, timeouts, parallelism settings, and security management. The documentation recommends allocating at least 4GB of memory to Docker, with 8GB recommended for stability. Compare this with the far simpler Prefect Docker Compose setup that supports production work.

Because of this complexity, small configuration mistakes in Airflow can lead to silent failures, tasks hanging indefinitely, or error messages that require deep framework knowledge to decode. This configuration sprawl compounds the problem when developers attempt to debug their production Airflow DAGs locally.

Much of this infrastructure complexity exists because DAGs in Airflow are not functions: they are complicated objects that the Airflow scheduler must parse and execute regularly, requiring persistent services and extensive configuration to coordinate their interactions.

In Prefect, by contrast, flows can be executed directly as functions within unit test suites, tasks can be executed individually outside of their flows, and deployments can be served locally without spinning up any Prefect services. Even schedule objects can be imported and unit tested.
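
For example, here is a minimal sketch of what that looks like in a test suite (the task, flow, and test names are illustrative): calling .fn on a task runs the undecorated Python function, and calling a flow invokes it in-process with no services running:

from prefect import flow, task

@task
def add_one(x: int) -> int:
    return x + 1

@flow
def pipeline(x: int) -> int:
    return add_one(x)

def test_task_in_isolation():
    # .fn exposes the undecorated function, bypassing orchestration entirely
    assert add_one.fn(1) == 2

def test_flow_as_function():
    # flows are plain callables; this runs the whole flow in-process
    assert pipeline(1) == 2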

Simple Alternative: Zero Infrastructure Development

Prefect eliminates these problems entirely: no scheduler, database, webserver, or Docker containers are required. Crucially, the exact same code path executes identically in local development and remote execution environments, eliminating the debugging complexity that comes from architectural differences between development and deployment. Workflows run as standard Python processes with zero additional infrastructure requirements and no configuration files, allowing you to focus on workflow logic rather than service orchestration:

# /// script
# dependencies = ["prefect"]
# ///

from prefect import flow, task

@task
def extract_data():
    return {"key": "value"}

@task
def transform_data(data):
    return data["key"].upper()

@flow
def my_workflow():
    data = extract_data()
    result = transform_data(data)
    return result

if __name__ == "__main__":
    my_workflow()

💡 Try it now: Save this code as workflow.py and run uv run workflow.py - no setup, configuration, or infrastructure required!

Airflow Docker Development Issues: Windows Compatibility and Resource Problems

For Windows users, these problems are further exacerbated. Airflow treats Windows as an afterthought and requires Linux-based distributions for production use. As a result, Windows developers face particular challenges, as noted in community discussions: "running Airflow in Windows natively is dead in the water".

Common Airflow Docker development issues include:

  • Windows compatibility: Airflow has limited Windows support, forcing Windows developers to use WSL or Docker with their own complications
  • Resource consumption: Docker setups often consume more resources than the actual work being performed
  • Development workflow friction: Mounting volumes, managing container lifecycle, and debugging across containerized services adds overhead
  • Network complexity: Getting proper communication between scheduler, webserver, and database requires careful configuration
  • Iteration speed: Code changes require container rebuilds or volume sync delays

The Docker approach introduces architectural differences between local development environments and typical deployment scenarios, making it difficult to debug environment-specific issues.

Prefect runs natively on all platforms, including full Windows support. No Docker, WSL, or virtual machines are required for development.

Airflow Task Isolation Problems: Development Friction and Debugging Challenges

While separating orchestration from business logic is critical for workflow systems, Airflow 3 takes this to an extreme that complicates development. It encourages submitting every task through an executor, adding infrastructure overhead even for simple development scenarios. Moreover, users cannot submit tasks directly themselves; the DAG processor and scheduler must each parse and pick up the code on their own loops before anything runs.

Airflow's task isolation means:

  • Executor dependency: Even basic testing requires configuring and running an executor
  • Process boundaries: Tasks run in separate processes or containers, making debugging complex
  • Resource overhead: Each task submission involves scheduler communication and process spawning
  • Development friction: Simple changes require full infrastructure restarts

Prefect balances isolation with development simplicity. Tasks can run in the same process for development and testing, while still supporting distributed execution when needed. The isolation level is configurable rather than mandatory.
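
Here is a hedged sketch of that configurability (ThreadPoolTaskRunner is one of Prefect's built-in runners; a distributed runner such as the Dask integration can be swapped in without changing the task code):

from prefect import flow, task
from prefect.task_runners import ThreadPoolTaskRunner

@task
def double(x: int) -> int:
    return x * 2

# Swapping the task_runner changes the concurrency/isolation model;
# the tasks themselves stay untouched.
@flow(task_runner=ThreadPoolTaskRunner(max_workers=4))
def parallel_flow(numbers: list[int]) -> list[int]:
    futures = [double.submit(n) for n in numbers]
    return [f.result() for f in futures]

if __name__ == "__main__":
    print(parallel_flow([1, 2, 3]))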

Additionally, Prefect supports direct submission to infrastructure without managing persistent processes. Infrastructure is provisioned on-demand, executes the workflow, and tears down automatically.
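
As a minimal sketch of that pattern, assuming a work pool named "docker-pool" and a pre-built image (both names are hypothetical):

from prefect import flow

@flow
def nightly_etl():
    print("running ETL")

if __name__ == "__main__":
    # Registers a deployment; a worker watching the pool provisions
    # infrastructure per run and tears it down afterwards.
    nightly_etl.deploy(
        name="nightly-etl",
        work_pool_name="docker-pool",    # hypothetical pool name
        image="my-registry/etl:latest",  # hypothetical, pre-built image
        build=False,                     # assume the image already exists
    )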

Configuration-Free Development Alternative

Prefect requires zero configuration for local development. It uses sensible defaults and configuration-as-code patterns that eliminate setup overhead. The same code that runs locally executes identically in remote environments, ensuring consistent behavior across development and deployment contexts.

As an example, the following code will reliably execute the simple workflow on a cron schedule with zero configuration:

# /// script
# dependencies = ["prefect"]
# ///

from prefect import flow

@flow(log_prints=True)
def hello_world():
    print('hello world!')

if __name__ == "__main__":
    hello_world.serve(name="example-deployment", cron="* * * * *")

💡 Try it now: Save this as scheduled_workflow.py and run uv run scheduled_workflow.py - it will start running on a cron schedule immediately!

Airflow Parameter Limitations: Dynamic Workflow and Testing Constraints

Airflow 3's approach to workflow parametrization reveals a fundamental design limitation that affects both development and testing. While Airflow provides DAG Params, the implementation has significant constraints.

Airflow's parametrization challenges include:

  • Parse-time limitations: DAG parameters cannot be used in the DAG body during parsing, as demonstrated in community discussions. Parameters are only available at runtime through templating.
  • Limited dynamic behavior: Parameters cannot drive dynamic task generation or DAG structure changes, requiring complex workarounds.
  • Testing difficulties: The parametrization model makes dependency injection patterns for testing nearly impossible.

These limitations make Airflow parameters unsuitable for the dependency injection patterns common in modern software development. Testing scenarios that require different parameter values must rely on external configuration systems or complex templating approaches.

Flexible Parametrization Alternative

Prefect's parametrization enables proper dependency injection for both runtime flexibility and comprehensive testing. Parameters can drive workflow logic, task generation, and even dynamically chosen execution paths, making workflows truly testable and maintainable:

# /// script
# dependencies = ["prefect"]
# ///

from prefect import flow, task
from typing import List

@task
def process_item(item: str, multiplier: int = 1) -> str:
    return f"Processed {item} x{multiplier}"

@flow
def dynamic_workflow(items: List[str], processing_multiplier: int = 2):
    results = []

    # Parameters can drive dynamic task generation
    for item in items:
        result = process_item(item, processing_multiplier)
        results.append(result)

    return results

if __name__ == "__main__":
    # Test with different parameters easily
    test_result = dynamic_workflow(["apple", "banana"], processing_multiplier=1)
    print("Test result:", test_result)

    # Run with production parameters
    prod_result = dynamic_workflow(["data1", "data2", "data3"], processing_multiplier=5)
    print("Production result:", prod_result)

💡 Try it now: Save as dynamic_workflow.py and run uv run dynamic_workflow.py - see how parameters drive workflow behavior instantly!

The parametrization works identically in local development and remote execution, eliminating another source of environment-specific complexity.
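
As a hedged illustration, the same parameters can be passed to a remote run via run_deployment (the deployment name below assumes the flow has already been deployed and is hypothetical):

from prefect.deployments import run_deployment

# Triggers a run of an existing deployment with explicit parameters
flow_run = run_deployment(
    name="dynamic-workflow/example",  # hypothetical "flow/deployment" name
    parameters={"items": ["data1", "data2"], "processing_multiplier": 5},
)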

Solutions for Airflow Development Problems

The challenges outlined above reflect fundamental architectural decisions that affect every developer working with Airflow. While the community continues to develop workarounds and improvements, these solutions often add additional complexity rather than addressing the root causes.

For teams experiencing persistent productivity issues with local development, testing, and debugging workflows, Prefect offers a fundamentally different approach that prioritizes developer experience. Recent migration stories reveal dramatic improvements: after switching to Prefect, Endpoint improved their turnaround times 3x while also achieving a 73% reduction in orchestration costs. LiveEO reported similar results - they tripled their development speed and cut AWS costs by 63%.

Conclusion: Streamlining Workflow Development

Airflow 3's approach to local development reflects its design heritage as a production-focused batch processing system. While version 3 introduces significant architectural improvements, it maintains the assumption that developers will work within complex infrastructure environments.

The architectural differences create measurably different development experiences. Time no longer spent managing infrastructure and debugging environment issues becomes time available for actual development work. When infrastructure complexity is removed from the development process, teams can focus on building sophisticated data applications rather than managing deployment environments.

Prefect's architecture prioritizes developer experience as a primary design consideration. The result is faster iteration cycles, higher code quality, and development teams that can concentrate on business logic rather than operational concerns. Most importantly, the identical execution paths between local development and remote deployment eliminate a major source of environment-specific bugs and debugging complexity.

Ready to experience streamlined workflow development? Try any of the code examples above with uv run filename.py and see the difference for yourself - no configuration, setup, or infrastructure required.

To learn more about Prefect, explore the documentation at docs.prefect.io.

Happy Engineering!