Every Sunday I sit down and ask myself the same four questions: What went well? What would I do differently? What did I actually learn? What am I carrying into next week?
It’s a simple habit. But it’s changed how I grow as a data engineer more than any course or tutorial ever has. Here’s this week’s honest answer to each.
What Went Well: Automating the Unglamorous Stuff
Mid-week I finally automated a data ingestion job that had been running manually for months. Nothing fancy — a Python script wrapped in an Airflow DAG, pulling from an SFTP server, applying some transformations, and landing the data in BigQuery.
But it saved 2 hours of manual work per day. That’s 10 hours a week. Over a year, that’s 500+ hours given back to the team. The boring automations often have the highest ROI. If you have a manual task that happens daily or weekly — that’s your first automation target.
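If you're curious what that looks like in practice, here's a minimal sketch of the shape of that DAG. To be clear, the connection ID, file paths, and table name below are hypothetical placeholders, not the real ones from our pipeline:

    # Sketch of an SFTP -> transform -> BigQuery ingestion DAG.
    # Connection ID, paths, and table name are placeholders.
    from datetime import datetime

    import pandas as pd
    from airflow.decorators import dag, task
    from airflow.providers.sftp.hooks.sftp import SFTPHook
    from google.cloud import bigquery


    @dag(start_date=datetime(2026, 1, 1), schedule="@daily", catchup=False)
    def sftp_to_bigquery():

        @task
        def download() -> str:
            # Pull the daily extract from the SFTP server to local disk
            hook = SFTPHook(ssh_conn_id="sftp_default")  # placeholder connection
            local_path = "/tmp/daily_extract.csv"
            hook.retrieve_file("/exports/daily_extract.csv", local_path)
            return local_path

        @task
        def transform(local_path: str) -> str:
            # Light cleanup before loading
            df = pd.read_csv(local_path)
            df.columns = [c.strip().lower() for c in df.columns]
            cleaned_path = "/tmp/daily_extract_clean.csv"
            df.to_csv(cleaned_path, index=False)
            return cleaned_path

        @task
        def load(cleaned_path: str):
            # Append the cleaned file to an existing BigQuery table
            client = bigquery.Client()
            job_config = bigquery.LoadJobConfig(
                source_format=bigquery.SourceFormat.CSV,
                skip_leading_rows=1,
                write_disposition="WRITE_APPEND",
            )
            with open(cleaned_path, "rb") as f:
                client.load_table_from_file(
                    f, "my_project.my_dataset.daily_extract", job_config=job_config
                ).result()

        load(transform(download()))


    sftp_to_bigquery()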
What I’d Do Differently: Ask Sooner
I spent four hours this week debugging a dbt model that was failing with a cryptic error. After four hours, I sent a Slack message to a teammate. He replied in five minutes with the fix.
Four hours. Five minutes. Same result.
The cost of asking for help is almost always lower than we think. So next week: if I’m stuck for more than 30 minutes on something that someone on my team might know, I’m asking. Full stop.
What I Actually Learned: Airflow Dynamic Task Mapping
This week I finally made peace with Airflow’s dynamic task mapping — a feature that’s been available since Airflow 2.3, but one I’d been avoiding because the syntax looked intimidating.
The idea: instead of defining a fixed number of tasks at DAG write time, you let the number of tasks be determined at runtime based on your data. The key: .expand() creates one task instance per item in the list automatically.
from airflow.decorators import dag, task
from datetime import datetime


@dag(start_date=datetime(2026, 1, 1), schedule="@daily")
def dynamic_pipeline():

    @task
    def get_files() -> list:
        return ["file_a.csv", "file_b.csv", "file_c.csv"]

    @task
    def process_file(filename: str):
        print(f"Processing {filename}")

    files = get_files()
    process_file.expand(filename=files)  # Creates one task per file


dynamic_pipeline()
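One detail worth knowing once you start using this: .expand() pairs with .partial(), which pins any arguments that stay constant across every mapped task while .expand() varies the rest.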
Some tools look harder than they are. Sometimes the barrier is just the willingness to read the documentation carefully instead of skimming for a quick answer.
What I’m Carrying Into Next Week: Collaborate First
The best moment of this week wasn’t debugging a pipeline or shipping a feature. It was a 15-minute call with a teammate who’d seen a similar problem before. In 15 minutes, we covered what would have taken me another 2 hours alone.
That’s the compounding return on collaboration. So my intention for next week: collaborate first, solo-debug second.
The Habit Behind the Reflection
If you don’t already do a weekly reflection, try it — even just 10 minutes on a Sunday. Four questions: What went well? What would I do differently? What did I learn? What’s my intention for next week?
Growth as an engineer isn’t just about learning new tools. It’s about getting better at learning itself.
What’s one thing you learned this week? Drop a comment below.
— Pushpjeet Cholkar, Data Engineer