When AI Hype Meets Data Reality: A Data Engineer's Perspective
Let me share something that’s been weighing on me lately, something I suspect many of you in the data space might be experiencing too. The office culture forming around AI adoption feels like it’s creating more problems than it’s solving.
I’m working at a tech company where the founders are absolutely obsessed with AI. They want every department using Claude, expecting automation to solve everything. But here’s the ironic twist: instead of reducing workload, the AI push has created an expectation that we can accomplish far more in the same timeframe. Developers are working late nights, and the bandwidth expectations have skyrocketed.
The management team, driven by competition fears, just wants developers to use Claude and push features out rapidly. But let me tell you about the data engineering side of this equation.
Our tech founder used a Claude agent to build a customer-centric dashboard using TypeScript and React.js directly on our OLTP database. And honestly? It’s really good. Meanwhile, I’m working in Databricks, and our Databricks AI/BI dashboard feels incredibly limiting by comparison.
Here’s the reality check: for the point lookups and small, targeted queries a dashboard fires, an OLTP database with proper indexes can beat an OLAP platform on latency, because it serves fresh data directly. I can’t deliver real-time capabilities in Databricks because the cost would skyrocket, and our finance team monitors those expenses like maniacs.
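To make that concrete, here’s a minimal sketch of the effect using SQLite in memory. The `orders` table, its columns, and the row counts are all illustrative assumptions, not our actual schema; the point is only that the same per-customer aggregate goes from a full table scan to an index seek once the right index exists.

```python
# Sketch: why indexed OLTP point lookups stay fast for dashboard-style queries.
# Table name, columns, and row counts below are hypothetical.
import sqlite3
import time

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)"
)
cur.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 10_000, float(i % 500)) for i in range(200_000)],
)
conn.commit()

def lookup(customer_id):
    """Per-customer aggregate, the kind of query a customer dashboard fires."""
    start = time.perf_counter()
    row = cur.execute(
        "SELECT COUNT(*), SUM(total) FROM orders WHERE customer_id = ?",
        (customer_id,),
    ).fetchone()
    return row, time.perf_counter() - start

before, t_scan = lookup(42)   # no index yet: full table scan
cur.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
after, t_seek = lookup(42)    # now an index seek on customer_id

assert before == after        # same answer, far less work
print(f"scan: {t_scan * 1000:.2f} ms, indexed: {t_seek * 1000:.2f} ms")
```

Run `EXPLAIN QUERY PLAN` on the `SELECT` before and after creating the index and you can watch the plan flip from `SCAN orders` to a search using `idx_orders_customer`. An OLAP warehouse pays off on large scans and joins; for this access pattern, the transactional database simply has less to do.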
So where does that leave me? My core work feels like it’s being replaced, while other developers are creating pull requests day and night, rolling out features constantly. Some developers are even stepping into data engineering territory, picking up orchestration tools like Dagster.
This situation raises important questions about our roles as data professionals. When OLTP databases with proper indexing can outperform expensive OLAP solutions for certain use cases, what does that mean for our value proposition? When developers can quickly spin up data solutions using AI tools, where does that leave dedicated data engineers?
I’ve been thinking about this a lot, and I believe our value isn’t just in the tools we use, but in our problem-solving approach. Tech masturbation will only take us so far; what matters is solving real business problems. Our role should be about ensuring data quality, governance, and making data understandable and accessible.
Maybe the answer isn’t fighting against these changes, but adapting and finding where we can add unique value. Perhaps we need to focus on operationalizing these AI-generated pipelines, ensuring data quality, and becoming the go-to experts for data governance.
The fundamental truth remains: maintaining pipelines is painful, AI or no AI. Data quality matters. Governance matters. These are areas where human expertise still trumps AI-generated solutions.
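Here’s the kind of guardrail I mean, the thing that stays valuable no matter who (or what) wrote the pipeline: a small, hedged sketch of a batch-level quality gate. The field names, types, and thresholds are hypothetical; the shape of the check is the point.

```python
# Hypothetical data-quality gate run on a pipeline's output batch.
# Field names ("customer_id", "total") and the null-rate threshold are
# illustrative assumptions, not a real schema.

def validate_batch(rows, max_null_rate=0.01):
    """Return human-readable violations; an empty list means the batch passes."""
    violations = []
    required = {"customer_id": int, "total": float}
    nulls = 0
    for i, row in enumerate(rows):
        for field, expected_type in required.items():
            value = row.get(field)
            if value is None:
                nulls += 1  # tolerated up to max_null_rate, then flagged
            elif not isinstance(value, expected_type):
                violations.append(
                    f"row {i}: {field} is {type(value).__name__}, "
                    f"expected {expected_type.__name__}"
                )
        if row.get("total") is not None and row["total"] < 0:
            violations.append(f"row {i}: negative total {row['total']}")
    total_cells = len(rows) * len(required)
    if rows and nulls / total_cells > max_null_rate:
        violations.append(
            f"null rate {nulls}/{total_cells} exceeds {max_null_rate:.0%}"
        )
    return violations

batch = [
    {"customer_id": 1, "total": 19.99},
    {"customer_id": 2, "total": -5.0},    # caught: negative total
    {"customer_id": None, "total": 3.5},  # counted toward the null rate
]
print(validate_batch(batch))
```

Wire something like this between an AI-generated transformation and whatever consumes its output, and a bad batch fails loudly instead of quietly poisoning the dashboard. That judgment, deciding what “bad” means for the business, is exactly the part the AI doesn’t supply.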
If you’re experiencing similar challenges in your organization, I’d love to hear how you’re adapting. Are you diversifying your toolset? Focusing on different aspects of data management? Or perhaps considering whether this is the right environment for your skills?
What’s clear is that we need to communicate our value beyond just the tools we work with. We need to position ourselves as problem solvers, quality advocates, and strategic partners rather than just platform specialists.
If you enjoyed this article, please consider sharing it with other data professionals who might be facing similar challenges. And if you’d like more insights on navigating the evolving data landscape, consider subscribing for more content that’s both informative and grounded in real-world experience.


