Lessons from Early Adopters – O’Reilly

My first post made the case for what a semantic layer can bring to the modern enterprise: a single source of truth accessible to everyone who needs it—BI teams in Tableau and Power BI, Excel-loving analysts, application integrations via API, and the AI agents now proliferating across organizations—all pulling from the same governed, performant metric layer. The promise is compelling. But what happens when organizations actually build and deploy one? To find out, I interviewed several early adopters who’ve moved semantic layers from concept to production. Four themes emerged from those conversations: some surprising, some predictable, and a few that will sound familiar to anyone who’s ever shipped data infrastructure.

The first theme: Semantic layers are showing up in unexpected places. Most discussion positions them as enterprise-level infrastructure—a single location capturing all company metrics for centralized access and governance. That’s still the primary use case. But practitioners are also deploying semantic layers for narrower purposes. One organization, for example, built their semantic layer specifically to power a targeted chatbot application—letting users query data conversationally without any traditional BI tools in the mix. No Power BI, no Excel, just an AI interface pulling from governed metrics. The rationale for these smaller deployments is straightforward: Semantic layers deliver high accuracy on structured data, even with lightweight models. The core value drivers remain speed, accuracy, and access—but organizations are finding more ways to extract that value than the enterprise-wide vision suggests.

The second theme: AI is the reason organizations are moving now. The other benefits still matter—single source of truth, multitool compatibility, true self-serve access, cost reduction in cloud environments—but when I asked practitioners why they prioritized a semantic layer today rather than two years ago, the answer was consistent: AI. Whether it was a specific chatbot project or enabling AI-driven analytics at scale, AI requirements were the catalyst. This tracks with what I discussed in my first post: Structured data alone isn’t enough for reliable AI analytics. Adding semantic context—field descriptions, model definitions, object relationships—dramatically improves accuracy. The data industry has noticed. Semantic layers have moved from niche infrastructure to strategic priority: Snowflake, Databricks, dbt Labs, and Microsoft have all made significant investments in the past year.
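To make the idea of "semantic context" concrete, here is a minimal sketch of what a governed metric definition might look like and how it could be rendered as grounding context for an LLM. The names (`Metric`, `net_revenue`, `to_prompt_context`) and the metric itself are illustrative assumptions, not from any specific vendor's product:

```python
from dataclasses import dataclass, field

@dataclass
class Metric:
    """A governed metric definition: the kind of semantic context a
    layer can expose to an AI agent alongside the raw warehouse schema."""
    name: str
    description: str  # plain-language meaning for the model
    expression: str   # the one canonical calculation
    grain: list = field(default_factory=list)  # dimensions it may be sliced by

# Hypothetical metric -- field names and formula are invented for illustration.
net_revenue = Metric(
    name="net_revenue",
    description="Gross bookings minus refunds and discounts, in USD.",
    expression="SUM(gross_amount) - SUM(refund_amount) - SUM(discount_amount)",
    grain=["order_date", "region"],
)

def to_prompt_context(metric: Metric) -> str:
    """Render the metric as text an LLM can ground its generated SQL in."""
    return (
        f"Metric: {metric.name}\n"
        f"Meaning: {metric.description}\n"
        f"Formula: {metric.expression}\n"
        f"Valid dimensions: {', '.join(metric.grain)}"
    )

print(to_prompt_context(net_revenue))
```

The point of the sketch: instead of asking a model to guess which of several revenue columns to sum, the layer hands it one definition with its meaning, formula, and valid slice dimensions attached.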

The third theme: Semantic layers reduce work for developers while making trusted data easier to access. Multiple practitioners cited the value of maintaining metrics and business logic in a single location. Any analyst knows the pain of metric sprawl—leadership requests a change to a core KPI, and you discover it’s been defined a dozen different ways across databases, BI tools, and spreadsheets scattered through the organization. The semantic layer eliminates the chase. One engineering lead described a financial metric that had accumulated over 60 versions across the company. After deploying the semantic layer, there was one.


Access gets simpler too. Instead of provisioning controls across warehouses, BI workspaces, individual dashboards, and cloud storage locations, users connect directly to the semantic layer and pull data into the tool of their choice. One organization was surprised to find that after deployment, the most common access point was Excel. But with the semantic layer, that wasn’t a problem: The data served in Excel was identical to what powered their AI tools, Power BI dashboards, and application integrations via API.
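The "identical data in every tool" property falls out of a simple design: every consumer resolves metrics through the same layer rather than embedding its own copy of the logic. A minimal sketch, with invented names (`METRICS`, `resolve`) standing in for a real semantic-layer API:

```python
# The single governed definition -- maintained once, in the layer.
# Table and column names are hypothetical.
METRICS = {
    "active_users": (
        "SELECT COUNT(DISTINCT user_id) FROM events "
        "WHERE event_type = 'session_start'"
    ),
}

def resolve(metric_name: str) -> str:
    """Return the canonical query for a metric, whoever the caller is.
    An Excel add-in, a BI connector, and an API client all go through
    this one function, so their numbers cannot drift apart."""
    try:
        return METRICS[metric_name]
    except KeyError:
        raise KeyError(
            f"Unknown metric: {metric_name!r}. "
            "Define it once in the layer, not in the tool."
        )

# Every access path yields the same definition:
excel_query = resolve("active_users")
bi_query = resolve("active_users")
assert excel_query == bi_query
```

The design choice doing the work here is the single lookup path: a tool that can only ask the layer for a metric has no place to hide a sixty-first variant of the definition.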

The fourth theme will sound familiar to anyone who’s shipped data infrastructure: The biggest challenge isn’t the technology—it’s the data itself. Every practitioner I spoke with identified the same bottleneck: consistency, availability, and accuracy of the underlying data. Engineers and analysts can build the semantic layer, but they can’t will clean data into existence. Success requires close collaboration with business stakeholders, clear ownership of metrics, and leadership alignment to prioritize the work. None of that is new. But despite these challenges, everyone I interviewed reached the same conclusion: The semantic layer is worth the effort.

Semantic layer technology is still early. The tools, vendors, and best practices are evolving fast—what works today may look different in a year. But these conversations revealed a clear signal beneath the noise: semantic layers are becoming critical AI infrastructure. The practitioners I spoke with aren’t experimenting anymore. They’re operationalizing. And despite the expected challenges around data quality and organizational alignment, they’re seeing real returns: fewer metric versions to maintain, simpler access controls, and AI tools that actually produce trusted answers.

My first article made the case for what a semantic layer could be. This one asked what happens when organizations actually build them. The answer: It’s hard, it’s worth it, and for companies serious about AI-driven analytics, the semantic layer is no longer a nice-to-have. It’s the foundation.
