A few years into the AI shift, the gap between engineers is not talent. It’s coordination: shared norms and a shared language for how AI fits into everyday engineering work. Some teams are already getting real value. They’ve moved beyond one-off experiments and started building repeatable ways of working with AI. Others haven’t, even when the motivation is there. The reason is often simple: the cost of orientation has exploded. The landscape is saturated with tools and advice, and it’s hard to know what matters, where to start, and what “good” looks like once you care about production realities.
The missing map
What’s missing is a shared reference model. Not another tool. A map. Which engineering activities can AI responsibly support? What does quality mean for those outputs? What changes when part of the workflow becomes probabilistic? And what guardrails keep integration safe, observable, and accountable? Without that map, it’s easy to drown in novelty, and easy to confuse widespread experimentation with reliable integration. Teams with the least time, budget, and local support pay the highest price, and the gap compounds.
That gap is now visible at the organizational level. More organizations are trying to turn AI into business value, and the difference between hype and integration is showing up in practice. It’s easy to ship impressive demos. It’s much harder to make AI-assisted work reliable under real-world constraints: measurable quality, controllable failure modes, clear data boundaries, operational ownership, and predictable cost and latency. This is where engineering discipline matters most. AI does not remove the need for it; it amplifies the cost of missing it. The question is how we move from scattered experimentation to integrated practice without burning cycles on tool churn. To do that at scale, we need shared scaffolding: a public model and shared language for what “good” looks like in AI-native engineering.
We have seen why this kind of shared scaffolding matters before. In the early internet era, promise and noise moved faster than standards and shared practice. What made the internet durable was not a single vendor or methodology but a cultural infrastructure: open knowledge sharing, global collaboration, and shared language that made practices comparable and teachable. AI-native engineering needs the same kind of cultural infrastructure, because integration only scales when the industry can coordinate on what “good” means.
A public scaffold for AI-native engineering
In the second half of 2025, I began to notice growing unease among engineers I worked with and friends in IT. There was a clear sense that AI would change our work in profound ways, but far less clarity on what that actually meant for a person’s role, skills, and daily practice. There was no shortage of training, guides, blogs, or tools, but the more resources appeared, the harder it became to judge what was relevant, what was useful, and where to begin. It felt overwhelming. How do you know which topics truly matter to you when suddenly everything is labeled AI? How do you move from hype to useful integration?
I was feeling much of that same uncertainty myself. I was trying to make sense of the shift too, and for a while I think I was waiting for a clearer structure to emerge from elsewhere. It was only when friends started reaching out to me for help and guidance that I realized I might have something meaningful to contribute. I do not consider myself an AI expert. I am finding my way through these changes just like many other engineers. But over the years, I had become known for my work in IT workforce development, skill and capability frameworks, and engineering excellence and enablement. I know how to help people navigate complexity in a practical and sustainable way, and I enjoy bringing clarity to chaos.
That is what led me to start working on the AI Flower as a hobby project in early October 2025, building on frameworks and methods I already had experience with.
When I began sharing it with friends in IT to gather feedback, I saw how much it resonated. It helped them make sense of the complexity around AI, think more clearly about their own upskilling, and begin shaping AI adoption strategies of their own. That is when I realized this casual experiment held real value, and decided I wanted to publish it so it could help empower other engineers and IT organizations in the same way it had helped my friends.
With the AI Flower, I’m offering a public scaffold for AI-native engineering work: a shared reference model that helps engineers, teams, and organizations adopt and integrate AI sustainably and reliably. It’s meant to steer and organize the conversation around AI-assisted engineering, and to invite targeted feedback on what breaks, what’s missing, and what “good” should mean in real production contexts. It’s not meant to be perfect. It’s meant to be useful, freely available, open to contribution, and shaped by the strongest resource our industry has: collective intelligence.
Open knowledge sharing and collaboration cannot be optional. If AI is becoming part of how we design, build, operate, secure, and govern systems, we need more than tools and enthusiasm. Many of us work on systems people rely on every day. When those systems fail, the impact is real. That’s why we owe it to the people who depend on these systems to do this with care, and why we won’t get there in isolation. We need the industry, globally, to converge on shared standards for dependable practice.

About the AI Flower
The AI Flower maps the core activities that make up engineering work across the main engineering disciplines. For each activity, it defines what good looks like, based on practices that should already feel familiar to engineers. It then helps people explore how AI can support those activities in practice, providing guidance on how to begin using AI in that work, sharing links to useful learning resources, and outlining the main risks, trade-offs, and mitigations.
The AI landscape is changing quickly, though. This activity-based approach helps engineers understand how AI can support core engineering tasks, where risks may arise, and how to start building practical experience. But on its own, it isn’t enough as a long-term model for AI adoption.
As AI capabilities evolve, many engineering activities will become more abstracted, more automated, or absorbed into the infrastructure layer. That means engineers will need to do more than learn how to use AI within today’s activities. They will also need to work with emerging approaches such as context engineering and agentic workflows, which are already reshaping what we consider core engineering work. A concept I call the Skill Fossilization Model captures that progression. It shows how both engineering skills and AI-related skills evolve over time, and how some of them become less visible as work moves to a higher level of abstraction. Together, the AI Flower and the Skill Fossilization Model are meant to help engineers stay adaptable as the field continues to shift.
The main purpose of the AI Flower is to help engineers find their way through these rapid changes and grow with them. While I provide content for each section and activity, the real value lies in the framework and structure itself. To become truly valuable, it will need the insight, care, and contribution of engineers across disciplines, perspectives, and regions.
I genuinely believe the AI Flower, as an open and freely available framework, can serve as a scaffold for that work. This is my contribution to a changing industry. But it will only be useful—it will only “bloom”—if the community tests it, challenges it, and improves it over time.
And if any industry can turn open critique and contribution into shared standards at a global scale, it’s ours, isn’t it?
Join me at AI Codecon to learn more
If the AI Flower resonates and you want the full walkthrough, I’ll be presenting it at O’Reilly’s upcoming AI Codecon. (Registration is free and open to all.)
If you’re concerned about how quickly AI engineering patterns are evolving, that concern is valid. We’ve already seen the center of gravity shift from ad hoc prompt work, to context engineering, to increasingly agentic workflows, and there is more coming. A core design goal of the AI Flower is to stay stable across those shifts by focusing on underlying capabilities rather than specific techniques. I’ll go deeper on that stability principle, including the Skill Fossilization Model, at AI Codecon as well.