UK & Ireland Director of Intelligence Enterprise at GlobalLogic, Tim Hatton, explores how principles of control theory, exemplified by SpaceX’s Starship, apply to the design of effective enterprise agentic AI systems.
Reaching for the stars has always been the pinnacle of human ingenuity. The relentless desire to push beyond known boundaries is what drives innovation and advancement all around the globe. The recent example of SpaceX’s latest Starship spacecraft soaring into the skies and returning with precision isn’t just a milestone in aerospace engineering—it’s a vivid illustration of what’s possible when our boundless creativity fuels cutting-edge technologies.
SpaceX’s success demonstrates that autonomous software can effectively control a sophisticated system and steer it toward defined goals. This seamless blend of autonomy, awareness, intelligent adaptability, and results-driven decision-making offers a compelling analogy for enterprises. It’s a beacon for a future where agentic AI systems revolutionise workflows, drive innovation, and transform industries.
Control theory: A proven framework
Control theory underpins self-regulating systems that balance performance and adaptability. It dates from the 19th century, when Scottish physicist and mathematician James Clerk Maxwell first described the operation of centrifugal ‘governors’. Its core principles—feedback loops, stability, controllability, and predictability—brought humanity into the industrial age, from stabilising windmill velocity to today’s spaceflights, nuclear stations and nation-spanning electricity grids.
We see control theory in action when landing a rocket, for example. The manoeuvre relies on sensors to measure actual parameters, controllers to adjust based on feedback, and the system to execute corrections. Comparing real-time data to desired outcomes minimises errors, ensuring precision and safety.
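The loop described above—measure, compare, correct—can be sketched in a few lines. This is a toy proportional controller, not actual flight software; the plant model, gain, and altitude figures are illustrative assumptions.

```python
def control_loop(setpoint, state, gain=0.5, steps=50):
    """Minimal proportional feedback loop: measure, compare, correct."""
    for _ in range(steps):
        error = setpoint - state      # compare real-time data to the desired outcome
        correction = gain * error     # controller adjusts based on feedback
        state += correction           # system executes the correction
    return state

# Steer a measured altitude of 1200 m toward a 1000 m target.
final = control_loop(setpoint=1000.0, state=1200.0)
```

Because each pass halves the remaining error, the state converges on the setpoint—the same error-minimising principle, at vastly greater sophistication, that lands a rocket.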
It’s a framework that extends to enterprise workflows. Employees function as systems, supervisors as controllers, and tasks as objectives. A seasoned worker might self-correct without managerial input, paralleling autonomous systems’ ability to adapt dynamically.
Challenges in agentic AI
Agentic AI systems combine traditional control frameworks’ precision with advanced AI models’ generative power. However, while rockets rely on the time-tested principles of control theory, AI-driven systems are powered by large language models (LLMs). This introduces new layers of complexity that make designing resilient AI agents that deliver precision, adaptability, and trustworthiness uniquely challenging.
Computational irreducibility: LLMs like GPT-4 defy simplified modelling. Their internal workings are so intricate that the only way to know an output is to run the model, executing every computational step—which complicates reliability analysis and optimisation. A single prompt tweak can disrupt workflows, making iterative testing essential yet time-consuming.
Nonlinearity and high dimensionality: Operating in high-dimensional vector spaces, with millions of input elements, LLMs process data in nonlinear ways. This means outputs are sensitive to minor changes. Under these conditions, testing and optimising the performance of single components of complex workflows, such as text-to-SQL queries, becomes a monumental task.
Blurring code and data: Traditional systems separate code and data; LLMs mix the two by embedding instructions within prompts. This blurring of ever-growing data sets with instructions introduces variability that is difficult to model and predict, raising testing, reliability, and security issues and compounding the dimensionality problem described above.
Stochastic behaviour: LLMs may produce different outputs for the same input because of sampling during generation. This randomness is an asset for creativity but a hurdle for repeatability, and it contrasts with the deterministic outputs expected from traditional control and software systems. Lowering sampling parameters such as temperature reduces randomness, but often degrades output quality.
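The trade-off between repeatability and variety comes down to how the next token is sampled. The sketch below uses made-up logits over three candidate tokens to show why temperature zero (greedy decoding) is fully repeatable while higher temperatures are not.

```python
import math
import random

def sample(logits, temperature):
    """Softmax sampling: higher temperature flattens the distribution."""
    if temperature == 0:  # greedy decoding: always pick the top token
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(l - m) for l in scaled]
    return random.choices(range(len(logits)), weights=weights)[0]

logits = [2.0, 1.0, 0.1]                              # hypothetical model scores
greedy = [sample(logits, 0) for _ in range(5)]        # same input, same output
creative = {sample(logits, 1.5) for _ in range(200)}  # same input, varied outputs
```

Greedy decoding returns the top-scoring token every time; at temperature 1.5 the lower-scoring tokens are sampled too, which is exactly the creativity-versus-repeatability tension described above.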
Feedback loop instability: Classic feedback loops thrive on predictable outputs. LLM-based systems, however, resist straightforward corrections. Because no simplified model exists—the computational irreducibility described above—designing feedback mechanisms that consistently guide the system toward desired outcomes is particularly difficult. This, in turn, complicates system scalability, stability and adaptability.
Nine essential strategies for building resilient agentic AI
As enterprises aim to harness the power of LLMs to boost productivity and innovation, they must confront the challenges of agentic AI. At GlobalLogic, we proffer nine practical strategies to overcome these hurdles, equipping organisations for the future of agentic AI:
Use domain-specific languages (DSLs)
Human language is not the best medium for an agentic system’s inner workings. DSLs minimise dimensionality while preserving the richness of the actions and environments. Structured formats like JSON are the simplest way to ensure reliability, and training LLMs on specialised DSLs enhances accuracy without sacrificing performance.
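A minimal sketch of the idea: constrain agent output to a tiny JSON-based action vocabulary and reject anything outside it. The action names and schema here are hypothetical examples, not a standard.

```python
import json

# A hypothetical miniature DSL for agent actions: a small, closed vocabulary
# keeps the action space low-dimensional and machine-checkable.
ALLOWED_ACTIONS = {"query_db", "send_email", "escalate"}

def validate_action(raw: str) -> dict:
    """Parse an agent's structured output and validate it against the DSL."""
    action = json.loads(raw)
    if action.get("action") not in ALLOWED_ACTIONS:
        raise ValueError(f"unknown action: {action.get('action')!r}")
    if not isinstance(action.get("params"), dict):
        raise ValueError("params must be a JSON object")
    return action

ok = validate_action('{"action": "query_db", "params": {"table": "orders"}}')
```

Anything the model emits outside this constrained format fails fast at the boundary, instead of propagating ambiguity downstream.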
Reserve natural language for human interactions
LLMs are good at understanding natural language and intent. They can translate conversational requests into structured data that systems can understand. For agentic AI-to-AI communication, it’s more efficient and reliable for them to exchange this structured data directly, avoiding the complexity and ambiguity of natural language.
Fuse traditional AI/ML with GenAI
LLMs exhibit an improving, if somewhat overestimated, ability to undertake complex reasoning. It is, therefore, prudent to combine deterministic machine learning with GenAI’s stochastic capabilities. This hybrid approach both grounds agentic AI in logic and leverages its generative strengths.
Prioritise safety and reliability
The lack of predictability rightly raises concerns about the safety of agentic AI systems. Prioritising robust safety protocols and implementing fail-safes is crucial. To enhance reliability, the tolerance for incorrect decisions should be nuanced and defined per use case rather than applied uniformly.
Ethical alignment by design
LLMs in their current form lack comprehensive ways to interpret and understand human values. Incorporating guardrails at the input and output levels ensures ethical alignment without jeopardising the system’s efficiency or performance.
Introduce effective feedback loops
Robust agentic AI systems rely on adaptive control mechanisms that can adjust to the dynamic behaviour of LLMs. LLM-as-a-judge is one proven way to continuously validate and refine agentic AI workflows, ensuring reliability over time.
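The pattern can be sketched as a generate–judge–revise loop. Here `generate` and `judge` are hypothetical stubs standing in for calls to real models; the point is the control structure, in which a second model’s verdict feeds back into the next attempt.

```python
def generate(task: str, feedback: str = "") -> str:
    """Stub for a generator model; incorporates judge feedback when present."""
    return f"draft answer for {task!r}" + (" (revised)" if feedback else "")

def judge(task: str, answer: str) -> tuple[bool, str]:
    """Stub for a judge model scoring the answer against acceptance criteria."""
    return ("revised" in answer), "be more specific"

def run_with_judge(task: str, max_rounds: int = 3) -> str:
    """Feedback loop: regenerate until the judge accepts or rounds run out."""
    feedback = ""
    for _ in range(max_rounds):
        answer = generate(task, feedback)
        ok, feedback = judge(task, answer)
        if ok:
            return answer
    return answer  # fall back to the last attempt (or escalate to a human)

result = run_with_judge("summarise Q3 sales")
```

This is the classic controller shape from control theory—compare output to a target, feed the error back—with an LLM playing the role of the sensor.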
Implement adaptive human-in-the-loop (HITL)
Critical decisions require human validation. HITL frameworks ensure that when inputs fall outside expected ranges, humans can intervene quickly and effectively.
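One simple adaptive HITL mechanism routes on confidence: decisions inside the expected range proceed automatically, while the rest are held for a person. The threshold, queue, and decision strings below are illustrative assumptions.

```python
# Decisions held for human validation before execution.
REVIEW_QUEUE = []

def route(decision: str, confidence: float, threshold: float = 0.8) -> str:
    """Auto-approve high-confidence decisions; queue the rest for a human."""
    if confidence >= threshold:
        return f"auto-approved: {decision}"
    REVIEW_QUEUE.append(decision)  # a person intervenes before anything runs
    return f"held for human review: {decision}"

a = route("refund £25", confidence=0.95)
b = route("refund £9,500", confidence=0.41)
```

Making the threshold itself adjustable per use case is what turns plain HITL into adaptive HITL.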
Build in observability and logging
Where HITL is not needed, comprehensive journaling of agentic AI decision-making enables debugging, quality improvement, and regulatory compliance. These logs support AI governance, addressing mandates like the EU’s AI Act.
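A minimal sketch of such a decision journal: each agent step becomes a structured, serialisable record. Field names here are illustrative, not a compliance schema.

```python
import json
import time

def log_decision(log: list, agent: str, action: str, rationale: str) -> None:
    """Append a structured record of one agent decision to the journal."""
    log.append({
        "timestamp": time.time(),
        "agent": agent,
        "action": action,
        "rationale": rationale,
    })

journal = []
log_decision(journal, "pricing-agent", "apply_discount", "loyalty tier matched")
record = json.dumps(journal[0])  # serialisable for long-term audit storage
```

Because every record is plain structured data, the same journal serves debugging during development and evidence-gathering for auditors later.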
Embrace modular architectures
Structuring systems into distinct agents with well-defined interfaces means each architecture component can perform specific tasks independently while communicating seamlessly with others. This modularity facilitates independent testing, fixing, and scaling of components, ensuring maintainability and resilience.
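The architectural idea can be sketched with a shared interface: every agent speaks the same narrow contract, so each can be tested or swapped independently. The agent names and message fields are hypothetical.

```python
from typing import Protocol

class Agent(Protocol):
    """Well-defined interface: every agent turns one message into another."""
    def handle(self, message: dict) -> dict: ...

class RetrieverAgent:
    def handle(self, message: dict) -> dict:
        return {"docs": [f"doc about {message['query']}"]}

class SummariserAgent:
    def handle(self, message: dict) -> dict:
        return {"summary": "; ".join(message["docs"])}

def pipeline(agents: list[Agent], message: dict) -> dict:
    for agent in agents:  # agents communicate only via structured messages
        message = agent.handle(message)
    return message

out = pipeline([RetrieverAgent(), SummariserAgent()], {"query": "churn risk"})
```

Replacing the retriever—or unit-testing the summariser in isolation—requires no change to anything else, which is precisely the maintainability this strategy is after.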
The new frontier of agentic AI demands a collaborative effort between disciplines—melding the rigour of control theory and the best practices of software engineering with the advances of artificial intelligence and machine learning. By understanding the limitations and potentials, we can design agentic systems that are not only powerful but also safe, reliable, and aligned with human values.
Bridging control theory and agentic AI
The successful flight of SpaceX’s Starship symbolises not just a triumph in aerospace engineering but also a convergence point where traditional control theory meets the emerging complexities of agentic AI. The journey to the stars teaches us that progress is often born from navigating the unknown.
As we stand at the cusp of integrating advanced AI into the fabric of our enterprises, we embrace both the challenges and the possibilities, charting a course toward a future where agentic AI systems propel us to new heights, both in space and here on Earth.
To learn more, download GlobalLogic’s ebook ‘Agentic AI’s Coming of Age: From Rocket Landings to Intelligent Enterprises: Understanding the Complexity of Agentic AI’
About Tim Hatton, Director of Intelligence Enterprise (UK & Ireland)
Tim is responsible for shaping GlobalLogic’s data & AI offerings for clients. He brings 30 years’ experience in data & digital technology leadership across a variety of industries and clients including British Airways, Lloyds Banking Group, News UK, and Halfords.
Tim comes to GlobalLogic after 9 years at AND Digital, where he created the data consulting practice and, latterly, designed and delivered the go-to-market strategy for AND’s data services. Prior to AND, he worked at Accenture for a few years and before that ran his own digital marketing agency for a decade.
Tim has also been involved in dot-com startups – none of which made him a millionaire, but all of which taught him a great deal about running a business. Tim is a regular public speaker on data & digital topics, and has been featured on radio and in magazines on topics as diverse as data governance, digital media formats and search engine optimisation. He has a BSc in Human Cybernetics from the University of Reading, which included the study of control systems theory.
About GlobalLogic
GlobalLogic (www.globallogic.com) is a leader in digital engineering. We help brands across the globe design and build innovative products, platforms, and digital experiences for the modern world. By integrating experience design, complex engineering, and data expertise—we help our clients imagine what’s possible and accelerate their transition into tomorrow’s digital businesses.
Headquartered in Silicon Valley, GlobalLogic operates design studios and engineering centers around the world, extending our deep expertise to customers in the Mobility, Communications, Financial Services, Healthcare and Life sciences, Manufacturing, Media and Entertainment, Semiconductor, and Technology industries.
GlobalLogic is a Hitachi Group Company operating under Hitachi, Ltd. (TSE: 6501), which contributes to a sustainable society with a higher quality of life by driving innovation through data and technology as the Social Innovation Business.