Tomorrow’s application users may look quite different from the ones we know today, and not just because of more Gen Zers: many may actually be autonomous AI agents.
That’s the word from a new set of predictions for the decade ahead issued by Accenture, which highlights how our future is being shaped by AI-powered autonomy. By 2030, agents — not people — will be the “primary users of most enterprises’ internal digital systems,” the study’s co-authors state. By 2032, “interacting with agents surpasses apps in average consumer time spent on smart devices.”
Also: In a machine-led economy, relational intelligence is key to success
This heralds a moment of transition, what the report’s primary author, Accenture CTO Karthik Narain, calls the Binary Big Bang. “When foundation models cracked the natural language barrier,” writes Narain, “they kickstarted a shift in our technology systems: how we design them, use them, and how they operate.”
These new developments are “pushing the limits of software and programming, multiplying companies’ digital output, and laying the foundation for cognitive digital brains that infuse AI deeply into enterprises’ DNA,” Narain adds.
The emerging technology development landscape will focus on three areas, he states: agentic systems, digital core, and generative user interfaces. These will be deployed on highly composable and modular building blocks.
Agentic systems
Agentic systems currently “show great promise with small pieces of code, and given documentation and examples, can call functions and APIs with high accuracy,” he reports. “They can create functions and APIs to use later. Companies are rapidly integrating these capabilities into new models to accelerate engineering velocity.”
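The function-and-API calling the report describes follows a now-common pattern: the model emits a structured request, and the agent dispatches it to a registered function. Below is a minimal sketch of that loop; the business function, the tool registry, and the stubbed-out model are all illustrative, and a real system would call an LLM API instead of `fake_model`.

```python
import json

def get_order_status(order_id: str) -> dict:
    """A hypothetical business function exposed to the agent."""
    return {"order_id": order_id, "status": "shipped"}

# Registry of functions the agent is allowed to call.
TOOLS = {"get_order_status": get_order_status}

def fake_model(prompt: str) -> str:
    # Stand-in for an LLM: emits a structured tool call as JSON.
    return json.dumps({"tool": "get_order_status",
                       "arguments": {"order_id": "A-123"}})

def run_agent(prompt: str) -> dict:
    """One step of the agent loop: ask the model, dispatch its tool call."""
    call = json.loads(fake_model(prompt))
    tool = TOOLS[call["tool"]]        # look up the requested function
    return tool(**call["arguments"])  # execute with model-chosen arguments

print(run_agent("Where is order A-123?"))
```

Restricting the agent to a registry like `TOOLS`, rather than letting it execute arbitrary code, is also the simplest form of the guardrails discussed later in the report.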
The Accenture team adds this note:
“One of the leading agentic systems for software engineering today is Anthropic’s Claude 3.5 Sonnet. When tested on SWE-Bench Verified, a software engineering benchmark of real-world issues from GitHub, it achieved a remarkable 49% resolved rate. In 2023, agents had a rate of less than 5%.”
Digital core
The digital core is the technological architecture and infrastructure that runs the AI-powered enterprise. Agents will rely on a digital core that enables them to “connect data sources with analytical platforms that can use that data to drive decision-making and useful actions.” Today’s agentic systems can’t build and maintain the entire digital core — “but they’re tackling pieces of it,” Narain points out.
Also: Is prompt engineering a ‘fad’ hindering AI progress?
About half of the executives responding to Accenture’s survey (48%) expect that agents will soon be able to upgrade and modernize functions and integrations. Another 46% said agents will soon be able to assure the quality of digital functions and systems, and 45% anticipate agents accessing functions from internal systems.
Accessing functions from third-party systems is still a ways off, however — only 29% see this on the near horizon. Only 38% see their agents capable of accessing data from across the organization.
Generative UI
Another interesting development Narain and his co-authors see emerging with the rise of AI agents is generative UI, which involves leveraging AI techniques to generate highly personalized user interfaces. “For decades, the high cost of software development and the low cost of software distribution have driven the idea of creating a single UI that must work for every user. But now, as agentic systems advance and begin to take more actions on our behalf in the digital world, they’re driving a new software paradigm where cheaper code and language-first interfaces make dynamically generated, custom UI components increasingly feasible.”
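One way to picture a generative UI is an agent assembling a per-user layout from a catalog of components instead of serving one fixed interface. The sketch below makes that concrete under assumed names: the component catalog, the user-context fields, and the selection rules are all hypothetical, and in practice the selection logic would be driven by a model rather than hand-written conditionals.

```python
# A small catalog of reusable UI components the agent can compose from.
COMPONENTS = {
    "chart": {"type": "chart", "label": "Spending trends"},
    "table": {"type": "table", "label": "Recent transactions"},
    "form":  {"type": "form",  "label": "Quick transfer"},
}

def generate_ui(user_context: dict) -> list[dict]:
    """Pick and order components based on what this user is doing now."""
    layout = []
    if user_context.get("goal") == "analyze":
        layout.append(COMPONENTS["chart"])   # analysts see the chart first
    layout.append(COMPONENTS["table"])       # everyone gets the table
    if user_context.get("frequent_payee"):
        layout.append(COMPONENTS["form"])    # shortcut for frequent payers
    return layout

print(generate_ui({"goal": "analyze"}))
```

The point of the paradigm shift Narain describes is that `generate_ui` can run per request, so two users of the same app may see entirely different interfaces.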
To get started, the Accenture co-authors urge teams to experiment with agents internally. “A good way to begin is to create task-specific internal agents. After starting small, you can move modularly, over time expanding the functions and data your internal agents can access and using them to learn and prepare for building external-facing agents in the future.”
Also: Make room for RAG: How Gen AI’s balance of power is shifting
As autonomous agents proliferate, maintaining consistency and trust becomes crucial. “Companies will need to closely surveil them and ensure guardrails are in place,” the report continues. “What data are these systems accessing, who is directing them, what is the quality of their outputs, and more? Transparency here will help to increase employees’ trust in the systems. As you create a monitoring system, lay out a governance and technological roadmap for implementation. Also, develop communication and maintenance plans so your organization understands how the monitoring works and your guardrails keep up with advances.”
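The governance questions the report lists, what data an agent accesses, who directed it, and whether that access was permitted, can be enforced at the point of access. Here is a minimal sketch of that idea; the allow-list, the audit-log schema, and all identifiers are illustrative assumptions, not a prescribed design.

```python
import datetime

ALLOWED_SOURCES = {"crm", "inventory"}  # data sources the agent may read
AUDIT_LOG = []                          # transparency record for review

def guarded_access(agent_id: str, directed_by: str, source: str) -> str:
    """Log every access attempt, then enforce the allow-list guardrail."""
    entry = {
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "agent": agent_id,
        "directed_by": directed_by,   # who is directing the agent
        "source": source,             # what data it is accessing
        "allowed": source in ALLOWED_SOURCES,
    }
    AUDIT_LOG.append(entry)           # record even denied attempts
    if not entry["allowed"]:
        raise PermissionError(f"{agent_id} may not read {source}")
    return f"data from {source}"
```

Because denied attempts are logged before the exception is raised, the audit trail captures exactly the visibility the report calls for.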
Finally, to keep things grounded, Narain and his co-authors caution that “AI agents are amazing technical feats but are by no means perfect. They are computationally expensive, non-deterministic, and can lack explainability. But just as retrieval augmented generation (RAG) can ground an LLM, so can code and functions ground an agent, making them more explainable and increasing trust in them.”
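The "code and functions ground an agent" idea can be illustrated simply: rather than letting a non-deterministic model guess at a computation, the agent routes it to a deterministic function and records the call, making the step reproducible and explainable. This is a sketch of that principle with hypothetical names, not a quoted implementation.

```python
def compute_total(prices: list[float], tax_rate: float) -> float:
    """Deterministic business logic the agent can cite in its trace."""
    subtotal = sum(prices)
    return round(subtotal * (1 + tax_rate), 2)

def grounded_answer(prices: list[float], tax_rate: float):
    """Delegate the math to code and keep an explainable trace of it."""
    total = compute_total(prices, tax_rate)  # grounded, checkable step
    trace = f"called compute_total({prices}, {tax_rate}) -> {total}"
    return total, trace

total, trace = grounded_answer([19.99, 5.00], 0.08)
print(total, "|", trace)
```

The trace string plays the same role citations play in RAG: anyone auditing the agent can see which function produced the number and re-run it.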