Managing AI agents as employees is the challenge of 2025, says Goldman Sachs CIO



This year, artificial intelligence will be dominated by the maturation of AI programs into corporate “workers” that can take over business processes and be managed much like employees, according to a year-outlook blog post from investment bank Goldman Sachs featuring its chief information officer, Marco Argenti.

Human and machine resources

“The capabilities of AI models to plan and execute complex, long-running tasks on humans’ behalf will begin to mature,” writes Argenti. “This will create the conditions for companies to eventually ‘employ’ and train AI workers to be part of hybrid teams of humans and AIs working together.”

Also: Autonomous businesses will be powered by AI agents


“There’s a great opportunity for capital to move towards the application layer, the toolset layer,” says Goldman Sachs CIO Marco Argenti. “I think we will see that shift happening, most likely as early as next year.”


Argenti predicts that corporate HR offices will have to manage “human and machine resources,” and there may even be AI “layoffs” as programs are replaced by more highly capable versions.

Argenti’s broad prediction echoes comments last week in the CES keynote by Nvidia CEO Jensen Huang. Huang said onstage that “in the future, these AI agents are essentially digital workforce that work beside your employees, and do things on your behalf.”

“The way you’ll bring these AI agents into your company is to onboard them,” said Huang. “In a lot of ways, the IT department of every company is going to be the HR department of AI agents in the future. Your IT department is going to become like AI agent HR.”

AI models will be like PhD graduates

Among Argenti’s other predictions is that the most capable AI models will function like PhD graduates: so-called expert AI systems with “industry-specific knowledge” in fields such as finance and medicine.

These advanced AI models will be the result of retrieval-augmented generation (RAG), the process of connecting AI models to external resources such as databases and API function calls, and of fine-tuning, the practice of training a model a second time, after its initial pre-training, on additional data specific to a domain.
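To make the RAG idea concrete, here is a minimal, self-contained sketch (not from the article, and deliberately simplified): relevant documents are retrieved for a query, using a toy keyword-overlap score in place of a real embedding search, and prepended as context to the prompt that would be sent to a language model. The document store and function names are illustrative assumptions.

```python
# Toy retrieval-augmented generation (RAG) sketch: retrieve relevant
# documents for a query, then build a context-augmented prompt.
# Keyword overlap stands in for a real vector/embedding search.

def score(query: str, doc: str) -> int:
    """Count query words that also appear in the document (toy relevance)."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents with the highest keyword overlap."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Augment the user's question with retrieved context."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

# Hypothetical domain documents a finance-specific assistant might index.
documents = [
    "Q3 revenue rose 12 percent on strong trading volumes.",
    "The firm's headquarters moved to a new campus in 2021.",
    "Trading revenue is reported quarterly in the earnings release.",
]

prompt = build_prompt("How did trading revenue change in Q3?", documents)
print(prompt)
```

In a production system the keyword scorer would be replaced by an embedding index, and the assembled prompt would be passed to a model; fine-tuning, by contrast, bakes the domain data into the model's weights rather than supplying it at query time.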

Also: AI agents might be the new workforce, but they still need a manager

Argenti references another development that might be right out of Huang’s keynote: robots training in “world models” that simulate the environment. “The intersection of LLMs and robotics will increasingly bring AI into, and enable it to experience, the physical world, which will help enable reasoning capabilities for AI,” he writes.

Argenti sees “responsible AI” growing in importance as a boardroom priority in 2025. And, in something of a repeat of last year’s predictions, he expects that the largest generative AI models — the “frontier” models of OpenAI and others — will become the province of only a handful of institutions with budgets large enough to cover their enormous training costs.

That is the “Formula One” version of AI, where the “engines” of AI are made by a handful of powerful providers. Everyone else will work on smaller-model development, Argenti predicts.




