AI ethicists are cautioning that the rise of AI may bring with it the commodification of even one’s motivations. From a report: Researchers from the University of Cambridge’s Leverhulme Center for the Future of Intelligence say — in a paper published Monday in the Harvard Data Science Review journal — the rise of generative AI, such as chatbots and virtual assistants, comes with the increasing opportunity for persuasive technologies to gain a strong foothold.
“Tremendous resources are being expended to position AI assistants in every area of life, which should raise the question of whose interests and purposes these so-called assistants are designed to serve,” Yaqub Chaudhary, a visiting scholar at the Center for the Future of Intelligence, said in a statement. When interacting even casually with AI chatbots — which can range from digital tutors to assistants to even romantic partners — users share intimate information that gives the technology access to personal “intentions” like psychological and behavioral data, the researcher said.
“What people say when conversing, how they say it, and the type of inferences that can be made in real-time as a result, are far more intimate than just records of online interactions,” Chaudhary added. In fact, AI is already subtly manipulating and influencing motivations by mimicking the way a user talks or anticipating the way they are likely to respond, the authors argue. Those conversations, as innocuous as they may seem, leave the door open for the technology to forecast and influence decisions before they are made. “We caution that AI tools are already being developed to elicit, infer, collect, record, understand, forecast, and ultimately manipulate and commodify human plans and purposes,” Chaudhary said.