Google is Using Anthropic’s Claude To Improve Its Gemini AI


Contractors working to improve Google’s Gemini AI are comparing its answers against outputs produced by Anthropic’s competitor model Claude, TechCrunch reported Tuesday, citing internal correspondence. From the report: Google would not say, when reached by TechCrunch for comment, if it had obtained permission for its use of Claude in testing against Gemini.

As tech companies race to build better AI models, the performance of these models is often evaluated against competitors, typically by running their own models through industry benchmarks rather than by having contractors painstakingly evaluate competitors' AI responses. The contractors working on Gemini who are tasked with rating the accuracy of the model's outputs must score each response they see according to multiple criteria, such as truthfulness and verbosity. The contractors are given up to 30 minutes per prompt to determine whose answer is better, Gemini's or Claude's, according to the correspondence seen by TechCrunch.


