The Razr Ultra fumbles this key feature, but Google and Samsung can and should do it better


There are many things I like about the new Razr Ultra 2025, and one of the latest additions I was most excited about was the AI Key. In my mind, it would open up a world of possibilities, in addition to being an easy way to access the host of AI features Motorola has packed into its Moto AI suite. However, the implementation leaves a lot to be desired, and frankly, the AI Key often feels like an afterthought when it could easily have been Motorola’s answer to the Action Button.

Its faults become even more apparent as AI grows ever more prevalent. Motorola’s implementation doesn’t really hold a candle to Google’s or Samsung’s, but it’s a decent first step. With that said, I think the AI Key is a great idea when done right, and both Google and Samsung would be insane not to consider implementing something like it in future phones.

Moto AI limitations

Moto AI running on a Motorola Edge 2025

(Image credit: Nicholas Sutrich / Android Central)

Motorola is trying to put its own spin on AI, a strategy that includes its own Moto AI chatbot. You can think of it as a conversational, but somewhat less functional, Gemini or even Bixby. It can answer questions and respond in a fairly natural way, but it can’t really do much on the phone, which sort of defeats the purpose of having an AI assistant/chatbot. Moto AI is a decent offering, but it still pales in comparison to Google’s AI or Samsung’s Galaxy AI.

On the Razr Ultra 2025, you can customize the AI Key to trigger the Moto AI overlay with a long press. Unfortunately, that’s the only option for the long press, which feels like a glaring oversight. You can’t assign it to another digital assistant, despite Motorola loading the Razr Ultra 2025 with alternatives. It’s Moto AI or nothing.

The Razr Ultra 2025 AI key

(Image credit: Derrek Lee / Android Central)

Okay, fine, so I’ll use Moto AI. No big deal. The problem is that when I press the AI Key to bring up Moto AI, I then have to tap the mic button to actually talk to it, which is usually why I opened it in the first place. This differs from how Gemini works when it’s set to trigger from the power button or a corner swipe of the display: it automatically starts listening. Having to press a second button just to speak to Moto AI adds a seemingly unnecessary step and makes the whole interaction feel cumbersome.

This probably has something to do with Moto AI automatically analyzing what’s on your screen when you trigger it, which allows it to offer suggestions for what you might do next. However, I don’t see why it can’t do that and let me talk to it at the same time.

Moto AI running on a Motorola Edge 2025

(Image credit: Nicholas Sutrich / Android Central)

When I activate Gemini with a YouTube video open, for example, it knows I’m watching a video and surfaces options such as “Ask about video” or “Talk Live about video,” but it also starts listening right away, so I can simply start speaking. This alone makes Gemini far more accessible than Moto AI, and it’s why the “AI Key” should be expanded to cover more than just Motorola’s AI features.
