# Rabbit in the headlights

> Apple’s research on large language models looks set to ruin any chances the Rabbit R1 may have.
>
> Last updated: 2024-05-08

You might have seen [Marques Brownlee’s review of the Rabbit R1](https://youtu.be/ddTV12hErTc?si=Fqr7ahpnUMpGOhak), but if you haven’t, I’ll break it down:

- On day 1, you can integrate with four apps: Spotify, Uber, DoorDash and Midjourney
- You can ask it questions and get it to describe things around you, but the interaction design problems with engineering prompts [I guessed would exist](/2024/01/22/universal-interfaces-and-semantic-agents/) do exist (it’s clunky)
- It runs the battery down super, super quick
- The Large Action Model, though a cool idea, will need tons of training data – therefore tons of users, tons of interactions, and tons of people being OK with it not performing well for a while

It’s not looking good. The week before last, [I thought about getting one to play with](https://bsky.app/profile/visitmy.website/post/3kraxcwbvql2p), but it’d be a waste of $200. I don’t use Spotify, Uber, DoorDash or Midjourney.

And it looks like Siri really will kill it. [The Verge reviewed Apple’s research papers on AI](https://www.theverge.com/2024/5/5/24147995/apple-siri-ai-research-chatbot-creativity) and found a few important details:

- Apple is researching how to compress large language models and use less power-hungry components, making on-device AI more efficient overall
- One paper looked at how an LLM could understand app UIs and whatever else is on your screen, helping you navigate apps and your phone

That’s a win for accessibility and better VoiceOver controls. But here’s where Siri catches the Rabbit in its headlights:

> A Siri that can understand what you want, paired with a device that can see and understand everything that’s happening on your display, is a phone that can literally use itself.
Apple wouldn’t need deep integrations with everything; it could simply run the apps and tap the right buttons automatically.

If Rabbit has [sold 100,000 units of the R1](https://www.statista.com/statistics/1452333/rabbit-r1-unit-sales/), that pales in comparison to the [~2 billion active iPhone devices](https://www.theverge.com/2023/2/2/23583501/apple-iphone-ipad-active-2-billion-devices-q1-2023) and [~3 billion active Android devices](https://www.theverge.com/2021/5/18/22440813/android-devices-active-number-smartphones-google-2021) globally. How can they begin to compete?

Which brings me to another conclusion for [monetising the Rabbit R1](/2024/02/02/monetising-the-rabbit-r1/): acquisition or deep integration. If Rabbit’s Large Action Model could compete well enough with Apple’s Ferret research, they might be in prime position to be acquired. Surely that’s the goal; I don’t see how else they can compete. A Siri you can actually use will run them over, surely?

8 May 2024 · [Artificial intelligence](/tag/artificial-intelligence)

Related posts: [Monetising the Rabbit R1](/2024/02/02/monetising-the-rabbit-r1/), [Universal interfaces and Semantic Agents](/2024/01/22/universal-interfaces-and-semantic-agents/), [This idea of AI surpassing human ability is silly](/2023/03/25/this-idea-of-ai-surpassing-human-ability-is-silly/)