Glimpses of Tomorrow - Feb 2024
Hey reader,
I started this newsletter to share my perspective on where artificial intelligence and human-computer interaction are headed in the near future: not a cloud-based dystopia of omniscient chatbots, but an enchanted world of embodied intelligence that brings our surroundings to life.
But don’t just take my word for it! There are products and concepts that give early hints of what the future has in store. In addition to my long-form writing, I’ll be sharing an occasional list of the intriguing things I’ve found.
Here’s my selection for the last couple of months. I hope you have fun exploring.
— Daniel Situnayake
AI binoculars
As reported by designboom, Swarovski and Marc Newson have created a pair of smart binoculars that can identify birds and mammals using on-device AI.
I found these a wonderful demonstration of intelligence embodied in tools: a single-purpose device imbued with single-purpose AI, unencumbered by an Internet connection. These binoculars will still work in a hundred years’ time, which would be impossible if they relied on cloud AI.
At a current price of $4,799, they are still more of a luxury concept than a mass-market product—but the underlying technology is simple and well-proven, so we can expect it to materialize in less extravagant forms.
Mosquito monitoring
This recent paper by researchers at the University of Castilla-La Mancha, Spain, introduces a system for monitoring mosquito populations using embedded AI.
While I’m a big proponent of disconnected systems, the paper shows how on-device AI pairs beautifully with the Internet of Things. Autonomous devices use AI to count mosquito eggs, with the resulting statistics aggregated via low-power wireless networks. The concept can work in remote locations, where connectivity is poor—helping authorities respond immediately to public health threats.
AI interactions with IDEO
My friend Savannah Kunovsky is Global Head of Emerging Technology at IDEO, and she’s one of the people who really gets it. Savannah’s recent article, 7 AI interactions we’d like to see in the world, shares her team’s lovely concepts for AI-enabled human-computer interaction. It contains some of my embodied AI favorites, like a never-ending storybook.
I’m excited about Savannah’s work, and I hope this piece inspires others to think about the potential of artificial intelligence in interaction design. Not every experience needs to be a chatbot, and the form of a product really matters.
Rabbit R1
If you’re going to build a chatbot, at least give it a body. The gorgeous Rabbit R1 is a cloud-based LLM channeled through a custom device designed by Teenage Engineering, famed for incredible audio hardware I’ve always dreamed of owning.
Designed as a constant companion, the R1 supplements your cellphone with access to a cloud AI service that can automate online chores and help you look up information. This would be a million times better if it happened on-device, but it will take a couple of years for the tech to catch up. While we wait, I’ve ordered a Rabbit—mostly for the dazzling design.
Poem/1
There’s a theme emerging with these names. I suppose the “1” must signify a new beginning: the first crop of hardware in the age of AI. The Poem/1 is a clock that tells the time in whimsical rhymes, like a court jester for the home office. I’ve ordered one of these, too.
I love the concept here: given the infinite possibilities of machine intelligence, the decadent becomes trivial. Each clock gets a unique poem, once per minute. It’s an eye-catching token of the age we inhabit, where creativity is available behind an API. It encodes the limitations of cloud AI: someday we’ll be writing poetry with a local CPU, but this clock only ticks while its servers are online. Buy one while it lasts.
Vision Pro hits the streets
The Apple Vision Pro (what a name) has inspired a mixture of anxiety and delight, and nothing captures that spirit better than this video from Casey Neistat. Neistat captures the lunacy of chasing digital butterflies in a crowded coffee shop, his rapturous experience cut with scenes of bemused patrons.
Inside the hyper-corporate virtual world of Krispy Kreme’s New York flagship, where the layering of digital and physical reality feels entirely natural, Neistat experiences a personal revelation: “Holy shit. This is it. This is the future of computing that everyone’s been promising for, like, the last fifteen years”.
My own takeaway from Neistat’s riotous video: augmented reality influencers will definitely be a thing.
Liquid City’s Wisp
Wisp is “an ornament and companion for Apple Vision Pro”. It’s a mossy terrarium that hovers gently above your desk, and a friendly floating orb that you can chase around with a hand.
As a character-based experience—an interactive digital creature—I love what Wisp suggests. Our computers are panes of glass, but we could make them anything: animals, people, or spirits of the woods. The Vision Pro combines the physical and the virtual; it’s an edge AI device that draws a layer on top of reality. It gives us a preview of the world we might build.
Beautiful prosthetics
The Economist suggests that prosthetic limbs need not look like real ones, exploring the work of several designers who are replacing imitations of human limbs with dynamic alternatives.
The work of artist Sophie de Oliveira Barata and engineer Dani Clode, these novel appendages suggest a different way of adopting technology: in full integration with our bodies. Our tools are extensions of our hands, and our hands manifest thoughts in the physical world. After centuries of dualism, will the future bring us closer to the matter we are made from?
Multimodal glasses
It’s the top trend of early 2024: AI products that are mostly thin clients, depending on the cloud for their brains.
The Frame AI glasses from Brilliant Labs are a fancy example. This concept hardware packs AR screens and sensors into hipster glasses frames, surfacing contextual information on demand—but it depends on OpenAI and Perplexity for its core capabilities.
There’s some irony here. Apple Vision Pro is an edge compute experience that mediates reality: the computation happens on-device, but the user sees the world through a camera. The Frame AI glasses let you see with your eyes, but your conceptual world is routed through the cloud.
Would you outsource your eyes to a camera, or your memory to the web? That’s the choice we face today.
With love,
Daniel
p.s. I’d love to hear about your own discoveries that continue this theme. Share them in the comments and I’ll feature my highlights next time!