You can't escape GenAI anymore

Note: Writing this down felt rather dystopian, as it’s one of those things that makes me think, “Yeah, this could very well be the case in the future.” If this shift happens, it raises more questions than it answers.


From sticks to spears, bows to guns, bonfires to ovens, and so on, humans have always been building new tools to improve their daily lives and make everything more efficient. We are still missing something, or someone, to fold our clothes, but even that is probably right around the corner.

Everything is geared towards being more productive and efficient, removing every obstacle in the way of achieving our goals. We have already experienced this by offloading simple but time-consuming tasks to LLM/GPT systems: modifying our files, structuring data, generating ideas, and so on.

If you have not been keeping up with the latest in generative AI, there are now models running locally, on-device, widely available to everyone, removing all possible friction and making, yet again, a thing or two much more efficient. It's worth pointing out that this is also a more private and secure way of using these models, as your data never has to leave the device.

Apple just recently announced Apple Intelligence, and Google is teasing Gemini Nano, which will be baked right into the Chrome browser and their devices.

While some of these AI features are innovative, some seem implemented simply because the technology allows it, without clear benefits to the user. I think it is important to distinguish between technological capabilities and genuine usefulness of a feature.

Smaller models are not that capable yet, and they are prone to errors, but for some tasks they work wonderfully. We can only anticipate that these, too, will improve over time and fulfill much more complex tasks than just summarizing. On-device/local models are the pathway to having our very own personal assistants in our pockets. This opens up many new possibilities and challenges for service providers, as well as for legislation.

A couple of examples below run completely offline using the Dev version of Google Chrome. The implementation is only a few lines of code.
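For context, the examples rely on Chrome's experimental built-in Prompt API, available behind flags in the Dev/Canary channels. The API surface is still in flux and has changed between releases, so the sketch below is illustrative only; names like `window.ai.createTextSession` reflect the Dev-channel API at the time of writing and may differ in your build.

```javascript
// Minimal sketch of prompting Chrome's built-in on-device model.
// Experimental API: treat these names as illustrative, not stable.
async function summarizeOnDevice(text) {
  // Feature-detect: the API only exists in supporting Chrome builds.
  if (typeof window === "undefined" || !("ai" in window)) {
    return null; // on-device model not available in this environment
  }
  const session = await window.ai.createTextSession();
  const summary = await session.prompt(`Summarize in one sentence: ${text}`);
  session.destroy();
  return summary;
}
```

The feature detection matters in practice: since the model ships only with specific Chrome builds, any page using this needs a graceful fallback for every other browser.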




First steps in AI-to-AI communication

As this technology evolves, we are witnessing what could be the early stages of AI-to-AI communication. This development raises intriguing questions about the future of service interactions and how these are built.

One possibility we might face soon is the need to build services that are accessible not just to human customers, but also to customers' own personal AI assistants.

If we start designing services with virtual agents in mind, we could end up creating systems where AI does the heavy lifting on both ends of the interaction.

Imagine virtual assistants negotiating with company chatbots, or troubleshooting complex issues on your behalf without human intervention.


Consider how this might change our approach to building new services:

  • Navigation: How do we create navigation systems that both humans and AI can efficiently use?
  • Information Architecture: What are the best ways to structure information so that it's easy for humans and assistants to read?
  • Requests: Assuming a system can tell when the requester is not human, how do we handle requests and queries that come from an AI agent rather than a person?
  • Protocols: What rules should be in place for "AI-to-AI" communication? When and why should we allow it, and when should we not?
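To make the "Requests" question above concrete, here is a purely hypothetical sketch of a handler that serves compact, structured data to AI agents and a rendered page to humans. Note that no standard exists today for identifying agent traffic; the `x-requester: ai-agent` header and its value are invented for this illustration.

```javascript
// Hypothetical content negotiation between human and AI-agent requesters.
// The "x-requester: ai-agent" header is invented for this sketch; there
// is no such standard today.
function renderProduct(product, headers = {}) {
  const requester = (headers["x-requester"] || "").toLowerCase();
  if (requester === "ai-agent") {
    // Agents get compact, machine-readable data instead of a page.
    return JSON.stringify({
      name: product.name,
      price: product.price,
      inStock: product.stock > 0,
    });
  }
  // Humans get the usual marked-up page.
  return `<h1>${product.name}</h1><p>${product.price} €, ${product.stock} in stock</p>`;
}
```

Even this toy version surfaces the hard part: the same information now has two representations that must be kept in sync, and the service has to decide whether to trust the requester's self-declared identity at all.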

Services built purely for virtual agents may sound strange, but the idea starts to make much more sense when you consider how accessible and far-reaching these services could soon become, and how customers might want to use their own personal assistants to interact with service providers.

This fast and somewhat frictionless access to virtual assistant interactions could change how we approach building services, and it has huge implications. There is a clear gain in efficiency, but on the other side of the coin lies a set of new problems and challenges to be solved.

... or this will all end up with us giving a weekly allowance to our own assistants, and then you start getting random stuff from Temu 🙈



Thanks for making it to the very bottom 👋

I am creating these somewhat interactive blog posts every once in a while to practice and develop my frontend and design skills.