What's your role in the future of AI?
Achraf Ait Sidi Hammou
My friend and I have been completely blown away by ChatGPT over the past couple of days. As an insider (I've been doing AI research for the past two-plus years), I'm both excited and a bit concerned. Seeing the power of this new technology can't leave you unmoved. It makes almost any knowledge task feel trivial:
- marketing? I can do it.
- sales? for sure!
- legal? I’ll take care of that too.
- medical? why not 🤷‍♂️
- education? ez!
I'm mostly surprised by how people fail to think beyond the surface-level AI-writer use case. Sure, you can write better cold emails in seconds, but if you really think about it, email doesn't even make sense anymore. Why would you still write emails when you can let your AIs talk to each other and figure things out on their own?
How about content? Why would you generate content when people are going to ask their AI for information directly. Your generated content is just synthetic data to train the next version of the model.
This new ChatGPT demo makes me feel uncomfortable because it’s highlighting how limited our imagination is when it comes to rethinking how society works if an AI were to replace every knowledge worker. Up until today, it was a science-fiction exercise. But now, it’s coming faster than we can prepare for.
The most concerning aspect of this new society is how relationships as we know them might entirely disappear. We've already seen that even though social media was meant to connect us, it actually made relationships shallower. But with apps like Character.AI it gets even worse. Why would you chat with a normie when you can have a direct conversation with Elon Musk, Steve Jobs, Einstein, or, heck, even your favorite manga character?
Studies show that the number of people we can rely on when life gets tough is shrinking rapidly, and something like Character.AI gives us a way to quit human connection cold turkey, overnight.
On the surface it might sound great: anybody can now have access to brilliant scientists or psychologists and talk about anything without social pressure. But when I hear people like Chamath suggesting that Meta should pour its R&D into figuring out emotions, it gets creepy real fast. We've already seen how a simple feed algorithm can manipulate political opinions, so what will be the limit of an AI that can understand and manipulate emotions?
The question I’m asking myself is, where do I want to stand in this new paradigm?
Do I want to be a provider, a contributor, a user, or a spectator?
Being a provider means picking a fight with the giants: OpenAI, Google, Microsoft, AWS, Meta. It's fighting a losing battle.
But I definitely can’t stay passive. So what would it mean to be a contributor?
I think there are going to be several layers to this new ecosystem:
- Layer 1: providers — building the infrastructure, the bare metal.
- Layer 2: fine-tuners — building user-friendly interfaces on top of the infrastructure.
- Layer 3: user-facing — building the final brick; consumers of the models, but still in the loop: sanitizing outputs, combining models, collecting data, etc.
To me, Layer 2 is where AI practitioners should focus. Unless you work at one of the big providers, you'll never have enough resources (brains, time, and GPUs) to keep up with the latest models. But if you know enough about the underlying technology and are interested in specific verticals like health, law, finance, or e-commerce, there are infinite ways to contribute to the stack by fine-tuning models and building user-friendly interfaces for non-initiated builders and entrepreneurs.
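To make the layers concrete, here is a minimal sketch of what a Layer-2/3 wrapper around a base model might look like. Everything here is hypothetical and illustrative: `VerticalAssistant`, the stub model, and the email-redacting regex are assumptions I'm making up, not any real product or API. The point is only the shape of the idea: consume a model, sanitize its output, and collect data for later fine-tuning.

```python
import re

# Crude sanitizer for the sketch: matches things that look like emails.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

class VerticalAssistant:
    """Hypothetical Layer-2/3 wrapper for one vertical.

    Stays 'in the loop' around a base model by sanitizing outputs
    and collecting prompt/response pairs for later fine-tuning.
    """

    def __init__(self, model):
        self.model = model    # any callable: prompt -> text
        self.dataset = []     # collected (prompt, response) pairs

    def ask(self, prompt):
        raw = self.model(prompt)
        clean = EMAIL_RE.sub("[redacted]", raw)  # sanitize the output
        self.dataset.append((prompt, clean))     # collect training data
        return clean

# Stub standing in for a real LLM call:
def stub_model(prompt):
    return "Contact bob@example.com for details."

assistant = VerticalAssistant(stub_model)
print(assistant.ask("Who handles legal?"))
# → "Contact [redacted] for details."
```

In a real product the stub would be a hosted model behind an API, the sanitizer would be vertical-specific (PII rules for health, privilege rules for law), and the collected dataset is exactly what feeds the next fine-tune.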
With this new paradigm, 99% of us should focus on building the next Vercel or the next Shopify.