The Mainstream AI Vision & Its Discontents
The vision of AI that we hear from a lot of big AI companies and people in Silicon Valley focuses on AI automating everything as fast as possible at homogenised scale, creating cubic-kilometre data centres that are copies of the same thing and probably running the same thing. People even talk excitedly about the idea that we'll all be unemployed and won't have work, and AI will do everything for us.
I've noticed that a lot of people feel very uneasy about this, and I think I understand why. There is this vision that we can simply not be needed for work and we all get UBI. One of the challenges there is that this only works if you assume the political system is benevolent and actually working for the public benefit, and that is a big assumption.
And also, I think there's something deep in human psychology that we don't just want to be consumers. We don't just want to constantly receive things and watch things and enjoy things. We want to actually have agency in the world. We want to have a role. We want the actions that we take to have meaning, to have impact on the people and the things that we care about.
AI as a "Bicycle for the Mind"
The alternative is AI that maximises human–AI collaboration, the set of things that AI can do with us rather than completely separately from us. It's AI that is much more realistic. It's AI that is much more customised to the preferences people have, to the existing things that people are doing, to all of the interesting things that are going on in the world at the edges. It's fundamentally harder to scale, in the sense of scaling to $10 trillion within two years, but I think it's a model that most people would find much more agreeable.
One of the types of AI that I'm most interested in is AI that's really optimised for working with the user and basically trying to empower human–AI collaborations as much as possible — being a bicycle for the mind instead of being a car that you're supposed to just go to sleep inside.
Bounding Authority & Delegation
A lot of the time when we work with agents, we think of them as these little spirits that we give instructions to and that then run off and do their own thing. But if you give one of them your private key, even if you only tell it to pay for your coffee, it might get tricked by something on the internet.
Thinking about bounding authority is really important: knowing in what situations you want the agent to act by itself, in what situations you want human confirmation, what kinds of human confirmation, and possibly even what kinds of confirmation from an independent model.
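The bounded-authority idea above can be sketched as an explicit policy check that sits between an agent and its wallet. This is a minimal illustration, not a real agent API: every name here (`SpendPolicy`, `authorize`, the limits and payee list) is a hypothetical construction, assuming a setup where small in-scope payments are auto-approved, larger ones escalate to a human, and out-of-scope ones are refused outright.

```python
from dataclasses import dataclass

# Hypothetical delegation policy; all names and limits are illustrative.
@dataclass
class SpendPolicy:
    per_tx_limit: float       # auto-approve payments below this amount
    daily_limit: float        # hard cap the agent can never exceed
    allowed_payees: set       # payees the agent may pay at all

def authorize(policy: SpendPolicy, payee: str, amount: float,
              spent_today: float) -> str:
    """Return 'deny', 'allow', or 'ask_human' for a proposed payment."""
    if payee not in policy.allowed_payees:
        return "deny"        # outside delegated scope: refuse outright
    if spent_today + amount > policy.daily_limit:
        return "deny"        # hard cap: no escalation path past this
    if amount <= policy.per_tx_limit:
        return "allow"       # small and in scope: agent acts alone
    return "ask_human"       # in scope but large: require confirmation

policy = SpendPolicy(per_tx_limit=10.0, daily_limit=50.0,
                     allowed_payees={"coffee_shop"})
print(authorize(policy, "coffee_shop", 4.5, spent_today=0.0))   # allow
print(authorize(policy, "coffee_shop", 30.0, spent_today=0.0))  # ask_human
print(authorize(policy, "random_site", 4.5, spent_today=0.0))   # deny
```

The point of the structure is that the dangerous default (handing over the whole private key) is replaced by a narrow, inspectable interface, and the "ask a human" path is a first-class outcome rather than an afterthought.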
Privacy & "Shadow LLMs"
Money is one type of object where it's really natural to think about limited delegation. Another important one is privacy.
One type of AI civic tech we don't talk about enough is this: if you can do private compute over multiple people's information, you could imagine a sort of LLM shadow of yourself that gets put into a box, sees all kinds of secret information from other people and from the organisation, and is able to output a decision, just that one decision, based on what that secret information is, without leaking anything else.
By doing that, you actually solve one of the big problems with distributed decision-making, which is that the decision-makers can't know secrets because there are too many of them. Maybe with the right cryptography, now they can. And this can enable much higher-bandwidth interaction even in large groups of people.
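The "decision in a box" pattern above can be sketched as a function whose only output channel is a small, fixed set of verdicts. This is a toy stand-in, not real cryptography: in practice the function would run inside secure multi-party computation or a trusted enclave, and `sealed_decision`, the ballot format, and the majority rule are all hypothetical choices for illustration. The property being modelled is that the return type is so narrow that the individual secrets cannot leak through it.

```python
from enum import Enum

class Decision(Enum):
    APPROVE = "approve"
    REJECT = "reject"

def sealed_decision(secret_ballots: list) -> Decision:
    """Stand-in for a model evaluated under MPC or inside an enclave.

    It may read every secret ballot, but its only output is a Decision,
    so nothing about any individual ballot escapes in the return value.
    """
    votes_for = sum(1 for ballot in secret_ballots if ballot.endswith(":yes"))
    # Simple majority rule over the secret inputs.
    return Decision.APPROVE if votes_for * 2 > len(secret_ballots) else Decision.REJECT

# Each participant contributes a secret ballot; only the aggregate verdict emerges.
secrets = ["alice:yes", "bob:no", "carol:yes"]
print(sealed_decision(secrets).value)  # approve
```

In a real deployment the guarantee would come from the cryptographic protocol, not from Python's type system; the sketch only shows the interface shape: many secrets in, one narrow decision out.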
Community Translation
One of the ways I've found AI can be helpful is in enabling much deeper translation between different communities, communities that come up with things that are valuable to each other but have very different styles of speaking and comprehending. What I can do now is take something written in another community's style, put it into a bot, and have the bot explain it to me in the style that I understand. If I have questions, it can answer them. Suddenly I'm much better able to understand the algorithms they're coming up with, and I can skip straight to the things I consider important.
I think we can do a lot more to make AI that is really well designed for taking the insights, work, and digital artifacts that one community generates and making them much more useful to other people and communities, who can then interact with them on their own terms. To me, that's one of the more positive things coming out of all this technology.
Resisting AI Monoculture
One of the challenges of AI, the way it's being developed and rolled out right now, is that it's very homogenising. You have a few models that everyone speaks through, and even the open-source models are generally trained by distilling the outputs of those models or their predecessors. Everyone uses the same data sets. There is a risk of flattening the world in a big way: tendencies embedded in a very small amount of AI code and AI data start exerting a huge amount of power over the whole world and pulling it in the same direction.
A Future of Diverse Collaboration
The kind of world that I want to see is one where we continue to see diversity, where people remain free to express different ideas in different directions while still cooperating with each other despite that. In that kind of world, instead of a few single things that everyone talks through, AI would look like much more hybrid AI–human work that actually feels like an expression of many different people's ideas of where they want themselves, their communities, or the world to go.
One thing I hope for is much more software that makes human–AI collaboration an easy and natural thing. Today we have this one monolithic model, or maybe at most twenty of them that really amount to three, and you can talk to it with a chatbot, or make it an agent that goes and does things. It's all basically the same kind of work in the same interface.
This could mean, for example, making it easy for you to continuously express what you want done differently while the AI is working. It could be an AI that updates in real time in response to your feedback. It could be much better tools for fine-tuning, so that fine-tuning becomes normal. Basically, it means taking the hard rails that everyone currently expects to be in exactly the same place and making them something much more adapted to different people.