Artificial intelligence is an emerging power. Who gets to control it?
The dominant narrative of technology discourse provides cartoon depictions of several groups, then asks us to pick a side:
Executives, who unleash innovation but seek personal power.
Investors, who push for mega profits and don’t care about the consequences.
Academics, who call for public safety while gripped by apocalyptic visions.
And governments, who demand all the benefits with none of the drawbacks—and some killer drones to sweeten the deal.
Our choice, it seems, is between four groups of elites, each with ulterior motives and none with the public interest at heart.
The reality is not so stark. These groups are composed of human beings, as kind and intelligent as anybody else. But each group has its own biases, and it’s inevitable that the interests of any small group will differ from those of the public.
In a culture that aspires to be democratic, our goal should be for power to belong to the people. Our politicians like to think themselves the avatars of public good, but in truth they are shaped by other forces: their need to stay elected, their personal ambitions, and the ideologies of their generous donors.
In reality, democratic government is a messy proxy for direct public action. A democratic state is a social contract: we, the public, grant you temporary power—but we reserve the right to take it away. It’s not practical for affairs of state to be conducted by a mob, so we elect representatives to conduct them on our behalf. But in a healthy democracy, true power remains with the public.
Democracy starts with workers, and it always has. In every industry, the demands of organized workers help build and shape the world in which we live. When it comes to technology, workers have insight and experience that is essential to consider. Their opinions reflect both the impact of technologies and the wishes of a large group of well-informed people.
Film and television workers have felt the direct impact of technology: their income has been disrupted by the business model of streaming, and their craft is potentially threatened by generative AI. Collectively, these workers have a unique understanding of the benefits and risks of applied technology and its impact on the long-term sustainability of the industry they adore.
The recent industrial action by writers and actors has brought that insight center stage: it has forced studio executives to incorporate the perspectives of workers into their decisions. With luck, this pooling of information will lead to a healthier and more successful film industry that can adapt to technology without being overwhelmed by it. This is one huge benefit of liberal democracy: with no single group in charge, our decisions are informed by collective wisdom. Our insights are pooled through our struggle.
There’s no group with more insight about the risk and potential of AI than the people who build and use it. Technology workers, along with the end users of AI products, have precious information about the way that AI works and the impact it may have on their domains of expertise. If our society is to make the best decisions, it needs to factor in this knowledge.
But tech workers are funny animals. Many knowledge workers have professional organizations (think the American Medical Association, the American Bar Association, or the Institute of Electrical and Electronics Engineers), allowing them to speak with a common voice. But very few tech workers are part of such a group. And the competitive job market of the past decade has reduced the financial motive for collective bargaining.
With no overt collective power, most technology workers are left out of the conversation about how their work should be applied. Entrepreneurs, investors, academics, and governments each have a seat at the table—but there’s no place set for the people who build things.
As we decide how to use our new capabilities, it is a grave mistake to exclude the insights of the workers who know the most about the technologies in question. Workers are not automatically right, but their understanding should be reflected in the ongoing discussion. As a society, our goal should be to make smart and informed decisions about the way our world works. We need all the insights we can get.
It’s been a dramatic week at OpenAI. Sam Altman, the CEO, was removed from his role by a small and unrepresentative board. The ensuing battle between executives, investors, and board members has been cited as an example of how the vast power of AI¹ lies in the hands of a few.
I personally observed a different story. In the end, the outcome was not decided by a few board members or investors. Instead, the collective action of more than 700 employees—nearly the entire company staff—forced the board’s hand and led to a resolution. The result is potentially the best of all worlds: a dynamic leadership balanced by a rebuilt and representative board, demanding and engaged investors, and a passionate group of organized workers—regulated by a responsible government.
However things turn out for OpenAI, the organized action of the company’s employees is an inspiration. Denied a role in the conversation, they came to recognize their power and responsibility and then stepped up to the plate to make their wishes known. Along with their demands they brought crucial information: their deep knowledge of the technology and organization became part of the decision-making process, to the benefit of all.
Collective action isn’t all about improving working conditions. It’s about pooling insight and information in ways that lead to better decisions and more effective organizations. Organized workers are an important balance to the other interest groups—such as political parties, business elites, and religious communities—that wrestle for power in our societies. Each worker is an expert in one corner of the economy, and workers’ collective knowledge is crucial if we aim to make informed decisions.
Silicon Valley orthodoxy tilts against organized labor, and some companies will go to great lengths to discourage it. But this is deeply shortsighted. As the OpenAI story shows, employees can be a founder’s greatest ally. There is no law that pits workers inevitably against their leadership: instead, there’s an interplay of perspectives that forms a consensus, invariably enriched by the pooling of information. Few founders would claim to be as smart as the collective intellect of their employees.
Many highly trained workers, such as doctors, lawyers, and engineers, find professional organizations a valuable source of best practices, ethical standards, and collective representation. It is sensible for leadership to make the voice of workers an explicit part of any conversation, rather than attempting to infer their needs indirectly. But doesn’t giving workers a voice mean yielding significant power?
Corporate leaders have many responsibilities, but perhaps the most important is to define a company culture that allows employees to thrive. This is a hard task, but not a mysterious one. I worked at Sam’s first company, Loopt, and there’s no secret to why his staff are loyal. He listens, he’s thoughtful, he hires well, and he goes out of his way to help people. That is all it takes to be loved, and to get employees on your side.
The AI era feels exciting, but it’s no different from any other in human history. As ever, multiple groups wrestle for control of society’s direction. In most eras, the voice of workers has been instrumental in choosing the right path. But today, the workers with the most insight are barely aware of their power. We are ignored, marginalized, and drowned out by others.
Nobody knows more about AI than the experts who build and use it. We deserve a voice, and a seat at the table, when the tools of our trade are discussed. Without even a professional organization, we have no way to speak with a collective voice. The future is at stake, and we have a responsibility to work together and make ourselves heard—or someone else will do the speaking for us.
¹ I personally believe the power of today’s AI is overstated and, like all human invention, its theoretical capabilities will be strongly moderated by the practical constraints of its implementation and deployment. I’m still waiting for my atomic car.
Totally agreed! There’s no way we can build effective AI if it is only built by an unrepresentative subset of people (and their data).
My friend Sara leads Cohere For AI, an open-source research group that is trying to bring together researchers and scholars from all over the world without the typical prerequisites that act as barriers to entry. It’s really cool:
https://txt.cohere.com/c4ai-scholars-program/
I think that what is powerful about AI today, especially for the most visible and talked-about use cases, is that it is a force multiplier and amplifier of the current inputs and the values embedded in them.
I think LLMs are largely fed by Western cultures and Western value systems, and they reinforce those, because those are the places where the investment flows. Because of that, the technology is largely shaped by those views and biases - intentional or not. For me that is a bit scary.
How do we make sure the builders, the regulators, and the inputs are diverse and varied, and reflect human values rather than just the values of the best-resourced humans?