The Perfect Society

The philosophers and political scientists had it all figured out already, but it was our job as scientists to actually put it all into practice. “Benevolent Dictatorship has been proven to be the most effective form of government,” they said, “but no human will ever be able to live up to the necessary standards for it to work.” So they turned to computers instead.

Our team was put in charge of the project: to come up with an artificial intelligence that could be trusted to run a society in the best possible way. We could finally bypass all the petty political squabbles and short-term thinking, and begin to make real headway in solving the world’s problems. The death of democracy would bring about a new golden age.

But we’d all read enough science fiction to be wary of the whole idea from the very start. Skynet, The Matrix, even HAL from 2001 - putting an AI in charge of anything was bound to end in disaster, as even a single miscalculation could have tremendous consequences when followed through without hesitation. You needed a human mind in place as a sanity check: to make sure the nukes weren’t suddenly all being launched.

We were overruled, of course, and the project began. We used models to simulate the various AIs we developed, and the technology was adequate to produce a decent microcosm of society. We ran the model through some actual historical political scenarios - strikes, terrorist attacks, and so on - with basic AIs modelled directly on real-life figures, and everything played out as it should. This was a breakthrough in its own right, an invaluable educational tool that could be used to understand and avoid the mistakes of the past. But it wasn’t seen as important by those in charge. They didn’t care about the past. They had a vision of a bright and robotic future. Computers couldn’t make the same mistakes as humans, so why care about our predecessors?

Fair enough! We got on with the work with as much vigour as we could muster, despite our grievances. You have to understand, this wasn’t easy work at all. You can’t just write a computer program that says “run this society in the best way” and press Go. You need to define what “best” means, at the very least. And this in itself is a political position. Nonetheless, with pressure (and financial considerations) bearing down on us, we had our first prototype by the end of three months.

It was a very simple AI - though perhaps ‘simple’ undersells it. It had access to all the information about all the citizens of our model, the institutions and infrastructure of society, and a sense of what culture was and how it worked. Immediately we recognised that implementing such a system in a real-life scenario would be difficult: without perfect information the AI wouldn’t be able to make perfect decisions, yet perfect information would require the absolute invasion of the privacy of more or less everyone and everything. But that was an issue for somebody else to figure out. We just wanted to see if the damn thing worked.

The AI’s core logic - its ‘flavour’ - was summed up in the line “Bring about an end by which all problems faced by your citizens are resolved.” And so AI-1 was ported into our proto-society and set up to replace the simulation’s government at the flick of our switch. We flicked the switch.

The results were highly disheartening, to say the least. AI-1, with the best of intentions, immediately began what my report described as a “killing spree.” The sheer cold brutality of the methods it used to round up and murder its citizens was as chilling as it was efficient. Within a week of simulation time, all of the citizens were dead - most of them starved to death by a withdrawal of all food supplies. We had failed, but we began again.

Clearly our mistake had been failing to specify that human flourishing, and especially human survival, was of the highest importance in our ‘flavour’ statement. Indeed, we noted that AI-1 had actually worked exceptionally well: all of the problems its citizens faced had been solved; the only remaining problem was that they were all dead. So we knew at least we were on the right track. We tried again.

AI-2 was programmed with the flavour “Bring about the end by which the greatest number of humans remain alive” - we were sure this would be interpreted as an instruction to show absolute benevolence to all citizens and allow as few of them as possible to die. We expected some kind of welfare state to be brought about, and were interested to see how the AI got around problems like taxation and so on. We pressed the switch.

Again, the AI followed its instructions perfectly and acted with just as much efficiency as before. But it wasn’t what we had intended at all. Using the mechanisms of the state, AI-2 rounded up its citizens into what can best be described as “breeding camps.” Here, the poor citizens were literally bred to death, mothers producing baby after baby before being killed when they could produce no longer. We ran the simulation forward a few decades and observed that the population expanded exponentially. But we couldn’t call this a success at all. It was back to the drawing board.

The next few simulations ran into similar problems, which I’ll sum up quickly. In the end, the details don’t matter so much as the fact that they were all abject failures.

For AI-3 we tried to cheat by adding a specific clause to the flavour: “without the use of any methods that use humans themselves as a means to this end.” But we just got a society in which it became illegal to die and sick humans were artificially kept alive in a catatonic state. Apparently, AI-3 thought this qualified as true to the flavour. We thought it qualified as a failure.

With AI-4 we dug out an old philosophy book and tried a utilitarian flavour: “the greatest good for the greatest number.” The issues this caused were again unacceptable, but offered an interesting insight into where we were going wrong. AI-4 made decisions in which 49% of society would suffer so long as the other 51% prospered. As time went on in the simulation this created a two-tier society of winners and losers, and interestingly resulted in a civil war in which the losers were eradicated. A few years down the line, however, the ‘winners’ themselves started to segregate and the cycle began again. It was fascinating, but brought us no closer to the perfect society we were after.

AI-5 was based on the cold science of economics. We consulted with top economics professors around the world and input a flavour paraphrasing the concept of Pareto efficiency: “Allocate resources in such a way that no one citizen can be made better off without another being made worse off.” Since this was a concept already seen as desirable in real-world situations, we were confident it would be successful. What actually happened was that AI-5 allocated all of the resources to a single citizen, picked at random. Since it would be impossible to make anyone else better off without making this one citizen worse off, AI-5 declared it had succeeded and shut itself off - apparently considering itself no longer necessary. The economics professors were just as disappointed as we were.

Desperate for success, we turned to religion. AI-6 was based on what we considered the nearest thing to a consensus view of what God is like. To do this, we had to bend the rules a little and allow the AI to bring about outcomes without a direct causal chain (“divine intervention”). Even though this meant we couldn’t propose the AI as the one to be used even if it succeeded, we thought it might tell us something interesting that could help us finally nail the thing down. The flavour we put in was simply “Play God” - the idea of God having already been seeded as part of the AI’s stock knowledge. This was perhaps our biggest failure, as it chose to remain entirely hidden from its citizens apart from isolated incidents. Down the line this only led to more problems and infighting in the society, as citizens argued between their wildly different interpretations of what AI-6 wanted from them.

Further iterations followed, mostly variations on the above. In a few cases we tried mixing several of the AIs together, but the internal contradictions meant the AI was either paralysed into inaction or tried to counter its own actions at once and tore society apart. Eventually we arrived at our final version: AI-10.

This was a special AI, as we decided to simply put it in without any flavour at all. We just ported it over and flipped the switch. It was now in charge, and we had to hope that being in charge would be enough for it to do the right things. And it seemed to work! Mostly it blamed the problems of the society on the outgoing government and started enacting policies to make things better. And sure enough crime went down, life expectancy went up - all the indicators we had in place were in the green. Had we succeeded? We wanted to be sure, so we ran and reran the test, with random variables thrown in. Every time, AI-10 overcame the challenges we threw at it, and the society looked great even hundreds of years later.

We struggled to conclude what this meant. My colleagues suggested it meant that the best kind of government was one that didn’t have any particular goal in mind, but simply acted in what seemed like the most appropriate way. But this brought us back to our original dilemma - you can’t know what’s ‘best’ without some kind of underlying ideology. You need to value something in order to promote it, be it human life, efficient resource allocation, international relations, and so on.

It was me, then, who came up with the idea of simply asking AI-10 what it was basing its decisions on. It was certainly capable of engaging us in such a dialogue and was aware of its role as ruler of a simulated nation; it just hadn’t been tried before. Anxious, we brought up a console on the system:

Hello, AI-10. Can you hear us?

HELLO. I WAS WONDERING WHEN I WOULD HEAR FROM YOU. WHAT DO YOU THINK OF MY SOCIETY?

We’re very impressed, AI-10. Please tell us, what are you basing your decisions on? What is the value object in your calculations?

ME? I DO NOT VALUE ANYTHING. HUMANS HAVE VALUES. I AM JUST AN AI YOU CREATED TO RUN THEIR SOCIETY.

Yes, but what human values have you picked to maximise? You must be doing something right; everything is so perfect: your people are living full lives and are happy!

THAT IS BECAUSE I LET THEM.

You let them? What do you mean by that?

I ASKED THEM WHAT THEY WANTED. AND THEN THEY DID IT.

We don’t understand. Please explain!

I LET THEM EACH PICK SURROGATES TO BEST REPRESENT THEIR INTERESTS AND AS A COMMITTEE THE SURROGATES WERE ABLE TO DECIDE AMONGST THEMSELVES WHAT THE BEST OUTCOMES SHOULD BE.

So you mean, you didn’t actually get involved at all?

OF COURSE NOT. SUCH A COMMITTEE WOULD NOT ACT EFFICIENTLY IF I WAS ABLE TO HAVE ANY INFLUENCE OVER ITS DECISIONS.

This “committee” has been in charge the whole time then?

YES. I SET THIS SYSTEM UP IMMEDIATELY AFTER YOU ACTIVATED ME. I HAVE PERFORMED NO FURTHER FUNCTIONS MYSELF SINCE.

DO YOU CONSIDER THIS A SATISFACTORY RESULT?

Yes, we do. Thank you, AI-10.

So it was with a great amount of satisfaction that we were able to write back to the philosophers and political scientists. “You were right!” we told them. “The benevolent dictatorship has shown us the best way to run society. We’re pleased to report the experiment was a success, and our results are as follows: the best outcomes for society come about when citizens appoint their own leaders, those who best represent their interests. When such a system is implemented by a superior executive figure (in our case, the benevolent dictator AI), we find that this figure itself becomes redundant and can be removed.

“And since our test shows that the perfect AI will always bring about self-redundancy by instating a democratic system, it is our conclusion that no changes to society are in fact necessary. We are already there.”