Intelligent Agents

I’ve been thinking lately about why our civilization insists on flinging us full speed into climate change or why social media companies pursue strategies that are clearly trending towards social collapse.

The common answer to these and many other questions is greed, and while I certainly think that greed plays a very important role in these problems, I can’t help thinking that there is more to it than greed alone.

While it’s true that some greedy people make decisions without taking into account the long-term consequences of their actions, many greedy people do operate with long-term sustainability in mind. Why then do our companies and our governments so regularly fail to act in their own long-term interests?

The line of thinking I’ve been exploring tonight involves the difference between being an intelligent agent and having sentience. An intelligent agent is an autonomous, goal-directed entity which observes and acts upon an environment. In that sense, you and I are both agents. But agents don’t have to be human. My cat or a pigeon or an ant colony or an autonomous robot could all be considered intelligent agents.
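This definition of an intelligent agent can be made concrete with a toy sketch. Everything here is illustrative, not from any particular AI framework: a thermostat-like agent that observes one value in its environment and acts to push it toward a goal, with no awareness of what it is or why.

```python
class ThermostatAgent:
    """A toy goal-directed agent: observe a temperature, act to move it
    toward a target. It pursues its goal without any sense of self."""

    def __init__(self, target):
        self.target = target  # the goal the agent blindly pursues

    def act(self, observed_temp):
        # Observe the environment, then choose an action toward the goal.
        if observed_temp < self.target - 1:
            return "heat"
        if observed_temp > self.target + 1:
            return "cool"
        return "idle"

agent = ThermostatAgent(target=20)
print(agent.act(15))  # "heat"
print(agent.act(25))  # "cool"
```

Nothing in the loop knows anything beyond the goal it was given, which is the point: goal-directed behavior and awareness are separable.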

Where I think this really starts getting interesting is when you start to look at large companies, governments, or even societies as intelligent agents.

In many ways a company is analogous to a kind of artificial intelligence. Because of their ability to leverage many individual humans, computer systems, and machines in service of a common goal, many companies have superhuman intelligence, strength, perception, etc. This is why companies are able to achieve goals that would be impossible for any individual.

But unlike the human agents that make up a company, the company as a whole is not sentient. As an agent, the company observes the world around it and makes changes to its environment in pursuit of goals. It exhibits intelligent behavior but the company doesn’t actually know what it is or what it’s doing. It doesn’t have a sense of self.

For many small businesses, and even some big businesses with strong leadership by a CEO who has enough power and autonomy, that individual driving the business can act as a proxy for corporate sentience. Basically the company acts more like a remote-controlled car than an autonomous one, and the leader uses her sense of what the company is and what it’s doing to direct its behavior in a way that might take into account issues like morality and long-term sustainability.

But when businesses become too big and bureaucratic and decisions are made via a series of processes rather than by a single individual, the company functionally becomes its own autonomous agent which will pursue its goals (usually to make as much profit as possible) with superhuman efficiency but without a sense of self that would cause it to plan ahead beyond the next couple of quarters or years.

Companies aren’t just causing these problems because they’re greedy; they’re also incapable of planning ahead except to the extent that planning ahead has been explicitly specified as a goal in their operating system.

If you set a goal for an individual human, you don’t have to specify things like self-preservation because, as sentient beings, we are able to take a lot of these ideas for granted. If I tell you to make me a cup of tea, I don’t have to also tell you not to burn the house down while you’re doing it. We don’t always behave in our long-term best interests, but at least we are aware of what we are and what we’re doing, and our behavior tends to be influenced by that awareness.

The more companies move from being directed by individuals to being directed by systems and committees and key performance indicators, the more we need to explicitly build the goals we tend to take for granted as sentient beings into the framework of the business.

The same thing can be said about governments and even entire societies.

With this context in mind, our inability to change course to avoid climate change or social collapse starts to make a lot more sense. Our companies and our societies are blindly pursuing the goals that we have set for them. We have focused so much on setting short-term goals that we forgot that our companies and societies aren’t sentient, don’t know what they are or what they’re doing, and are incapable of caring about what the earth will look like 50 years from now unless we build in that kind of long-term thinking as explicit corporate and social goals.

When I say that we need to build them in as goals, I don’t mean with lip service or isolated corporate social responsibility departments. I mean build them in the same way that we have built in goals like profit. Every decision should take into account its effect on these long-term goals the same way that every decision’s effect on profitability is considered.
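One way to picture "built in the same way profit is built in" is a scoring function where long-term impact is weighed in every decision, not reviewed separately afterwards. This is a hypothetical sketch; the weights and numbers are invented for illustration.

```python
def decision_score(profit, long_term_impact, long_term_weight=1.0):
    """Score an option by its profit AND its long-term effects.
    With long_term_weight=0, this collapses back to pure profit-seeking."""
    return profit + long_term_weight * long_term_impact

# Two hypothetical options facing a company:
options = {
    "quick_win": decision_score(profit=100, long_term_impact=-80),
    "sustainable": decision_score(profit=60, long_term_impact=30),
}

best = max(options, key=options.get)
print(best)  # "sustainable": 90 beats the quick win's 20
```

The point isn’t the arithmetic; it’s that the long-term term sits inside the objective itself, so every decision the system makes is shaped by it, the same way every decision is currently shaped by profitability.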

I think we tend to either make the mistake of not thinking of companies or societies as intelligent agents or we make the mistake of thinking of them as intelligent agents that inherently possess human qualities like morality or sentience. If we want them to have the qualities that humans take for granted, we need to figure out how to build them into the fabric of their systems.
