Marc Andreessen is a smart guy and a prolific writer. I enjoyed reading his latest article, ‘Why AI Will Save The World’, and I’d like to humbly comment on a few parts.

My Views

Before we get into it - my view on this topic is that AI is potentially the solution to many problems humans face, but it carries very real risks, both in the short term (before AGI) and the long term, that need to be guarded against through regulation, testing, coordination and so on. I feel that’s pretty uncontroversial, but if Marc sees any risks, he’s not acknowledging them here.

Disclaimer

Marc has vested interests. I am sure he is an investor in many companies that use AI as a core part of their business model, whether as an actual component or by latching onto the current AI hype cycle. Here, I won’t assume his points are motivated by this. In reality, everyone has a vested interest in something as profound as AI being beneficial to humanity and not harmful.

Secondly, this is a difficult topic, and I think there are many better commentators to listen to than Marc. For instance, the recent Munk Debate on AI safety, featuring people at the forefront of ML research such as Yann LeCun, Yoshua Bengio, Max Tegmark and Melanie Mitchell, is an excellent watch.

With that said, let’s jump in.

Response

First, a short description of what AI is: The application of mathematics and software code to teach computers how to understand, synthesize, and generate knowledge in ways similar to how people do it. AI is a computer program like any other – it runs, takes input, processes, and generates output. AI’s output is useful across a wide range of fields, ranging from coding to medicine to law to the creative arts. It is owned by people and controlled by people, like any other technology.

We’re off to a bad start. The assertion is that AI is a tool completely controlled by the user. This is obviously ridiculous; even if we put aside the troublesome terminology confusion (many people now equate AI with Artificial General Intelligence - human-level intelligence with, potentially, its own goals, or at least a great degree of freedom in accomplishing human goals), it is clearly untrue. Much of the power of current Machine Learning (ML, which I take to be what he means by AI here) software is that it has learned, from data, what the human user will likely want - be it a new image, a decision, etc. - and produces that. There is no human inherently in control here, though there is no sentient artificial mind in control either. Rather, a human is delegating control of, or responsibility for, some task to a program that we hope will do what we want. However, as we will see, he goes on to describe AI as it approaches human levels of competence, which one would assume also entails a great degree of control and power.

Further, human intelligence is the lever that we have used for millennia to create the world we live in today: science, technology, math, physics, chemistry, medicine, energy, construction, transportation, communication, art, music, culture, philosophy, ethics, morality. Without the application of intelligence on all these domains, we would all still be living in mud huts, scratching out a meager existence of subsistence farming. Instead we have used our intelligence to raise our standard of living on the order of 10,000X over the last 4,000 years.

True enough. It should be said that the potential upside of AI techniques is very real and very profound.

Every child will have an AI tutor that is infinitely patient, infinitely compassionate, infinitely knowledgeable, infinitely helpful. The AI tutor will be by each child’s side every step of their development, helping them maximize their potential with the machine version of infinite love.

Now we seem to have squarely moved on to a ‘human-level’ definition of AI. Which is fine - that is what most people in the ‘risk’ camp are concerned about, though even the simpler ML techniques we have today can indeed be harmful to society.

In short, AI doesn’t want, it doesn’t have goals, it doesn’t want to kill you, because it’s not alive. And AI is a machine – is not going to come alive any more than your toaster will.

Back to the original, non-human-level definition of AI? What is going on here? I am not sure how to square the lack of goals and intent he posits here with the ‘AI tutor’ above having infinite knowledge, being infinitely helpful, and so on. Surely qualities like compassion are very closely related to awareness of the human condition, and that is very close to self-awareness? One could say that it is possible to have extremely capable ‘zombie’ machines that are not self-aware and have no goals - but without knowledge of what it might even take to develop those qualities, how can you assert that is the case? You could then argue that we don’t need regulation or guardrails today, but you need to at least call out what developments in the field, or what capabilities these agents would need to exhibit, before saying: okay, it’s time to take this seriously.

First, recall that John Von Neumann responded to Robert Oppenheimer’s famous hand-wringing about his role creating nuclear weapons – which helped end World War II and prevent World War III – with, “Some people confess guilt to claim credit for the sin.” What is the most dramatic way one can claim credit for the importance of one’s work without sounding overtly boastful? This explains the mismatch between the words and actions of the Baptists who are actually building and funding AI – watch their actions, not their words. (Truman was harsher after meeting with Oppenheimer: “Don’t let that crybaby in here again.”)

With due respect to the many smart and nice people in the field of AI safety, most people raising risks are not doing cutting-edge ML research - so this explanation doesn’t apply to them.

Will AI Kill Us All?

Second, some of the Baptists are actually Bootleggers. There is a whole profession of “AI safety expert”, “AI ethicist”, “AI risk researcher”. They are paid to be doomers, and their statements should be processed appropriately.

A great time to disclose your own interests, Marc. Now, most people will know Marc is a VC and likely has AI interests - but I don’t like this reliance on assumed knowledge of interests that seems common in US media. These declarations belong front and centre, not at the end of an article.

Will AI Ruin Our Society?

And the reality, which is obvious to everyone in the Bay Area but probably not outside of it, is that “AI risk” has developed into a cult, which has suddenly emerged into the daylight of global press attention and the public conversation. This cult has pulled in not just fringe characters, but also some actual industry experts and a not small number of wealthy donors – including, until recently, Sam Bankman-Fried. And it’s developed a full panoply of cult behaviors and beliefs.

We have a saying in Australia: play the ball, not the man. Let the argument for or against AI be judged on its merits. (You’ll get nuts on both sides, surely.)

If you don’t agree with the prevailing niche morality that is being imposed on both social media and AI via ever-intensifying speech codes, you should also realize that the fight over what AI is allowed to say/generate will be even more important – by a lot – than the fight over social media censorship. AI is highly likely to be the control layer for everything in the world. How it is allowed to operate is going to matter perhaps more than anything else has ever mattered. You should be aware of how a small and isolated coterie of partisan social engineers are trying to determine that right now, under cover of the age-old claim that they are protecting you.

As above - this argues against the idea that AI could be bad for society, say through bots flooding the internet with misleading, biased and otherwise garbage comments / content / spam, by attacking the people calling this out as a privileged coterie. ‘Don’t let the thought police suppress AI’ is a bad argument, especially when it’s the only one presented. Does Marc not think this is a real risk at all? We know that people are influenced by content they see on social media, as during the 2016 US election - now imagine adversaries with AI-accelerated tools to push disinformation.

On AI Taking All the Jobs:

To summarize, technology empowers people to be more productive. This causes the prices for existing goods and services to fall, and for wages to rise. This in turn causes economic growth and job growth, while motivating the creation of new jobs and new industries. If a market economy is allowed to function normally and if technology is allowed to be introduced freely, this is a perpetual upward cycle that never ends. For, as Milton Friedman observed, “Human wants and needs are endless” – we always want more than we have. A technology-infused market economy is the way we get closer to delivering everything everyone could conceivably want, but never all the way there. And that is why technology doesn’t destroy jobs and never will.

I don’t know about that. You can’t claim that AI ‘could save humanity’ while also claiming it is not qualitatively different to previous advances in automation - things like electricity, machinery and so on. I actually do think that, in the short term, new jobs will be found; many jobs today are already bullshit jobs, so we can surely create some more. However, I don’t think this cycle is perpetual, as Marc proposes - AI surely reaches some level of capability at which hiring humans is just not the way to go. When that happens, a lot of people are at great risk of poverty unless there are explicit plans to share the wealth that AI creates rather than perpetuate inequality. Secondly, we cannot overlook the harm done to good, honest, hardworking people in the gap between old jobs being taken and new ones being created.

Will AI Lead To Inequality?

The flaw in this theory is that, as the owner of a piece of technology, it’s not in your own interest to keep it to yourself – in fact the opposite, it’s in your own interest to sell it to as many customers as possible.

I am not an economist, but I think you need to separate income and spending here. On the spending side, consumer goods will be greatly improved by AI, AI-assisted manufacturing techniques, and so on. It’s a great thing that I can buy a smartphone for 300 bucks that is nearly as capable as whatever Elon Musk could get. On the income side, though, I cannot see how the benefits of cheap AI-based labour and products do not flow to equity holders, i.e. the rich. How would income arising from AI services and products flow to workers? At best, there might be marginal benefits as productivity improves - but employment is fundamentally supply and demand, and AI pushes labour supply up first, with demand needing to catch up.

On The Risk Of Not Continuing AI Research With Speed:

The single greatest risk of AI is that China wins global AI dominance and we – the United States and the West – do not.

I am not sure it’s the biggest risk, but yeah, I agree - there can be no global accord that restricts the development of AI technology, at least not until something bad happens.

Conclusion

Lots of smart people have very different views on this topic. I think that alone is enough to give pause. If someone like Sam Altman, at the very forefront of these developments, is concerned and is championing regulation, while explicitly acknowledging the benefits OpenAI could gain from it, then it’s worth taking this seriously. However, I feel that some of the arguments here are very flimsy.

And that’s it for this post. Found it interesting? Please share, or tell me about it on Twitter - or Threads, if that becomes a thing.