“We’ll do better stuff” — Sam Altman is Cautiously Optimistic that Humanity Won’t Eat Itself  

With great power comes great responsibility. Yet getting the founders of globally adopted tech platforms to acknowledge the dark sides of their businesses has so far proven nearly impossible, even when they are hauled before Congress.

Not so Sam Altman, co-founder of OpenAI, the creator of ChatGPT. He knows the power he wields. On May 30 he signed an open letter ominously stating that “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

The letter was also signed by Bill Gates; Geoffrey Hinton, widely considered the godfather of AI; and Demis Hassabis, CEO of Google DeepMind, among many other notables.

This warning is the stuff of our dystopian nightmares: the moment when the development of artificial general intelligence (AGI) surpasses humanity’s ability to control it. This point of no return for human civilization is referred to as “the singularity.”

Skynet became self-aware at 2:14 a.m. EDT on August 29, 1997.

Understanding the gravity of this, Sam Altman embarked on a global tour four weeks ago to hear the feedback and concerns of the users, founders and developers of OpenAI’s suite of offerings. Yesterday, Altman, alongside his colleague Rachel Lim, a member of OpenAI’s technical staff, fielded questions at Singapore Management University, where I was in the audience.

Altman believes that AGI will be the most transformative technology humanity has ever seen. He added that he believes it can be developed; the goal now is to figure out how to get there safely.

Iterative deployment of new versions gives humanity the chance to evolve alongside AI. It is a far more responsible option, he said, than keeping development secret and suddenly dropping artificial general intelligence on society at some future point: we need to evolve together with these tools and systems and help shape them.

AI will mostly create large gains in productivity and creativity. With more time freed from mundane tasks, “we’ll do better stuff. We will create more. We will do entirely new things that are difficult to imagine,” Altman said.

He believes that issues of governance, power, decision making, limitations, sharing and configuration of these systems will become “some of the most hotly debated issues of our time.” Tools and systems will continue to evolve and get better, building new knowledge that everyone can share, pushing humanity forward. It’s also imperative to ensure adequate representation, diversity and equity, meaning AI models need to be trained on all cultures, languages and values. 

AI and Blockchain

An audience member asked about Altman’s involvement with blockchain (via his crypto project Worldcoin) and whether he sees an intersection between these two evolving technologies.

He said that while the intersection has not been a focus, one example he would give is that “as generated content becomes better, to know what is generated by a human, I can see a world where, if you have a high-stakes message, you cryptographically sign it and put it on the blockchain to say, ‘hey, this was really me.’ So I could see something like that. I can see other ways where you really want to verify something at the time that it happens.”
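The idea Altman sketches can be illustrated in a few lines. This is a minimal stand-in, not any real blockchain API: a production system would use a public-key signature scheme such as Ed25519 so anyone can verify authorship, and a real distributed ledger rather than a Python list. Here a SHA-256 commitment plays the role of the signed record, and the `chain` list plays the role of the public, append-only ledger — both are hypothetical names for illustration.

```python
# Sketch of "anchor a message on a public ledger, verify it later."
# Assumptions: SHA-256 commitment stands in for a real signature;
# a plain list stands in for the blockchain.
import hashlib
import time

def commit(message: str) -> dict:
    """Hash the message and wrap it in a timestamped ledger record."""
    digest = hashlib.sha256(message.encode("utf-8")).hexdigest()
    return {"digest": digest, "timestamp": time.time()}

def verify(message: str, record: dict) -> bool:
    """Check that a message matches its earlier commitment."""
    return hashlib.sha256(message.encode("utf-8")).hexdigest() == record["digest"]

chain = []  # stand-in for an append-only public ledger
msg = "This statement was really written by me."
chain.append(commit(msg))

print(verify(msg, chain[0]))        # True: the message matches its record
print(verify(msg + "!", chain[0]))  # False: any edit breaks the commitment
```

Because the digest is published before any dispute arises, a later reader can check that the text they are shown is byte-for-byte what was anchored — the “this was really me” guarantee Altman describes, minus the identity-binding that a real signature would add.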

The ease with which prompts can now generate music in the style of well-known artists, and the potential threat this poses to established musicians, was also raised. Altman said he believes these tools will likely empower artists to reach exciting new levels of creativity, and acknowledged that blockchain could play a role in ensuring benefits flow back to the artists.

It was apt that one of the final questions was an existential one, seeking Altman’s perspective on what he feels will remain the differentiating factor between human intelligence and artificial intelligence.

Altman concluded that humans “tend to be very focused on what other people do and what they care about, what they want and what we can do for them. And I would bet that doesn’t change too much.”