This article is my submission for the Budding Writers Scholarship Contest, Personal Essay category. I won the Personal Essay category.
The prompt: How will AI integrate with society, what are its most important applications in society, or how will the pursuit of superintelligence manifest? The essay is intended to be less formal, and writers are encouraged to use a narrative style to express their creativity and opinions.
An Unlikely Beginning
The screen faded to black and soft piano music started playing. I blinked back tears as I pondered what I had just watched. I was barely ten years old, and I didn’t really understand what I was watching, but I knew I had just seen something that would impact me for the rest of my life.
It was the early 2000s, and I had just finished watching Steven Spielberg’s A.I. Artificial Intelligence. Officially, the movie’s plot is about “a highly advanced robotic boy [who] longs to become a human child so that he can regain the love of his foster mother who abandoned him. He soon embarks on a journey to make his dreams come true.”1
However, I think a YouTube comment2 brilliantly explains what the movie is really about, and why I have decided to use it to start my personal essay about artificial intelligence: “Many people don’t realize how this ending brings the film to full-circle. In the beginning, David was created to fill a need…and in the end, Monica is created, to fill a need. The creations, have become the creators.”
In this personal essay about artificial intelligence I will make three bold assertions:
- Computers have gotten smarter, while humans have stayed the same
- Humans and computers will become synonymous
- The creations will become the creators
Computers have Gotten Smarter, Humans Have Stayed the Same
In the last 50 years, computers have gotten exponentially smarter while humans have basically stayed the same. A running joke in the artificial intelligence industry could be that whenever a new “breakthrough” in artificial intelligence is published, it turns out that the algorithm is at least 25 years old; we simply got a computer powerful enough to tell us what we already knew. It’s like when ancient astronomers calculated the locations of celestial bodies, but we still had to wait for the invention of the telescope to finally see with our eyes what our minds had already envisioned.
For example, algorithms to detect handwriting have existed since 19893, when Yann LeCun developed a neural network that could recognize handwritten zip codes. It wasn’t until industrial-scale production of GPUs became affordable that LeCun’s algorithms could be applied in practice.
A large part of this is due to Moore’s law4, the observation that the number of transistors on a chip (and with it, overall processing power) roughly doubles every two years. This means that most of the quality-of-life and technological improvements in society are the result not of “smarter” humans but of “smarter” tools. We progress not because of human ingenuity but simply because we are riding the wave of the laws of physics and hanging on for dear life.
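To make that doubling concrete, here is a quick back-of-the-envelope sketch in Python, using the 50-year window from the paragraph above and the two-year doubling period from Moore’s law:

```python
# Back-of-the-envelope: how much faster computers get under Moore's law.
years = 50
doubling_period_years = 2            # Moore's law: one doubling every 2 years
doublings = years // doubling_period_years
growth_factor = 2 ** doublings       # 2^25

print(f"{doublings} doublings in {years} years -> ~{growth_factor:,}x")
# -> 25 doublings in 50 years -> ~33,554,432x
```

A roughly 33-million-fold increase in raw capability, while the human brain has stayed fixed: that is the asymmetry this section is about.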
I believe that in order for us to reach the next frontier of growth in artificial intelligence, the cognitive gains achieved by computers will now have to flow back to humans. I call this phase “your younger cousin finally realizes that their video game controller is unplugged”.
Humans and Computers will Become Synonymous
Most of our predictions about the future of artificial intelligence operate under what I call the steel-metal assumption: that artificial intelligence will be manifested as a computer inside a data center in an undisclosed location, with exascale computing power.
However, I think the future of artificial intelligence will look more like what has been called the “moist robot” scenario5. Once we have developed a sufficiently complex artificial intelligence, we will try to directly integrate the computational power of such a machine into the natural, biological cognitive processes of a human.
For example, instead of typing “weather for tomorrow” into Google, a microchip would be surgically implanted into your left hemisphere (the side of the brain that handles language and logic). You would simply think to yourself, “Hmm, I wonder if it’s going to rain tomorrow,” and the chip would send the request to a server in the cloud and return a response to you in milliseconds6.
It currently takes about 900 milliseconds7 for the brain to think of a word, formulate it, and send it to the mouth for speech. This means the microchip in our brain could send our question and receive an answer faster than we could articulate the question ourselves.
If this sounds like science fiction or magic to you, I remind you of Arthur C. Clarke, who said that “any sufficiently advanced technology is indistinguishable from magic”8. Companies such as Elon Musk’s Neuralink9 have already started developing products toward this goal.
Once we have an interface between computers and humans, the next step is storing a digital copy of the human brain. An excellent book on what this would look like is Robin Hanson’s The Age of Em10. I believe Hanson’s scenario is more likely than true artificial general intelligence: we would copy the brains of society’s most productive humans, integrate a high-bandwidth connection to an artificial intelligence, and mass-produce those brain emulations to perform various tasks at scale.
For example, take an elite scientist like Jennifer Doudna11, a co-inventor of CRISPR gene editing, widely considered one of the most significant discoveries in human biology12. Scientists like Dr. Doudna are extremely rare; even world-class institutions have only one or two scientists of her calibre in a given department. Imagine if you could emulate the brain of someone like Dr. Doudna, so that every lab in every department consisted of world-class scientists. The rate of scientific discovery could increase by at least a factor of ten, and we could start to compete with the computers. But at that point, maybe we will have become like the computers and the computers will have become like us.
The Creations will Become the Creators
To bring it back full circle to that original YouTube comment: “Many people don’t realize how this ending brings the film to full-circle. In the beginning, David was created to fill a need…and in the end, Monica is created, to fill a need. The creations, have become the creators.”
The icing on the proverbial cake of our pursuit of superintelligence will be the creation of human simulations that are indistinguishable from our “reality”. Nick Bostrom explains it best in his paper on the simulation argument13 and in his book Superintelligence: Paths, Dangers, Strategies14. The core thesis is that, given the current rate of technological progress, humanity will eventually be capable of developing simulations that are indistinguishable from our current reality.
A really cool benefit of this will be in decision making, through randomized controlled trials and A/B tests.
For example, suppose that you are trying to determine whether the tax rate should be raised to 50% or lowered to 30%. Currently, situations like this usually devolve into circular debates centered around personal, subjective ideologies. Worst of all, the various opinions can’t be objectively falsified.
In a simulated environment, you pick a metric you want to optimize for, such as total human happiness. You would measure the four chemicals commonly associated with happiness (endorphins, dopamine, oxytocin, and serotonin) in each human in each simulation. You run half of the simulations at a 30% tax rate and the other half at a 50% tax rate. The tax rate with the higher increase in average chemical happiness would be the one your country adopts.
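The scoring side of such an A/B test can be sketched in a few lines of Python. To be clear, the simulation itself is the hard part; in this sketch each simulated citizen’s “chemical happiness” is just a random number, and the response curve relating happiness to the tax rate is invented purely for illustration:

```python
import random
import statistics

random.seed(42)  # fixed seed so the toy trial is reproducible

def chemical_happiness(tax_rate):
    """Hypothetical metric: one simulated citizen's aggregate score for
    endorphins, dopamine, oxytocin, and serotonin. The policy-response
    curve below is invented purely for illustration."""
    base = random.gauss(100, 10)                 # individual variation
    policy_effect = 30 * tax_rate - 60 * tax_rate ** 2
    return base + policy_effect

def run_arm(tax_rate, n_citizens=10_000):
    """Average happiness across one arm of simulated citizens."""
    return statistics.mean(
        chemical_happiness(tax_rate) for _ in range(n_citizens)
    )

arm_a = run_arm(0.30)   # half the simulations at a 30% tax rate
arm_b = run_arm(0.50)   # the other half at a 50% tax rate

winner = 0.30 if arm_a > arm_b else 0.50
print(f"30% arm: {arm_a:.2f}  50% arm: {arm_b:.2f}  adopt: {winner:.0%}")
```

The point is the structure, not the numbers: split the simulations into arms, measure one pre-registered metric per arm, and let the measurement, rather than ideology, pick the policy.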
However, this is also when you’ll realize that the human timeline is more like a time-circle, and as you all know, circles don’t have an objective start or end point. Some people would classify this as the end of civilization; others would describe it as the beginning of the next species of human, the post-homo sapiens. A more accurate description might be that it is just another cycle in the circle of life. Grab your popcorn, fasten your seatbelt, and enjoy the ride, because it never ends.
3. LeCun et al., “Backpropagation Applied to Handwritten Zip Code Recognition,” Neural Computation, 1, pp. 541–551, 1989.
4. Moore, Gordon E. (1965-04-19). “Cramming more components onto integrated circuits”. Electronics. Retrieved 2016-07-01.
7. Sahin, Ned T et al. “Sequential processing of lexical, grammatical, and phonological information within Broca’s area.” Science (New York, N.Y.) vol. 326,5951 (2009): 445-9. doi:10.1126/science.1174481
8. Clarke, Arthur C. “Hazards of Prophecy: The Failure of Imagination.” In Profiles of the Future: An Enquiry into the Limits of the Possible. 1962, rev. 1973.
10. Hanson, Robin. The Age of Em: Work, Love, and Life when Robots Rule the Earth. Oxford: Oxford University Press, 2016. Print.
12. Pollack, Andrew (May 11, 2015). “Jennifer Doudna, a Pioneer Who Helped Simplify Genome Editing”. New York Times. Retrieved May 12, 2015.
13. Bostrom, Nick. “Are You Living in a Computer Simulation?” Philosophical Quarterly 53, no. 211 (2003): 243–255.
14. Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies. Oxford, United Kingdom: Oxford University Press, 2014. Print.