Why Artificial Intelligence is further off than you think

Approximately 7 minutes of reading time.

It's not, really – or rather, it might or might not be. To be more precise: we're not in a position to tell. But there are quite a few people around who would like you to believe otherwise – not due to ulterior motives, but simply due to age-old cultural and social mechanisms – such as an individual's need for purpose and for a sense of belonging – imposing themselves on a new topic.

This article came to be through observations – 'in the field', so to say – of the sensationalist manner in which certain parts of the internet and of society at large approach current and past technological developments in the digital partitioning and automatisation of repetitive tasks. In particular, it was inspired by the article "The Artificial Intelligence Revolution", published in two parts by 'Wait But Why' back in 2015. As this article is something of a response to Tim Urban's piece, it might be worthwhile to read it first (The Artificial Intelligence Revolution). Most of the arguments herein should, however, be general and simple enough that this article remains accessible even without knowledge of its inspiration. If you do not care for my thoughts on the arguments presented in the 'Wait But Why' article, just skip the corresponding section.

1 A Technocracy's Apotheosis: Futurism and the Cultural Context of Artificial Superintelligence

As a certain optimism towards the advancing progress and accelerating development of technology is currently en vogue, and as recent advances in automating repetitive tasks through the use of self-adjusting algorithms ('Machine Learning') are communicated to the general public in a simplified manner, a belief in a soon-to-occur technological 'singularity' is relatively common in tech-focused circles.

Details vary wildly, obviously. For some, it arises 'naturally' out of our evident desire to automate menial tasks of the mind by employing self-learning algorithms. For those people, the 'singularity' often is the "point of no return": the moment where our current understanding of technological and economic forces and developments fails as the self-improving machines begin to surpass their former masters, and where our dependency on technology beyond our comprehension becomes absolute.

For others, the central point lies in this self-improvement of algorithms, which – they propose – will at some point in the future automagically lead to the rise of conscious Artificial Intelligences at the level of human comprehension (called AGI – Artificial General Intelligence) or beyond it (called ASI – Artificial Superintelligence). As they generally insist that the quickening of technological advance is a one-way street, they essentially fear that humanity is in danger of becoming 'obsolete' and being left behind.

2 A Response to 'The Artificial Intelligence Revolution'

2.1 Death by Progress?

Argumentatively, Tim Urban certainly knows what he wishes to impress upon his reader. By employing strong assertions and the power of allusion, rather than arguments or evidence, he tries to catch you on your weak side – your emotions. Invoking imagery and common tropes dating back to the origins of time-travel stories, he urges you to use your powers of imagination and projection in order to make his introduction of a child's understanding of a scientific unit more palatable. In an attempt to give his heartfelt essay a more 'academic', 'scientific' touch, he defines the grammatically questionable 'Die Progress Unit' (DPU) as a large enough passage of time that someone would 'die from the level of shock they'd experience'. He then employs this mockery of scientific method to assert not only the possibility of this less-than-plausible way to die, but to reinforce his preconceived notion of exponential technological progress through circular reinforcement ("Human progress is so enormous, direct confrontation with it could be lethal!" -> "If it could be lethal, the quickening pace of technological progress would shorten 'DPUs' over time!" -> "Therefore, technological progress accelerates! Exponential growth!").

This pseudo-argument is hidden behind a twofold attack on your ability to reason, which resurfaces throughout the entire essay:

  1. A pseudoscientific front to his beliefs, and
  2. Mere shock value.

In keeping with his excited, sensationalist tone, he of course does not explain in any form what kind of mechanism this 'Death by Progress' would actually consist of. If we want to be generous, we could assume he means a kind of excitement-induced failure of the circulatory system (i.e. a 'heart attack' due to shock).

While excitement can certainly cause elevated blood pressure and heart rate, this would most certainly not be a problem for most able-bodied, healthy humans. Here his emotion-based argument follows a linear understanding of the human psyche, severely understating the upper limit of what humans are able to grasp, and grotesquely overstating the brain's capacity for ever more extreme emotion. While it is clear that a shock severe enough to cause heart attacks would likely shake even 'grounded' personalities, beyond a certain point there just is not a lot of emotional leeway, and the most basic way of coping with extreme situations would kick in: an emotional 'blank-out' and a readjustment of what counts as normal.

In addition to this questionable assertion, Urban obviously chose the dates for his examples of 'DPUs' very cleverly. His initial assertion, cinematic in effect, sounds somewhat plausible ("Someone from the 18th century in today's world? Definitely a huge shock. Perhaps he's right, that could kill someone!"), but he glosses over the fact that his other examples don't hold up to any scrutiny at all. While there are some inventions humanity had achieved by the 1750s which most certainly would shock, or at least deeply impress, someone unfamiliar with them (gunpowder, steel, etc.), the impact of their impression does not grow with greater chronological distance. In an attempt to be clever, he vainly asserts that someone from a pure hunter-gatherer society would be sufficiently disturbed by the technological difference to die.

Besides the – at least somewhat entertainable – notion that the effect of gunpowder and other explosives would be a near-lethal shock to pre-historic humans, he hereby effectively wants you to believe that Mesolithic people would be unable to cope with seeing post-Neolithic inventions such as clothes, buildings and vehicles, to such an extent that they would immediately collapse and die. And if this doesn't stretch credibility far enough for you, consider his final assertion that an early hominid could not possibly survive contact with primitive tribes living off the land. Here Tim Urban's arguments prey on your inability to put these huge numbers into context correctly, and closer examination of his specific "dataset" reveals that his theory of exponential progress, on which he grounds the rest of his essay, rests on very shaky footing at best.

If you are sufficiently proficient with plotting and data fitting, or at least generally comfortable with spreadsheets, try visualizing the data yourself and see how well an exponential growth curve fits his idea. You'll be surprised just how badly the two extremes of his dataset fit: he is effectively trying to sell you a ridiculously fast exponential growth with his '18th century'–'today' comparison, while obscuring the fact that this would also require far more 'DPUs' – and far closer to us – in the recent past than he would care to argue for. His other two datapoints serve as an effective distraction from this.
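As a minimal sketch of this check: the interval lengths below are my own rough readings of Urban's examples (roughly 265 years from 1750 to today, on the order of 14,000 years back to a hunter-gatherer society, and on the order of 90,000 further years back to an early hominid – the exact figures are assumptions, not given precisely in his article). For a clean exponential shortening of 'DPUs', the ratio between consecutive interval lengths would have to be roughly constant:

```python
# Approximate 'DPU' interval lengths in years, oldest to newest.
# These values are assumed readings of Urban's three examples.
dpu_years = [88_000, 13_750, 265]

# Under a single exponential trend, each interval would shrink by a
# (roughly) constant factor relative to the previous one.
ratios = [earlier / later for earlier, later in zip(dpu_years, dpu_years[1:])]

for r in ratios:
    print(f"shrink factor: {r:.1f}")
```

With these assumed numbers, the first shrink factor comes out around 6, the second above 50 – nowhere near constant, which is exactly the poor fit described above. Feel free to substitute your own readings of his dates; the mismatch is robust to reasonable variation.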

His tangent on the 'small-scale effects' of such technological differentials, by way of the legendary movie "Back to the Future", effectively functions as a distraction thanks to its nostalgic appeal. Though likely not the intended effect, it hides the fact that Tim Urban hardly goes into what would actually make a 2015 kid stand out more in 1985 than Marty McFly did in the 1950s. We as readers can – again – try to finish the argument Tim Urban started. But first we should note that Tim Urban again shies away from expressing either predictions or falsifiable statements, and instead relies on allusions, half-finished arguments, and unelaborated examples to imply the point he wishes to make – a clever method of making his text future-proof.

Back to "Back to the Future": By this point of the essay it is clear that for Tim Urban, both the central driving force and best measure of human progress is technological prowess. While he correctly identifies non-technological 'marks of progress' that lead to Marty's disorientation after being misplaced in time ("[Marty] was caught off-guard by […] the prices of soda, the lack of love for shrill electric guitar, and the variation in slang"), due to his personal focus on technology he fails to find literally anything besides technology that would make a '90s kid' less adapted to the 80s than Marty was to the 50s. This unmasks Tim Urban's ignorance of the cultural, economic, and linguistic dimensions of human progress – if it isn't running on electricity or at least gasoline, Urban does not seem to care for it.

His initial assertion concerning "Back to the Future" – that a 90s kid would have a harder time adjusting to a life without the internet and portable computing than a 70s kid would have adjusting to a life before ZIP codes, the 911 emergency hotline, video games, and widespread television sets – is not entirely baseless. His main argument for its truth, however, rests on not researching what actually was invented between 1955 and 1985, and on not trying to understand what a 70s kid's life would actually have revolved around.

2.2 From ANI to ASI – A Long and Winding Road

In his 2015 article, Tim Urban repeats an old complaint by John McCarthy concerning Artificial Intelligence that apparently, "as soon as it works, no one calls it AI anymore."

The general trend he wishes to express – that we sometimes take our tools for granted and, at times, do not appreciate the complexities they embody – is of course true.

But this shallow reading glosses over the fact that, taken literally, the claim is obviously wrong. No one has stopped calling the algorithms run even by such 'simple' machines as Deep Blue 'AI'; in general discourse, the prevalence of the expression 'Artificial Intelligence' rather seems to be at an all-time high. Indeed, among tech enthusiasts, the public's lumping of all kinds of algorithms under the easily marketable phrase 'Artificial Intelligence' has become something of a running joke.

2.2.1 Artificial Narrow Intelligence

Author: rov (trymonv@cock.li)

Date: 2020-11-17 Tue 00:00
