Singularity skeptic Mark Plus drew my attention to the following blog post. The author writes:
Chalmers’ (and other advocates of the possibility of a Singularity) argument starts off with the simple observation that machines have gained computing power at an extraordinary rate over the past several years, a trend that one can extrapolate to a near future explosion of intelligence. Too bad that, as any student of statistics 101 ought to know, extrapolation is a really bad way of making predictions, unless one can be reasonably assured of understanding the underlying causal phenomena (which we don’t, in the case of intelligence).
He ends his post with the following observation:
It is nice to see philosophers taking a serious interest in science and bringing their discipline’s tools and perspectives to the high table of important social debates about the future of technology. But the attempt becomes a not particularly funny joke when a well known philosopher starts out by deploying a really bad argument and ends up sounding more cuckoo than trekkie fans at their annual convention.
There are several arguments that can be made against simple extrapolation of past trends and against the way many transhumanists think about the progress of science. Some of these arguments are made in my own piece Scientific Optimism and Progress in Cryonics. It is striking that when futurists estimate a timescale for important breakthroughs, these events are almost invariably projected to happen within their own lifetimes, and even when they are not, some way is found for the futurist to be a part of them. This tendency itself is indicative of how rationalism, wishful thinking, and self-interest can shape our ideas about the future.