I came across an interesting paper written in 2004 about innovation. It looked at innovations from 1453 onwards (a date the author calls the end of the Dark Ages). It can be found here. What it basically says is that innovation per head of population peaked in 1873, and is now being limited by a lack of human brain power or constrained by economic considerations.
Yes, I did write 1873 – when Queen Victoria was still on the throne. That was the year Jules Verne published “Around the World in Eighty Days”. Arguably his most famous novel, “Twenty Thousand Leagues under the Sea”, had been published only three years earlier, in 1870.
There have since been other papers, published in 2010 and 2012, that have confirmed Jonathan Huebner’s finding on innovation trends. Huebner went on to predict that the level of innovation would fall back to that of the Dark Ages in 2024 – only eight years away from now.
So what does this mean for science fiction?
If you look at an outline history of science fiction, its use of science can be traced back to Mary Shelley’s “Frankenstein”. But science-based science fiction did not really take off until the pulp magazines introduced science fiction to a wider audience, from 1926 onwards. It had a particularly good run post-World War II, when a lot of juvenile novels were written to help teach physics. But from the 1960s onwards, science-based science fiction tended to take more of a back seat compared to other types of science fiction. So we have the peak of science-based science fiction very much lagging behind the peak of innovation productivity – by about 70 to 80 years.
I would tentatively suggest that it takes about 70 to 80 years for innovations, or at least the understanding of innovations, to become readily available to Joe Bloggs in the street.
It’s rather depressing to think that science-based science fiction will continue to decline in line with the lag in the innovation trend.
But there is a twist – or at least there can be – if the science fiction publishing world is brave enough to take on the challenge. Before I go on to explain why, let me explain a little bit about science.
Science has one fundamental assumption behind it: namely, that if the right set of conditions is set up, then the results are repeatable over and over again. This need for guaranteed repeatability has led to theories being the most minimal descriptions possible of what is going on. You only need to look at the history of the development of the laws of planetary motion to see this (i.e. going from an Earth-centric theory, to a Sun-centric theory, to a mass-centric theory). The theories describe what would happen, and only what is guaranteed to happen.
What science does not do is describe what could be happening that we cannot notice (for those familiar with the theory, look at gauge freedom in electromagnetism). Here multiple theories can abound, and all of them can be right!
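To make the electromagnetism example concrete, here is the standard gauge transformation (a textbook result, not something from the paper under discussion): the potentials can be changed by an arbitrary function χ, yet every measurable field stays exactly the same.

```latex
% Gauge transformation of the electromagnetic potentials,
% for an arbitrary scalar function \chi(\mathbf{r}, t):
\mathbf{A}' = \mathbf{A} + \nabla\chi, \qquad
\varphi' = \varphi - \frac{\partial \chi}{\partial t}
% The observable fields are unchanged:
\mathbf{B} = \nabla \times \mathbf{A}' = \nabla \times \mathbf{A}, \qquad
\mathbf{E} = -\nabla\varphi' - \frac{\partial \mathbf{A}'}{\partial t}
         = -\nabla\varphi - \frac{\partial \mathbf{A}}{\partial t}
```

Every choice of χ gives a different mathematical description – a different “gauge” – and no experiment on the fields can distinguish between them.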
What this means for science fiction is that there is a plethora of worlds we cannot sense, waiting to be explored (e.g. the different gauges in electromagnetic theory). Whilst we can’t sense them directly, there is no reason why we can’t invent ways of sensing them and write a story about that.
Whilst I have cited one example in physics where this may be possible, there are a lot of others. Furthermore, this type of science fiction, whilst science-based, does not rely on innovation in itself to fuel the stories.
In the meantime, there will still be innovation happening worldwide, albeit at a reduced rate. So there is still material to write science-based science fiction.
Of course, this whole argument changes when humans can get access to more resources. Innovation trends will only change if we can get into space at a cheap enough cost, which is why the development of spacecraft such as the Skylon spaceplane is so important.
Which is why I am rather pleased to see that the next stage of funding for its development has been recently released.
This whole argument would in itself be a good basis for a science fiction story…. over to you….