Thinking in advance

Early warning so legislation predates crises

Governments must have far better early warning of potential High-Tech crises so that legislation can anticipate technological trends rather than react to them


Governments need to be advised, far more and far earlier than they currently are, of the forms of Pre-emptive Recovery they must ensure are in place – relating to everything from the escalating power of the internet to research on life extension. Legislation must then anticipate those changes. Privacy laws, for instance, must evolve ahead of computer processing and archiving technologies, yet also be sufficiently well defined that there is no over-correction leading to excessive applications for the equivalent of ‘super-injunctions’ (which force the media to keep private even the fact that something is being kept private).

To take another example, one far less extreme than many politicians assume: it is well recognized among advanced-computing scientists that systems capable of Machine Intelligence comparable to human levels of intellect are likely to emerge by 2040 – and more specific versions may appear significantly before then. Indeed, for defense and security applications the most advanced governments are themselves active in such research. The advantages, as well as many of the conceivable threats, of such research are well documented, and the need for suitable forms of Pre-emptive Recovery is obvious.

However, long before advanced research systems begin to look like serious contenders to win the Loebner Prize (indicating that a computer system has, to all intents and purposes, demonstrated human-level intelligence), a major ethics issue needs to have been resolved. At what stage does deleting an earlier version of some machine-intelligence software constitute deleting what, under different circumstances, would be viewed as a defenseless form of ‘life’? At what level of sophistication do basic animal rights become relevant?

Over the last several decades, it has usually been assumed that if a machine ever claims consciousness then it probably must be treated differently. But many mentally-handicapped humans could not communicate such a claim, yet they are nevertheless protected by rigorous legislation. In the absence of responsible legislative decision-making in advance of its need, the risk is that – fairly or unfairly – the general public will later turn on those governments perceived as having had a conflict of interest in regulating the very AI research in which they were themselves heavily involved.