AI firms warned to calculate threat of superintelligence or risk it escaping human control
Summary
Artificial intelligence companies have been urged to replicate the safety calculations that underpinned Robert Oppenheimer’s first nuclear test before they release all-powerful systems.

Max Tegmark, the MIT physics professor and AI safety researcher, said that AI firms should take responsibility for rigorously calculating whether artificial super-intelligence (ASI) – a term for a theoretical system that surpasses human intelligence in all respects – will evade human control.

“The companies building super-intelligence need to also calculate the Compton constant, the probability that we will lose control over it,” he said.

The Singapore Consensus on Global AI Safety Research Priorities report was produced by Tegmark, the world-leading computer scientist Yoshua Bengio, and employees at leading AI companies including OpenAI and Google DeepMind. It set out three broad areas to prioritise in AI safety research: developing methods to measure the impact of current and future AI systems; specifying how an AI should behave and designing a system to achieve that; and managing and controlling a system’s behaviour.