The AI Safety Summit – Will it Help Prevent AI’s Future "Existential Risk to Humanity"?
Amid safety concerns over the rapid evolution of Artificial Intelligence (AI), some AI creators are advocating a pause in its development, with Elon Musk describing AI as posing an ‘existential risk to humanity’.[1] In an attempt to open discussion about the risks associated with AI and the potential steps forward, Rishi Sunak is holding a two-day global AI Safety Summit on 1–2 November.
On the one hand, limiting the development of AI could prevent us from realising its full potential for important projects, such as the creation of vital new medicines or efforts to combat climate change.[2] However, the Summit centres mainly on Frontier AI systems, which, despite their ability to bring opportunities across a wide range of sectors, also pose key uncertainties and potential future risks. Whilst Sunak has been an advocate of the benefits that AI can bring, its potential dangers have prompted these discussions on future planning and regulation to ensure the technology's safe development.
Chinese AI scientists, alongside western academics, are among those pressing for stronger technology regulation than the UK, US and EU currently propose.[3] This appetite for tighter rules is likely to spur policymakers to act. Given the fast-moving nature of AI's development, I expect the opportunities (coupled with the risks) from Frontier AI to increase in reach and scope in the coming years. Leaders and policymakers will therefore need to strengthen regulation whilst ensuring the responsible scale-up of AI, and this AI Safety Summit appears to be a move in the right direction.
How well future harms are mitigated may depend on the level of public engagement with AI, which in turn depends on how the public's understanding of AI evolves; the media will likely play a big role in shaping that understanding. However, Sunak's Summit may be too focused on future dangers without addressing the more imminent risks that are likely to arise.[4] For example, the increased development and adoption of AI may disproportionately impact marginalised communities. AI tools can also accelerate the spread of misinformation, affect jobs (as we are already witnessing), and raise environmental concerns by consuming large amounts of energy.[5] The question is whether we should focus more on the threat AI poses to people, our communities, and the planet now, or direct attention towards pre-empting and mitigating more extreme potential future threats.
In sum, the AI Safety Summit can perhaps act as a beacon, beginning to guide us towards the innovative yet safe development of AI. Going forward, however, leaders and policymakers will undoubtedly need a nuanced approach that balances these more imminent concerns against the more extreme future risks.
[1] https://www.independent.co.uk/news/uk/politics/elon-musk-rishi-sunak-government-spacex-tesla-b2439715.html
[2] https://www.bbc.co.uk/news/technology-67172230
[3] https://www.theguardian.com/us-news/2023/jul/19/ai-generated-disinformation-us-elections
[4] https://www.ft.com/content/c7f8b6dc-e742-4094-9ee7-3178dd4b597f
[5] https://www.bbc.co.uk/news/technology-67172230