How will we unite globally against an existential threat?
Building on my experience at the AFS Youth Assembly 2025 in New York, I discuss the existential threats that AI and other emerging technologies pose to society.

The existential questions that arise at the intersection of philosophy and technology are not new to this publication.
I have previously used terms such as sociopathic intelligence and unethical AI to highlight the profound topics that are central to this publication.
Existential questions on ethics in AI, accountability and responsibility in tech systems, bias in algorithms, data privacy, and the environmental impacts of emerging technologies are all staples of the discussions taking place.
A previous article on the possibilities of testing AI consciousness discusses the ethical, social and economic implications of artificial intelligence qualifying as conscious - a prospect that may fill many with equal measures of dread and awe.
Inevitably and understandably, when navigating what feels like an almost lawless land in which companies can develop and release any technological system, individuals look to governments to provide appropriate and effective AI-specific regulation.
While attending as a delegate at the International Affairs Academy, part of the AFS Youth Assembly 2025 in New York, we received a presentation from Dr Purnaka Lohendra de Silva, a professor of Political Science, International Relations, and Diplomacy with experience of diplomacy within the United Nations system.
Dr de Silva’s presentation focused on how AI is shaping the world, international affairs, and humanity, emphasising that, as young people, youth leaders, and activists, we will face a future in which AI holds a very large presence, and that we need to be aware of and informed about it.
Moreover, not only do we need to be aware of it, but we must also understand how we can retain control of it and use it in a ‘humane’ way, for the betterment of society.
Following the presentation, we watched the 2024 documentary The End of Humanity. The film touches upon more theological and humanistic concepts, including the contentious topic of transhumanism. However, the documentary questions the view that humankind is doomed by the advent of systems ‘more intelligent’ than ourselves, instead emphasising the need for a global effort to make choices now that will allow for a humane future.
As someone who spends a large amount of time pondering what future will be forged from the opportunities and risks these technologies present, seeing this gloomy view of humans as inefficient and replaceable in comparison to AI fuelled my own existential concerns.
Nonetheless, as young leaders we took a proactive approach. Our response to the presentation and the documentary was an extended session in which we discussed a number of strategies for making this humane future not just a possibility but a certainty.
As I mentioned previously, emerging technology and AI are a tough beast to tackle from a regulatory point of view. However, it is an issue that should be tackled now, so as to prevent otherwise inevitable future problems such as ethical debt, misuse of AI systems, and the implementation of technology without appropriate ethical frameworks.
A joined-up approach to AI governance requires broad engagement. It must be undertaken with a global point of view, with international organisations bearing a large share of the responsibility in these discussions.
Initial strategies need to face the problems head on: global AI governance frameworks that ensure both countries and private companies are transparent about the development and implementation of their systems.
This approach of increasing the responsibility of organisations extends to areas such as algorithmic accountability and ensuring responsible military uses of AI.
Furthermore, increasing AI literacy and making sure the public has a better understanding of these technologies is essential to the success of ethical AI governance. Individuals applying their own critical analysis and thought to topics of global importance would ensure that governance frameworks are heeded by greater numbers.
Continuing this focus on the public means supporting technology development that prioritises the greater good and serves people across society. Technology has the potential to effectively mitigate the consequences of other global issues.
Moreover, when facing these issues with existential implications, there is a need to create early warning systems for crises arising from monitored risk factors, disinformation, or escalation.
Lastly, AI ethics should be embedded in multilateral systems, ensuring that ethical viewpoints on topics such as climate, health, and education are taken into account. With so much AI development focused on profit maximisation, it is important to identify ways in which AI can instead be an innovator focused on the betterment of humanity.
Concentrating on this side of the discussion would help us understand how we can solve our existential challenges - for example, through more not-for-profit incentives at an individual or company level.
Having these discussions with technology developers, thought leaders, policy makers and youth activists is essential to suitable and effective technology and AI governance. Seeing how these conversations develop, and how international organisations come together, will certainly shape the future.
Read more articles from ‘A Philosophy student’s take on Ethics of AI’ by subscribing to the Substack here!
About the Creator
Allegra Cuomo
Interested in Ethics of AI, Technology Ethics and Computational Linguistics
Subscribe to my Substack ‘A philosophy student’s take on Ethics of AI’: https://acuomoai.substack.com
Also interested in music journalism, interviews and gig reviews


