Expert Speak Young Voices
Published on Mar 06, 2025

From crisis prediction to aid distribution, AI is reshaping humanitarian efforts, but challenges like bias, privacy, and accountability remain

AI in humanitarian missions: Opportunities and challenges


Humanitarian crises are becoming increasingly complex, driven by factors such as protracted conflicts, climate change, global pandemics, and mass displacement. These challenges have burdened humanitarian mechanisms, necessitating innovative approaches to address urgent needs. Technology offers hope here. Its integration in humanitarian action has led to transformative changes, enabling faster responses, improved resource allocation, and data-driven decision-making. In this context, Artificial Intelligence (AI) has emerged as a game-changer with diverse applications in the humanitarian sector.


AI has the potential to revolutionise humanitarian missions by enhancing disaster preparedness, optimising resource allocation, and improving response and recovery efforts. Its ability to analyse vast datasets, forecast crises, and streamline operations offers unmatched scope for dealing with global crises. Machine learning (ML) algorithms can process real-time information from satellites, social media, and ground sensors with great accuracy to predict disasters and identify vulnerable populations. These systems can rapidly assess damage in affected areas, track population movements, and coordinate relief efforts. Through natural language processing (NLP), AI can analyse social media posts and aerial imagery to create real-time maps of disaster zones, helping emergency responders locate those in need. By leveraging deep learning algorithms and facial recognition technology, AI has aided in tracking and locating missing individuals in crisis zones.

Applications

During the COVID-19 pandemic, AI played a vital role in healthcare: machine learning was used to predict infection hotspots, and data-driven, AI-powered diagnostics improved testing capacity, speed, and accuracy. Replicating this success, AI can streamline humanitarian efforts by using ML and predictive analytics to improve resource allocation, preparedness, and recovery.

One of the most significant applications of AI is in disaster preparedness and response. AI systems can analyse vast amounts of data to provide crucial information about possible risks to affected communities. Predictive analytics, driven by statistical models and data-driven learning, can forecast natural disasters, refugee movements, global health crises, and famines. For instance, the Project Jetson initiative by the Office of the United Nations High Commissioner for Refugees (UNHCR) uses predictive analytics to forecast forced displacement, and was developed in response to the escalation of violence in Somalia. The project draws on many data sources to train its ML algorithm, including market prices, river water levels, rainfall patterns, remittance data, and data collected by the UNHCR. Predictive analytics also enable early warning systems that forecast natural disasters like floods and earthquakes, allowing humanitarian organisations to act pre-emptively to save lives and resources.
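To make this concrete, the sketch below shows a toy early-warning score that combines the kinds of indicators Project Jetson reportedly draws on (rainfall, market prices, river levels). The indicator names, weights, and thresholds here are hypothetical illustrations, not UNHCR's actual model.

```python
# Hypothetical indicators and weights -- illustrative only, not Project Jetson's model.
INDICATORS = {
    "rainfall_deficit": 0.4,      # drought pressure (normalised 0-1)
    "staple_price_change": 0.35,  # market stress (normalised 0-1)
    "river_level_drop": 0.25,     # water scarcity (normalised 0-1)
}

def displacement_risk(observations: dict) -> float:
    """Return a 0-1 risk score from normalised indicator values."""
    score = 0.0
    for name, weight in INDICATORS.items():
        value = observations.get(name, 0.0)
        score += weight * max(0.0, min(1.0, value))  # clamp each input to [0, 1]
    return score

def alert_level(score: float) -> str:
    """Map a risk score onto simple early-warning tiers."""
    if score >= 0.7:
        return "red"
    if score >= 0.4:
        return "amber"
    return "green"

# Example: severe drought and price spikes, moderate river-level drop.
obs = {"rainfall_deficit": 0.9, "staple_price_change": 0.8, "river_level_drop": 0.5}
print(alert_level(displacement_risk(obs)))  # prints "red"
```

A production system would learn the weights from historical data rather than fix them by hand, but the structure — many noisy indicators fused into one actionable alert tier — is the core idea behind such early-warning tools.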


Technological advances in deep learning, NLP, and image processing have significantly enhanced the capability of AI systems to support humanitarian responses during crises. These advancements have enabled rapid classification of social media messages and satellite imagery, allowing humanitarian organisations to assess situations as they unfold and identify areas where aid is urgently needed. NLP can analyse vast amounts of social media data to detect distress signals, track unfolding events, and understand the needs of affected communities in real time.
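The message-classification step described above can be illustrated with a deliberately minimal sketch: a keyword filter standing in for a trained NLP classifier. Real systems use learned language models rather than hand-written patterns; the pattern list below is hypothetical.

```python
import re

# Hypothetical distress patterns -- a stand-in for a trained NLP classifier.
DISTRESS_PATTERNS = [
    r"\btrapped\b",
    r"\bflood(ed|ing)?\b",
    r"\bcollapsed\b",
    r"\bsos\b",
    r"\bneed (water|food|rescue)\b",
]

def is_distress_signal(post: str) -> bool:
    """Flag a social-media post that matches any distress pattern."""
    text = post.lower()
    return any(re.search(p, text) for p in DISTRESS_PATTERNS)

def triage(posts: list) -> list:
    """Return only the posts flagged as distress signals, for human review."""
    return [p for p in posts if is_distress_signal(p)]

posts = [
    "Family trapped on the roof, water rising fast",
    "Beautiful sunset over the bay today",
    "SOS - building collapsed near the market",
]
print(triage(posts))  # keeps the first and third posts
```

The key design point carries over to real deployments: the model filters a firehose of messages down to a reviewable queue, and humans make the final call on each flagged item.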

Similarly, deep learning techniques applied to satellite imagery have enabled detailed mapping of disaster-affected areas, providing critical insights into the extent of damage. A notable example is the Rapid Mapping Service, a joint initiative by the UN Institute for Training and Research (UNITAR), the UN Satellite Centre (UNOSAT), and the UN Global Pulse. The project applies AI-powered tools to satellite imagery to rapidly map areas impacted by floods, earthquakes, landslides, and conflicts. By providing accurate real-time damage assessments, the service informs humanitarian actors on the ground, enabling streamlined resource allocation and aid delivery. When Tropical Cyclone Eloise hit Mozambique in 2021, UNOSAT used AI to monitor floods and inform aid delivery efficiently.
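The underlying logic of damage mapping can be reduced to a toy before/after comparison on a coarse grid. This sketch stands in for the deep-learning pipelines described above; the per-cell "intactness" values are hypothetical readings, not real satellite data.

```python
def damage_map(before, after, threshold=0.3):
    """Flag grid cells whose intactness score dropped by more than `threshold`."""
    flagged = []
    for row in range(len(before)):
        for col in range(len(before[row])):
            drop = before[row][col] - after[row][col]
            if drop > threshold:
                flagged.append((row, col))
    return flagged

# 3x3 grid of intactness scores (1.0 = undamaged) before and after a cyclone.
before = [[1.0, 1.0, 0.9],
          [1.0, 0.8, 1.0],
          [0.9, 1.0, 1.0]]
after  = [[1.0, 0.4, 0.9],
          [0.2, 0.8, 1.0],
          [0.9, 0.3, 1.0]]

print(damage_map(before, after))  # prints [(0, 1), (1, 0), (2, 1)]
```

In an operational system, the "intactness" per cell would come from a neural network scoring image tiles, but the output is the same in spirit: a map of cells where conditions changed sharply, directing responders to the hardest-hit areas.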

AI is also transforming humanitarian recovery missions by helping track, locate, and reunite separated families and rebuild communities. The International Committee of the Red Cross’s ‘Trace the Face’ initiative uses facial recognition technology to help refugees and migrants reunite with families separated by conflict or disasters.

Challenges

As humanitarian actors increasingly rely on AI for disaster preparedness, response, and recovery efforts, ethical, social, and operational questions emerge. As with AI applications in other domains, concerns such as data quality, algorithmic bias, data privacy, and overreliance on automated systems pose serious risks, particularly when dealing with vulnerable populations.

Data quality is critical as AI systems rely on large datasets for training. Poor data quality can affect outcomes, and obtaining high-quality data during conflicts or crises is often constrained. For instance, during the 2010 Haiti earthquake, initial AI-powered damage assessment systems struggled with accuracy because they were trained primarily on data from earthquakes in developed nations with different building structures and urban layouts. AI systems trained with inaccurate, biased, or incomplete data are likely to perpetuate these inaccuracies. Algorithmic bias arising from poor data quality is another concern. Bias in the design and development of AI systems reflects the stereotypes and prejudices of developers, potentially leading to unequal outcomes and discrimination. For example, an AI tool designed to allocate aid might disproportionately allocate resources to areas with better data, marginalising others.
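The aid-allocation example above can be made concrete with a short sketch showing how allocating in proportion to *reported* need penalises regions with poor data coverage. All numbers here are hypothetical.

```python
def allocate(reported_need: dict, budget: float) -> dict:
    """Split a budget proportionally to reported need per region."""
    total = sum(reported_need.values())
    return {region: budget * need / total for region, need in reported_need.items()}

# Hypothetical scenario: both regions have identical true need, but region B's
# surveys and sensors capture only 40% of it.
true_need = {"A": 100, "B": 100}
coverage  = {"A": 1.0, "B": 0.4}
reported  = {r: true_need[r] * coverage[r] for r in true_need}

shares = allocate(reported, budget=1000)
print(shares)  # region B is under-allocated despite equal true need
```

The bias here is not in the allocation formula itself but in the data feeding it, which is why auditing data coverage is as important as auditing the model.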


Data privacy presents serious ethical dilemmas, especially when handling sensitive information from vulnerable populations. Breaches or misuse of data can lead to exploitation. For instance, in 2017, the Rohingya refugee crisis highlighted these risks when biometric data collected for aid distribution by the UNHCR was shared with the governments of Bangladesh and Myanmar without the consent of the refugees. Moreover, individuals may find it difficult to provide informed consent for data use, as their information could be repurposed for AI development. Critics have also raised concerns about ‘surveillance humanitarianism,’ where the increasing use of technology could paradoxically put those in need at greater risk by exposing their information. This may create new adversities for those seeking assistance.

Resource constraints and the high costs associated with AI technologies can hinder adoption, especially by small organisations. Infrastructure limitations in crisis-affected regions, such as unreliable internet and electricity, further complicate deployment.

Way forward

The ethical deployment of AI in humanitarian missions requires a comprehensive approach to maximise benefits while mitigating associated risks. Central to this is the humanitarian principle of ‘Primum Non Nocere’, which translates to ‘First, Do No Harm’. The principle requires that humanitarian actors carefully evaluate how their interventions, or lack thereof, could unintentionally cause harm or create new risks for the very communities they aim to help.

Data privacy is another vital area requiring stringent measures to protect sensitive information. Adopting frameworks like the European Union’s General Data Protection Regulation (GDPR) can provide a robust model for safeguarding data. Its key principles, including informed consent, data minimisation, and the right to be forgotten, ensure that individuals retain control over their private information. Such data protection regulation can be particularly relevant when applied to humanitarian settings, where protecting sensitive information is essential to maintaining community trust and preventing exploitation.


An overarching framework for transparency and accountability is essential to build confidence in AI systems. As the central coordinating body for humanitarian response, the UN Office for the Coordination of Humanitarian Affairs must establish clear guidelines requiring organisations to commit to openness about their data sources, disclose how AI tools function, and ensure transparency in their decision-making processes. This should be accompanied by mechanisms to strengthen accountability, such as grievance redressal and independent audits. Co-developing AI tools with inputs from local populations would ensure that solutions are ethically appropriate and suited to local needs. Embedding these recommendations into AI design and deployment can balance innovation with ethical responsibility, ensuring AI-powered solutions align with basic humanitarian principles such as impartiality and equity.

Conclusion

To fully realise the benefits of AI in humanitarian action, it is essential to adopt a balanced approach that prioritises ethical considerations along with technological innovation. This includes adopting robust data protection measures, fostering collaboration among stakeholders, and ensuring the transparency and accountability of AI systems. Stakeholders must also invest in responsible AI practices which align with humanitarian principles. This can include co-developing solutions with local communities and addressing structural barriers to AI access. By doing so, AI can be a powerful tool in enhancing humanitarian missions and delivering impactful solutions that uphold values of fairness, justice, and humanity.


Samar Jai Singh Jaswal is a Research Intern at the Observer Research Foundation.

The views expressed above belong to the author(s).