Personally identifiable data in humanitarian contexts is like nuclear fuel – it has immense power to do good, allowing us to reach millions of people with life-saving aid, but it is inherently dangerous. As with nuclear material, once it leaks, the damage is irreversible. And in humanitarian work, we must operate under the assumption that one day leaks will occur.
Take the example of Yemen. The crisis there presents a perfect storm:
- Sharply reduced humanitarian funding,
- Complex identification challenges across conflict lines,
- Millions of people in desperate need of immediate help.
While artificial intelligence could offer powerful solutions to guarantee a fair and effective distribution of aid, we must nevertheless face an impossible ethical choice in a context where consent cannot be truly informed:
Does consent make ethical sense when the alternative is starvation, or must we compromise our principles of informed consent to save lives?
Reality in Yemen: too many identity documents
AI solutions are not currently deployed in this way in Yemen, but the country’s situation provides a compelling case study for discussing these ethical dilemmas.
In Yemen, humanitarian agencies must navigate a Byzantine landscape of 26 different forms of functional identifiers – issued by pre-war authorities, current local administrations, warring factions and various administrative bodies. Each issuer claims legitimacy, and many beneficiaries hold multiple, sometimes contradictory, identity documents.
Traditional matching methods are collapsing under the weight of different Arabic name variations, inconsistent household definitions and programs operating across conflict lines. Add to that a large population lacking any formal identification, and you begin to understand why the promise of effective identification from AI is so tempting – and so ethically complex.
The overhead of identity deduplication
Humanitarian actors in Yemen face an impossible daily task: ensuring that the same person is not registered multiple times across different programs, especially when that person may present different identity documents at each registration.
Traditional matching methods for external and internal deduplication struggle under the weight of operational realities:
- Programs run simultaneously across conflict lines with different check-in points
- Multiple implementing partners use incompatible systems
- Data collection methods vary widely
- Arabic names can have multiple valid spellings and variations
- Even basic location data becomes unreliable due to inconsistent transliterations
- Household composition changes as families seek security
The result? In a context where humanitarian funding has plummeted, aid workers face an impossible choice: risk excluding people in desperate need through overly strict vetting, or risk depleting scarce resources through double registration. When every dollar counts and every delayed decision can mean life or death, we need better solutions.
AI opportunity for credential deduplication
Although not yet implemented in Yemen, recent advances in AI and machine learning could, in theory, offer solutions for humanitarian contexts such as the following (a minimal sketch of the first idea follows the list):
- Probabilistic matching algorithms could compare name variations across Arabic spellings
- Machine learning models could identify household composition patterns
- Natural language processing could standardize location data despite spelling differences
- Advanced entity resolution could link different functional identifiers
- Risk scoring could flag potential duplicates for review
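To make the first idea concrete, here is a minimal Python sketch of probabilistic name matching using only the standard library. The tiny transliteration table and the sample names are illustrative assumptions, not an operational vocabulary; a real system would use calibrated models and far richer Arabic transliteration rules.

```python
from difflib import SequenceMatcher

# Illustrative table of Latin transliteration variants of Arabic name
# elements -- an assumption for this sketch, not an operational vocabulary.
TRANSLIT_VARIANTS = {
    "mohammed": "muhammad",
    "mohamed": "muhammad",
    "mohammad": "muhammad",
    "abdul": "abd al",
    "abdel": "abd al",
}

def normalize(name: str) -> str:
    """Lowercase, split on hyphens and spaces, collapse known variants."""
    tokens = name.lower().replace("-", " ").split()
    return " ".join(TRANSLIT_VARIANTS.get(t, t) for t in tokens)

def match_score(name_a: str, name_b: str) -> float:
    """Similarity score in [0, 1] between two registered names."""
    return SequenceMatcher(None, normalize(name_a), normalize(name_b)).ratio()

# Two different spellings of the same person normalize to the same string.
print(match_score("Mohammed Abdul Rahman", "Muhammad Abdel-Rahman"))  # 1.0
```

In practice, such a score would feed the risk-scoring step above rather than make any decision on its own.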
As in effective social welfare systems and private-sector programs, any AI solution would require careful human supervision throughout these four steps (a routing sketch follows the list):
- AI flags potential matches based on probability scores
- Program staff reviews high-confidence matches
- Local staff verify cultural naming patterns
- Field teams perform physical verification if necessary
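One way those four steps might be wired together is sketched below: probability scores from the matcher route each candidate duplicate to the appropriate human tier. The stage names and thresholds are assumptions for illustration; real cut-offs would have to be calibrated against field data.

```python
from dataclasses import dataclass
from enum import Enum

class ReviewStage(Enum):
    STAFF_REVIEW = "program staff review high-confidence match"
    CULTURAL_REVIEW = "local staff verify naming pattern"
    FIELD_VERIFICATION = "field team verifies in person"
    NO_MATCH = "treat as distinct individuals"

@dataclass
class CandidateMatch:
    record_a: str
    record_b: str
    score: float  # probability score produced by the matching model

def route(match: CandidateMatch) -> ReviewStage:
    """Route a flagged candidate to a human review tier.

    Thresholds are illustrative assumptions, not calibrated values.
    """
    if match.score >= 0.95:
        return ReviewStage.STAFF_REVIEW
    if match.score >= 0.80:
        return ReviewStage.CULTURAL_REVIEW
    if match.score >= 0.60:
        return ReviewStage.FIELD_VERIFICATION
    return ReviewStage.NO_MATCH

print(route(CandidateMatch("HH-001", "HH-532", 0.87)).value)
# -> local staff verify naming pattern
```

The point of the design is that the model only triages human attention: it never merges or excludes a record on its own, and anything below the lowest threshold defaults to treating people as distinct.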
Humanitarian contexts pose unique challenges around data sovereignty, conflict sensitivity, and the urgency of life-saving assistance. Responsible AI solutions must be adapted to these realities.
The ethical trap of responsible AI
Now imagine an aid worker in Yemen using an AI solution to solve the problem of ID deduplication. Every day, they greet families who arrive at registration points with multiple functional IDs issued by different authorities, desperate for immediate food assistance.
They would have to say something like:
“You are obviously a person in Yemen who is suffering from famine. I want you to give me your informed consent to share your personal information with an AI model that I didn’t create, which is on a server in America, to do something that I don’t really understand, to help me do my job. If you do not give me this informed consent, you will not receive food or life-saving support.”
They – and we – are faced with an impossible choice:
- Requiring consent for AI processing that they (and we!) cannot meaningfully understand
- Maintaining parallel, AI-free systems that we can’t afford
- Excluding them from life-saving assistance
This is the central dilemma of obtaining consent in emergencies: do we deny life-saving assistance to those who cannot meaningfully consent to AI-powered identification systems, when the alternative is starvation?
This forces us to confront uncomfortable questions about power, privilege and paternalism in humanitarian aid.
- Do we let the perfect be the enemy of the good?
- What level of consent is truly ethical in the face of urgent needs with limited resources?
NetHope points out that humanitarian AI must “do no harm” while ensuring effective governance and accountability. Meanwhile, organizations like GiveDirectly are pioneering the ethical implementation of AI in cash assistance programs, demonstrating how tiered consent models can help balance urgent needs with ethical considerations.
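To make “tiered consent” concrete in data terms, here is a minimal Python sketch under assumed tier names; it is not GiveDirectly’s actual design. The key rule it encodes is the decoupling of aid from algorithmic consent: registration data alone is enough to receive assistance, even when AI processing is declined.

```python
from dataclasses import dataclass, field
from enum import Enum

class ConsentTier(Enum):
    BASIC_REGISTRATION = 1  # data used only for this program's distribution list
    CROSS_PROGRAM = 2       # data shared with other implementing partners
    AI_PROCESSING = 3       # data processed by automated matching models

@dataclass
class ConsentRecord:
    beneficiary_id: str
    granted_tiers: set[ConsentTier] = field(default_factory=set)

    def permits(self, tier: ConsentTier) -> bool:
        return tier in self.granted_tiers

# A family that declines AI processing still qualifies for assistance.
record = ConsentRecord("HH-00123", {ConsentTier.BASIC_REGISTRATION})
assert record.permits(ConsentTier.BASIC_REGISTRATION)
assert not record.permits(ConsentTier.AI_PROCESSING)
```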
The widespread availability of AI tools is forcing us to directly confront challenges that have always existed in humanitarian aid – but were previously theoretical or easier to ignore. When every field worker has access to powerful AI tools on their phone, abstract ethical discussions become daily operational decisions.
Moving forward together
The humanitarian sector cannot wait for perfect solutions while urgent needs go unmet. We must begin to address these ethical challenges now, learning and adapting as we go. Your experiences and ideas matter in this conversation.
- Have you faced similar ethical dilemmas in your humanitarian work?
- How do we reconcile technological innovation with responsible implementation?
While we may not have all the answers, the decisions we make today will shape how AI meets the humanitarian needs of tomorrow. Let’s make sure we make these decisions together, keeping ethics and human dignity at the center of our innovation.
By Thomas Byrnes; originally published under the title “The ethics of AI identity matching in humanitarian aid: when perfect consent meets impossible choices”.