The past decade has seen a surge in the use of mobile telecommunications, messaging apps and social media. As they become more accessible around the world, these technologies are also being used by the humanitarian sector to coordinate responses, communicate with staff and volunteers, and engage with the people they serve.
Humanitarian organisations collect and generate growing amounts of metadata: data about other data. In their most common form, metadata are the data that are generated around a message, but not the content of the message.
They generate these metadata in several ways, for example:
- Through their exchanges, both internally and with people affected by crises, such as sharing “info-as-aid” over messaging apps, SMS and/or social media;
- Through their programmes, including cash-transfer programmes that use mobile cash or smartcards;
- Through their monitoring and evaluation systems, which run data analytics on programme data to demonstrate impact and detect fraud.
To reconcile these actions with the “do no harm” principle, the humanitarian community must better understand the risks associated with the generation, exposure and processing of metadata. This is particularly important for organisations that enjoy certain privileges and immunities but that are not able to counter these risks alone.
The Humanitarian Metadata Problem: Doing no harm in the digital era, commissioned by the International Committee of the Red Cross and Privacy International, notes that humanitarian organisations need to better understand how data and metadata collected or generated by their programmes, for humanitarian purposes, can be accessed and used by other parties for non-humanitarian purposes (e.g. by profiling individuals and using these profiles for ad targeting, commercial exploitation, surveillance and/or repression).
Seven Humanitarian Metadata Risks
Below are seven risks associated with the use of traditional telecommunication services (including voice and SMS), messaging applications, cash-transfer programming and social media. While each type of service is discussed separately, they may overlap where financial companies are also telecommunication companies or where social media providers also own messaging applications.
Telecommunications and messaging
The labels 2G, 3G and 4G actually describe families of protocols that operate over different frequencies, use different encryption algorithms and support different speeds. Even with the gradual rollout of 4G/LTE for data connections, most mobile network operators still fall back to the much less secure 2G protocol for voice and SMS communications.
This means that the metadata and content of telecommunications are still at risk of being intercepted between a given phone and the phone tower routing the communications. Moreover, telecommunication networks were not designed to deliver emergency-scale loads of SMS traffic – hence the high failure rates when the network is saturated. This calls into question the recommended use of SMS campaigns in crisis situations.
Risks.
When using telecommunications, humanitarian organisations put all parties involved at risk of their telecommunication data (message or call content) being intercepted and the associated metadata (sender/recipient, time and location) being accessed.
Even when calls or messages are not being exchanged, mobile phones regularly “ping” nearby cell towers to ensure the best possible continuous service. As a result, users can be tracked through these tower interactions. This tracking continues even when the phone is not being used or is in sleep mode, and in some cases even when it is turned off.
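As a rough illustration of why these pings matter, the Python sketch below estimates a handset’s position from nothing more than the towers it has contacted. The tower coordinates and signal readings are entirely invented, and the weighted-centroid method is a crude stand-in: real networks locate phones far more precisely using timing and triangulation.

```python
# Illustrative sketch only: estimates a phone's coarse position from the
# cell towers it has pinged, via a signal-strength-weighted centroid.
# All coordinates and readings below are invented.

# (latitude, longitude, received signal strength in dBm) for three towers
# that logged a ping from the same handset.
pings = [
    (36.8065, 10.1815, -60),  # strongest signal: handset is nearest here
    (36.8190, 10.1658, -75),
    (36.7948, 10.1990, -85),
]

def weighted_centroid(pings):
    """Average the tower positions, weighting stronger signals more heavily.

    Real networks use timing advance and triangulation, which are far more
    precise; this only shows that tower metadata alone already yields a
    usable location estimate.
    """
    weights = [10 ** (dbm / 20) for _, _, dbm in pings]
    total = sum(weights)
    lat = sum(w * p[0] for w, p in zip(weights, pings)) / total
    lon = sum(w * p[1] for w, p in zip(weights, pings)) / total
    return lat, lon

print("Estimated position: %.4f, %.4f" % weighted_centroid(pings))
```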
Mitigation.
End-to-end encrypted, secure communication methods should be used instead of voice or SMS, even though they do not always prevent metadata from being accessed. Even with the stronger encryption of VoLTE (Voice over LTE), downgrade attacks (which force the device to switch to a less secure encryption method) remain possible, and these less secure methods only encrypt the link between the phone and the tower, not the full path to the recipient.
Until there is more widespread and routine use of end-to-end encrypted communications that minimise metadata, humanitarian organisations should conduct advance risk assessments for all telecommunications exchanges, always planning for scenarios in which third parties are able to gain access to the content, time and location of all exchanges.
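To make this concrete, here is a minimal Python sketch of end-to-end encryption using the PyNaCl library (assuming pip install pynacl; the phone numbers are placeholders). It illustrates exactly the trade-off described above: the content becomes unreadable in transit, but the metadata needed to route the message stays visible to every intermediary.

```python
# Minimal end-to-end encryption sketch using PyNaCl (pip install pynacl).
# Encryption protects the content, but the metadata needed to route the
# message remains readable to carriers and anyone intercepting the link.
import time
from nacl.public import PrivateKey, Box

alice_key, bob_key = PrivateKey.generate(), PrivateKey.generate()

# Alice encrypts so that only Bob's private key can decrypt (end-to-end).
box = Box(alice_key, bob_key.public_key)
ciphertext = box.encrypt(b"Meet at the distribution point at 14:00")

# What actually travels over the network: opaque content, transparent metadata.
on_the_wire = {
    "sender": "+216XXXXXXXX",     # placeholder number, visible in transit
    "recipient": "+216YYYYYYYY",  # placeholder number, visible in transit
    "timestamp": time.time(),     # visible in transit
    "payload": ciphertext,        # unreadable without Bob's private key
}

# Bob, and only Bob, can recover the content.
plaintext = Box(bob_key, alice_key.public_key).decrypt(on_the_wire["payload"])
print(plaintext.decode())
```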
Messaging Apps
Messaging apps use a number of different encryption algorithms with varying levels of transparency as to how that encryption is integrated in the app. In many cases, encryption is only applied to specific types of communications on the app (e.g. when communications are set to private mode).
Encryption methods include end-to-end encryption: if SMS messages are like postcards, where everything can be read, end-to-end encrypted messaging apps are like sealed envelopes, where the local provider can see only the sender and the destination. They also include SSL/TLS tunnels, roughly equivalent to putting another envelope around the first and marking the messaging platform – e.g. WhatsApp – as the destination.
Finally, there is a now-defunct method called domain fronting: if a messaging application was banned in a particular location, domain fronting allowed the app provider to put a third envelope around the first two and write the name of a permitted domain on it – typically a domain deemed too large to ban outright (e.g. Google or Amazon).
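The envelope analogy can be made literal in code. The Python sketch below uses symmetric Fernet keys from the cryptography library (pip install cryptography) as stand-ins for the real protocols; all names and addresses are invented.

```python
# The three "envelopes" described above, sketched with the cryptography
# library. Fernet is a stand-in for the real protocols; the point is what
# each observer can read on the outside of each layer.
from cryptography.fernet import Fernet

e2e = Fernet(Fernet.generate_key())  # key shared only by sender and recipient
tls = Fernet(Fernet.generate_key())  # key shared by the phone and the platform

# Envelope 1 (end-to-end encryption): only the recipient can read the content.
envelope_1 = e2e.encrypt(b"Meet at the distribution point at 14:00")

# Envelope 2 (SSL/TLS tunnel): wraps envelope 1; the network can still read
# the address label, i.e. which messaging platform the traffic goes to.
envelope_2 = b"to: messenger.example | " + tls.encrypt(envelope_1)

# Envelope 3 (domain fronting, now defunct): the visible label names a domain
# too big to block outright; the real platform address moves inside.
envelope_3 = b"to: big-cloud-provider.example | " + tls.encrypt(envelope_2)

# All a network observer sees on the fronted message is the outer label.
print(envelope_3.split(b" | ", 1)[0])
```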
However, some of these methods are still susceptible to attacks – like a man-in-the-middle attack, where a third party poses as the messaging platform to the user and as the user to the messaging platform in order to intercept exchanges. While many messaging apps automatically warn users when another party’s encryption key changes (which could indicate a man-in-the-middle attack, or simply that the other party has a new phone), most users have been trained to click “OK” on prompts and error messages, which bypasses the added security the warning would otherwise provide.
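The key-change warnings mentioned above boil down to comparing key fingerprints. The simplified Python sketch below is not any real app’s algorithm (Signal, for instance, derives its safety numbers differently); it only illustrates the principle: both parties derive the same short code from the session’s public keys and compare it over a separate channel, such as in person or by voice.

```python
# Simplified key-fingerprint verification, the mechanism behind
# "security code changed" prompts. Illustrative only; real apps use
# their own derivations. The key bytes below are invented placeholders.
import hashlib

def fingerprint(own_public_key: bytes, peer_public_key: bytes) -> str:
    """Derive a short human-comparable code from both public keys.

    The keys are sorted so that both parties compute the identical code.
    If a man-in-the-middle substitutes a key, the codes will not match.
    """
    material = b"".join(sorted([own_public_key, peer_public_key]))
    digest = hashlib.sha256(material).hexdigest()
    digits = str(int(digest[:20], 16)).zfill(25)  # 25 decimal digits
    return " ".join(digits[i:i + 5] for i in range(0, 25, 5))

# Both sides compute the code locally; a mismatch signals interception.
alice_view = fingerprint(b"alice-public-key", b"bob-public-key")
bob_view = fingerprint(b"bob-public-key", b"alice-public-key")
assert alice_view == bob_view
print(alice_view)  # read aloud or compared in person to verify
```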
Risks.
While some messaging apps encrypt message content during communications, they also commonly ask the user to reveal more data, share more data than the user may realise (such as the SIM and device identifiers – IMSI and IMEI – and information about the phone), or ask the user to grant the app permission to access other information on their device, such as location, photos and contacts.
This allows the messaging app provider to gather extensive information on the user over time. For instance, a messaging app could infer – from the timing and frequency of your calls or SMS communications – when you wake up, when you go to sleep, what time zone you’re in, and who your closest friends are.
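A toy Python example of this kind of inference, run over an invented message log containing only timestamps and contact names (no content):

```python
# Toy metadata-inference sketch: timestamps alone reveal activity patterns
# and a closest-contacts ranking. The log below is invented; a provider
# would hold months of such records, making the inferences far sharper.
from collections import Counter
from datetime import datetime

message_log = [  # (ISO timestamp, contact) -- metadata only, no content
    ("2024-03-01T07:45:00", "amina"), ("2024-03-01T09:10:00", "youssef"),
    ("2024-03-01T13:30:00", "amina"), ("2024-03-01T22:50:00", "amina"),
    ("2024-03-02T08:05:00", "amina"), ("2024-03-02T21:15:00", "karim"),
    ("2024-03-02T23:40:00", "amina"), ("2024-03-03T07:55:00", "youssef"),
]

active_hours = {datetime.fromisoformat(ts).hour for ts, _ in message_log}
quiet_hours = [h for h in range(24) if h not in active_hours]

# With enough data, the overnight gap isolates the user's sleep window
# (and hence a likely time zone).
print("Hours with no observed activity:", quiet_hours)
print("Most contacted:", Counter(c for _, c in message_log).most_common(2))
```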
Mitigation.
Humanitarian organisations could discuss how to increase data and tech literacy among staff, volunteers and crisis-affected people when messaging apps are used to communicate. This would allow these users to make informed decisions about what information they share on what platforms.
Risk assessments for the use of messaging apps should also take into account not only what data have to be declared by the user, but also what can be inferred over time, depending on the device information the apps can access. Messaging apps can also share information among themselves if they are run by the same provider or belong to the same family of apps.
This makes it all the more important to map out who has access to what data, under which jurisdiction. Finally, the humanitarian community could explore what leverage they have to negotiate greater protection or discretion from messaging app providers in certain situations.
Cash Transfer Programmes
In cash-transfer programmes (CTP), humanitarian organisations provide cash or vouchers directly to crisis-affected people. CTP’s growing use of digital and telecommunication technologies has enabled greater financial inclusion. However, these third-party technologies also make it easier for the individuals registered to be identified. Their increased digital visibility creates risks of discrimination and persecution.
Mobile money refers to the use of mobile wallets, where funds can be transferred using a mobile-phone-based system. This CTP delivery method does not require a bank account, but it does rely on third-party domestic telecommunications companies.
Risks.
Mobile money transaction details are often reported to the recipient via an unencrypted SMS. Thus, even when the electronic transfer itself is encrypted, the details of the transaction are not, and they can be intercepted in transit or read by other apps on the recipient’s phone.
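To illustrate in Python: the message format below is invented, loosely modelled on typical mobile-money confirmations, but any app with permission to read SMS could extract the full transaction record in exactly this way.

```python
# Why unencrypted confirmation SMS matter: one regular expression recovers
# the whole transaction record. The SMS format here is invented.
import re

sms = ("TX7GH2 Confirmed. You have received 120.00 USD from HUMANITARIAN ORG "
       "on 14/05/24 at 10:32 AM. New balance: 120.00 USD.")

pattern = re.compile(
    r"(?P<ref>\w+) Confirmed\. You have received (?P<amount>[\d.]+) "
    r"(?P<currency>\w+) from (?P<sender>.+?) on (?P<date>\S+) at "
    r"(?P<time>\S+ [AP]M)\. New balance: (?P<balance>[\d.]+)"
)

match = pattern.search(sms)
if match:
    # One unencrypted message yields the full transaction record, including
    # the fact that the sender is a humanitarian organisation.
    print(match.groupdict())
```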
Moreover, the domestic telecommunications company may be obliged (e.g. by Know Your Customer regulations) or inclined (e.g. for their commercial partnerships) to share data collected or inferred from the CTP. These data can be used to financially profile a person, and this may restrict their access to financial services in the future.
Mitigation.
The use of mobile money should be preceded by the same type of risk assessment proposed for telecommunications. Because the use of CTPs is strongly associated with humanitarian programmes, organisations should take steps to ensure that persons registered in these programmes are not automatically associated with specific identity factors.
For instance, in a situation where a minority group is being persecuted, humanitarian organisations should be wary of launching a CTP that they know will only attract people from that group. Rather, a wide variety of people should be registered, as this will prevent the CTP participant list from becoming an indirect census of that group.
Finally, humanitarian organisations should also check who owns/controls the telecommunications operations involved in a CTP. This may reveal useful information on how the company operates and what additional threats or risks there may be regarding data sharing (e.g. if the company has an incentive to share data with the host government, which could be undesirable).
Banking
Some CTPs require that individuals set up a bank account or use an existing one. The involvement of the banking sector means that access to personal information can be extended to third parties like national anti-corruption and financial intelligence bodies, other banks from the same banking group, intermediary banks, credit bureaus and credit rating agencies.
Moreover, banks usually require a significant amount of information to set up an account (e.g. under Know Your Customer regulations). Using these data along with transactional metadata, they are able to infer a large amount of information about their clients (such as periods of informal employment and political and religious leanings).
Risks.
As mentioned above, depending on the bank’s regulatory framework and broader partnerships, individual data collected through a CTP can be shared with other parties, both domestic and international.
These data can be used to create and monitor an individual’s credit profile, with potential repercussions on their access to credit; to track their movements across borders (e.g. in the case of international banking groups); or to discriminate against them on the basis of inferred political or religious affiliations.
Mitigation.
When selecting the bank for a CTP, humanitarian organisations should map the country’s data-sharing laws and practices as well as the bank’s ownership, partnerships and information-sharing agreements. They should also try to negotiate a “no sharing” agreement for CTP data and limit the data-retention period to ensure that CTP data will not be automatically stored for decades after the programme has ended.
Smartcards
Smartcards are similar to electronic wallets in that they can be used to transfer and spend cash. Their electronic chip links the wallet to a specific owner and keeps track of the account balance.
Each smartcard transaction generates a record that is geo-located and time-stamped and that includes the transaction amount. The record also identifies the payment terminal used to process the transaction, the shop itself and, in some cases, the items purchased.
Risks.
Smartcard metadata are usually sufficient to identify an individual with a high degree of precision. Behavioural patterns, physical movements and purchasing habits can then all be inferred and attributed to the identified individual(s). Should these data become accessible to a third party, e.g. when shared with an external firm for programme evaluation, they can be used to track and persecute vulnerable groups (e.g. refugees participating in a CTP).
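A toy Python illustration of this re-identification risk, using invented records; research on real payment data (de Montjoye et al., Science, 2015) found that just four spatio-temporal points were enough to uniquely identify around 90% of cardholders.

```python
# Toy "unicity" sketch: how few (shop, day) points single out one cardholder.
# All records are invented for illustration.
transactions = {  # card_id -> set of (shop, day) points from its records
    "card_A": {("market_1", "mon"), ("pharmacy", "tue"), ("market_2", "fri")},
    "card_B": {("market_1", "mon"), ("bakery", "wed"), ("market_2", "sat")},
    "card_C": {("market_3", "mon"), ("pharmacy", "tue"), ("bakery", "wed")},
}

def matching_cards(observed_points):
    """Return every card whose history contains all the observed points."""
    return [card for card, points in transactions.items()
            if observed_points <= points]

# An observer who learns just two of a person's stops can already narrow
# the whole dataset down to a single card, and hence a single person.
print(matching_cards({("market_1", "mon")}))                       # 2 candidates
print(matching_cards({("market_1", "mon"), ("pharmacy", "tue")}))  # unique
```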
Mitigation.
When designing a smartcard-based CTP, humanitarian organisations should map out all the entities involved in the process (e.g. smartcard provider and bank) and any other partners or entities that can access their data.
Organisations should also try to negotiate a limit on the amount of data needed to set up the programme and whether the metadata involved (e.g. geo-location) can be excluded from the data-handling process. Finally, they should discuss the retention period and the ability of third parties to access these data.
Social media
Social media have become a ubiquitous tool of user engagement. Their expanding functions now include services specifically tailored to crisis situations, such as Facebook’s Disaster Maps. However, social media providers’ business model still relies on the monetisation of user data (e.g. for ad targeting). This means that social media data, even if they are gathered for humanitarian purposes, are vulnerable to the same commercial exploitation as any other data on Facebook, Twitter, etc.
This issue is further complicated by the ever-changing nature of social media providers’ privacy and data protection policies. Users often have little or no say in accepting these updates (i.e. they must either accept the update or deactivate/delete the account).
The abundance of information that can be obtained, inferred or derived from social media data has generated great interest in social media intelligence (SOCMINT). Indeed, SOCMINT has become increasingly popular with both private and public parties for surveillance and other non-humanitarian objectives.
Meanwhile, it is very difficult for users to know which data are being generated and processed by the platforms they use; which actors have access to these data (each social media platform has its own policy on transparency reporting); and what the regulatory environment is.
Risks.
Using the large amount of data and metadata generated on social media, it is possible to very accurately predict people’s behaviour, preferences, and other personal details (e.g. ethnicity, sexual orientation and political and religious affiliations).
Such profiling can also lead to erroneous inferences if the original data, or any other data used for correlation, were inaccurate or biased. Users’ data and metadata are usually saved in a “shadow profile” that can be accessed, sold, and freely shared with third parties. These profiles can be exploited for surveillance and for attempts to influence users’ behaviour (as suggested by the 2018 Cambridge Analytica controversy).
Often, even if a user deletes a given social media account, limits the number of apps that can access it, or never had an account in the first place, their shadow profile exists and is fed by information gleaned from other social media accounts or websites they use and even from their contacts’ social media accounts (e.g. their Facebook friends).
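A toy Python sketch of how such a profile can form for someone who never signed up: contact lists uploaded by existing users are enough to link a phone number to a social circle. All data below are invented.

```python
# Shadow-profile sketch: address books uploaded by two users both contain
# the same non-user, so the platform can infer that person's social circle
# without their consent. All entries are invented.
from collections import defaultdict

uploaded_contacts = {  # user -> address book shared with the platform
    "amina":   ["+100 Karim", "+200 Nadia"],
    "youssef": ["+100 Karim", "+300 Mum"],
}

shadow_profiles = defaultdict(set)
for user, address_book in uploaded_contacts.items():
    for entry in address_book:
        number = entry.split()[0]
        shadow_profiles[number].add(user)

# "+100" never created an account, yet the platform already knows
# who their acquaintances are.
print(dict(shadow_profiles))
```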
Mitigation.
To appreciate the risks involved in social media metadata, humanitarian organisations should increase the digital literacy of their staff and volunteers and of the people they serve. Emphasis should be placed on the business model employed by the various social media platforms, in order to then assess their threat model and risk appetite.
They should also carry out risk assessments to understand what individual or group vulnerabilities may be exposed if the organisation uses social media for a particular activity. Finally, the sector as a whole could jointly negotiate with major social media platforms (e.g. Facebook and Twitter) in order to secure specific safeguards across their services and in particular for humanitarian metadata.