The politics of “big borders”

The introduction of digital technologies in the management of the so-called “migration crisis” has turned these populations into objects of data experimentation. At the same time, they are among the most surveilled populations, while their struggles and experiences remain the most invisible. Massive interoperable databases, digital registration processes, the collection of biometric data, identification through social networks, and other forms of data-driven risk assessment and management have now become part of the European border regime for migrants.

The issue of borders has become particularly prominent in the public sphere in recent years, especially in Europe, where increasing violence at the borders signals the geopolitics of the European project. At the same time, the mechanisms through which borders are established are closely connected to new technological developments. Thus, with “big data” come “big borders”: increasing datafication leads to increased regulation of borders, including a greater concentration of personal data. This data includes behaviors related to physical movement across regions as well as daily activities, such as financial transactions and social media use. Data thereby significantly expands borders, both in the management of external borders and in the “dispersion” of borders within societies.

The term “iBorder” (Pötzsch, 2015) is a useful analytical tool for understanding datafied borders. It describes technologically enhanced border control, which includes migration databases such as EURODAC and SIS II, and information systems for trusted travelers such as Registered Traveller and e-Borders. These databases serve to identify, categorize, and monitor migration within Europe, and to promote efforts to develop a “functional” Common European Asylum System (CEAS), capable of policing external borders and making decisions on asylum applications among member states. Such a “common” framework has become an important feature of EU policy following the rise of anti-migration rhetoric and increasing political pressure to control migration after the increase in arrivals in 2015. The usefulness of a term such as “iBorder,” which depicts the entire socio-technical ensemble, is that it encompasses the impact of both human and non-human methods on the processes of sorting, categorizing, and filtering individuals on the move. Within a datafied remote border control system, people are followed by their own “data trail,” consisting of data points from a wide range of sources: social networks, meal preferences, financial transactions, and previous travel, as well as more traditional data such as place and date of birth.

Particularly relevant to developments in migration management toward and within Europe in recent years is the collection of biometric data. Stored in databases, these data form part of an individual identity that can be shared across all European countries. Although EURODAC, the earliest biometric database in Europe, was established in 2003 as a central database for processing asylum applications among member states, the growing importance of biometric collection can be seen in recent changes to EURODAC that lower the minimum age for collecting biometric information from 14 years to as young as six, and extend retention from 18 months to five years. In addition, the collection of fingerprints in Greece has increased significantly, from 8% of recorded arrivals in September 2015 to 78% in January 2016.

At the same time, the use of biometrics is a key element in the provision of humanitarian aid to migrants, from the use of IrisGuard in Jordan to the development of PRIMES in many regions of Africa. Although biometrics are not yet used for humanitarian aid in Europe, centralized interoperable databases for aid management have been deployed to address the increased number of arrivals. For example, humanitarian aid through “cash cards,” under the Greek Cash Alliance (GCA) program provided in Greece by the UNHCR, is “harmonized” across the mainland and the islands through the use of the ProGresV4 database, which tracks the data of “beneficiaries” through monthly meetings and document verification, then cross-checked against ALKIONI, the database of the Greek Asylum Service (GAS).

The ProGresV4 database contains not only basic information, such as name and age, but also data on a person’s vulnerabilities, family status, and geographical location, which is shared across the UNHCR ESTIA program that also provides housing for asylum seekers in Greece.

An important aspect of the “hotspot card” (the cash card issued at the island hotspots) is that geographical restrictions can be imposed on asylum seekers. When a person enters Greece via the Aegean route, they are restricted to the island for the duration of their asylum application, unless there are explicit reasons to the contrary. If someone with such a “hotspot card” leaves the island voluntarily, they are no longer entitled to assistance. What is striking is that geographical restrictions within the same country are reinforced by a humanitarian organization. The restrictions of the “hotspot card” thus become an extension of the hotspot policy, which contradicts Article 26 of the 1951 Refugee Convention, under which a refugee has “the right to choose his place of residence and to move freely within the territory.”

These processes, in which the collection of personal and biometric data operates as a form of geographical restriction, constitute the “internalization” of borders: the increasing focus on the human body as the ultimate form of identification means that we carry borders with us wherever we go, without being able to escape them.

Data systems also lead to the “externalization” of borders, through remote border control. Developments in digital surveillance technologies, such as cameras, drones, and GIS-based risk analysis methods have allowed changes both in the way border controls are conducted and in the way people experience attempts to cross borders. EUROSUR uses unmanned aerial vehicles (UAVs or drones), among other surveillance tools, to better detect individuals attempting to enter European territory, creating a “pre-border” that allows control beyond traditional territorial borders. Initiatives under the EU’s “Horizon 2020” program include projects such as RANGER and SafeShore, for further monitoring and digitization of EU borders. The combination of these technologies, which include unmanned aerial systems, satellites, biometric data, data mining techniques, profiling, and population registries, is part of a “persistent surveillance” that operates on the basis of continuous and ongoing intervention.

The emerging field of “digital migration studies” has begun to advance discussions of these multifaceted developments, which often involve paradoxical uses of technology. Technologies used to control digital borders create new forms of data-based discrimination and criminalization, while simultaneously building a “new digital infrastructure for global mobility.” The mobile phone, for example, is increasingly used as a means of identification, risk prediction, and surveillance, alongside biometric databases. Germany and Austria have already enacted legislation allowing the temporary seizure of mobile phones for metadata extraction in cases where a passport or ID is missing. Legislation such as this clearly shows how asylum policy seeks to incorporate mobile phones as means of surveillance, identification, and categorization. This is particularly important because the digital infrastructure, like borders themselves, is multivalent and multi-layered: it constitutes part of migration routes while also opening further pathways for surveillance and exploitation. We are therefore faced with a complex and comprehensive border regime comprising a range of technologies, locations, and practices that cannot be reduced to a single factor or a single directing gaze. It involves both the “internalization” and “externalization” of borders, both private and public bodies, and both personal devices and interoperable databases. Nor is it ever a “completed” project; rather, it relies on the “technological work” of many sites, continuously creating spaces of refusal and mediation of border practices, “on the move.”

[…]

The interoperability of data in this “ubiquitous” presence of borders can be seen clearly in the forms of recognition and categorization created through mass data collection. Data systems produce “measurable types,” such as “risk,” which are “analyzable structures of classificatory significance, based exclusively on what is available for measurement.” We see this, for example, in the way fingerprints and EURODAC function as fundamental elements of border and asylum control in Europe. The mandatory use of fingerprinting, usually associated in Europe with criminality, is an integral part of the Dublin Regulation and EURODAC, and demonstrates the growing tendency of digital governance toward what has been described as the “general criminalization and social sorting of displaced people,” what Aas (2011) calls the creation of “crimmigrant bodies.” This process rests on the construction of a “second identity” from data, which is imposed on individuals and used in databases and bureaucracies through practices of suspicion and control. In EURODAC, when a person’s fingerprints are taken, they are placed into one of three categories: Category 1 defines an individual as an applicant for international protection; Category 2, as someone who crossed or attempted to cross the border illegally; and Category 3, as a potential illegal migrant who has failed to obtain asylum status, is undocumented, and has been found within a Member State. The latter two categories immediately impose an illegal status on the individual, although the first can also produce an illegal body if the individual moves to a second state by illegal means and applies for asylum there. Regardless of justifications, the productive character of such mechanisms creates what Aas describes not only as a “sedentary global underclass” but also as an “illegal global underclass,” which becomes both the subject and the target of intensive surveillance.

The practice of fingerprinting is therefore a distinctive marker of criminalization in Europe, and signals a significant form of “function creep” in relation to the EU’s migration databases, whereby data are used for purposes beyond their original intent. The association of migrants with criminality is reinforced through the interoperability of SIS II, EURODAC, and Europol’s information system, and relies on exploiting an unintended “added value” of EURODAC and the mandatory nature of fingerprinting. Moreover, as legal pathways to Europe are closed through the securing of external borders, with the EU funding the efforts of both the Turkish and Libyan coast guards alongside the expansion of EUROSUR, people are pushed toward illegal means of travel. This renders migration illegal “by definition”: the method of entry characterizes an individual as illegal, regardless of any legitimate reasons they may have for entering. The multiplication of surveillance and identification methods thus means that border security regimes actively produce the “illegal migrant,” through what Andersson (2014, 2016) calls an “illegality industry.”

While the level of monitoring and data collection may not differ according to one’s category, whether “asylum seeker,” “economic migrant,” or something else, the purposes and consequences attached to one’s data may differ significantly. It is important to note that once this data is attached to an identity, it becomes difficult to delete or challenge. Through identification processes, an individual can be tracked throughout their migration and asylum process. If someone is identified as “illegal,” whether through a decision within EURODAC or by being detected attempting to reach Europe by sea, this identity becomes destructive. This highlights the paradoxical way in which data analysis in border regimes produces identities that are fixed and permanently attached to an individual, something that takes on particular significance for targeted groups, such as asylum seekers and refugees, whose lives and experiences are continuously shaped by interactions with the bureaucratic institutions that impose such identities upon them.

These “data-identities,” in turn, can create a population of “forbidden data,” where profiles resembling “people like them” lead to the exclusion of entire categories or groups of individuals. Conversely, in other cases it is the lack of the data needed to create an approved identification that leads to the exclusion of certain individuals. For example, in contrast to databases whose collection of intrusive information can harm an individual’s ability to access fundamental rights, the opposite occurs in the case of EUROSUR. Here, a refusal to collect personal data has been institutionalized; instead, any attempt to enter the EU by illegal means (e.g., without passing through fingerprint registration) is rejected, regardless of whether the individual may have a right to asylum upon reaching European territory. In other words, the category of illegality defined through surveillance technologies overrides the experiences of those subjected to this surveillance. This reveals the politics emerging in the constructed “distance” between a person and their data double, between the human and the digital. This “dispersive and political combination of migration and crime” creates a “specific dynamic of social exclusion.”

The creation of “crimmigrant bodies” is therefore an integral part of a broader process of “social sorting.” This refers to the advancement of database-driven surveillance systems capable of searching and categorizing individuals, resulting in differential treatment and discrimination according to how a person’s “virtual self” has been identified.

The increased surveillance and development of digital borders significantly enhance the control of “undesirable populations.” In such cases, borders become more impermeable than ever, creating “a world of long-term data monitoring where borders are everywhere.” Moreover, this “productive” and “persistent” surveillance reflects the priority given to new ways of calculating risk and to creating preventive frameworks and measures as the defining logic of policy. Instead of seeking to understand underlying causes (which, in the case of border crossings, involve the violence, war, or environmental destruction from which a person is fleeing), attention turns to managing the consequences: controlling and limiting migration to Europe.

The criminalization of migration in public debate, the focus on border security, and the oversimplified and often incorrect categorization of individuals in these processes do not contribute to resolving the migration “crisis.” They do, however, retain a “political utility” through the elimination of responsibility and accountability. Elements of this political utility are reflected in the broader discourse of “technological solutions” adopted by European border policies, which shifts accountability from governments and human actors toward digital databases and algorithms. Consequently, data-driven borders, which disperse accountability while claiming to enhance security, can be considered a useful political tool, even as they are presented as a natural evolution of border security in times of “crisis.”

The issue here is therefore not simply how an individual’s data is collected, stored, or processed, but how data-driven decision-making forms part of a specific economic and political agenda that systematically seeks to stigmatize, marginalize, and exclude “undesirable” populations. Consequently, we challenge the idea that data and data-based technologies are neutral objects, and we view “data justice” as a framework for understanding the trajectory of datafication in terms of the interests that drive such processes and the social and economic organization that enables them.

However, we observe much less engagement with these developments from groups outside the digital rights and technology spaces—groups that we could consider to be dealing with social justice issues, such as inequality, discrimination, and oppression, or those originating from historically marginalized communities. In other words, there is a “disconnect” between those who raise concerns about technology on one hand, and those who engage with social justice on the other.

The politics of big borders: Data (in)justice and the governance of refugees
First Monday, April 2019
translation W.