Watching the Border: The Effectiveness of the EU AI Act in Regulating Facial Recognition Technology
Publishing Date: 14 July 2025
Author: Maria Fritsch

Brief Summary: This article examines whether the EU Artificial Intelligence Act (AI Act) effectively protects fundamental rights, particularly the right to privacy, when facial recognition technologies (FRTs) are used in border and migration management. The deployment of FRT systems is increasing, as seen in the Entry/Exit System (EES), the Shared Biometric Matching Service (sBMS), and Automated Border Control (ABC) gates. These developments raise important legal questions. The article begins by defining types of FRT and identifying the fundamental rights at risk under the EU Charter. It then examines how the AI Act classifies FRT as high-risk under Article 6(2) and Annex III. Legal obligations such as risk management (Article 9) and Fundamental Rights Impact Assessments (Article 27) are also discussed. However, these safeguards are largely procedural, and broad exemptions, delayed implementation, and decentralised enforcement weaken them. Some systems are exempt from public registration (Article 49(4)); others do not need to comply until 2030 (Article 111). These limitations undermine the protection of rights such as privacy, non-discrimination, and access to remedies. The article concludes that while the AI Act sets a clear legal framework, it does not provide sufficient protection in practice. The law focuses on procedural compliance and does not address the growing fundamental rights concerns that stem from the use of FRT at borders.
1. Introduction
Facial recognition technology (FRT) is increasingly used across EU border management systems. It is deployed, or planned for deployment, in large-scale data infrastructures. The Schengen Information System (SIS) and the Visa Information System (VIS) support policing and visa processes. VIS uses FRT to verify identities by matching facial images against stored records. SIS, while not currently operationalising FRT, may incorporate facial image matching in the future. Other examples are the Entry/Exit System (EES), Automated Border Control (ABC) gates and the European Travel Information and Authorisation System (ETIAS).[1] FRT has also been tested and deployed in pilot projects by the European Border and Coast Guard Agency, Frontex.[2]
These systems are numerous, as the list above illustrates; they are presented together to convey the scale of FRT deployment across EU borders. Nor are these systems abstract or futuristic: they are either already active or being rolled out in 2025. For example, the VIS has long supported EU-wide policing and visa management.[3] New systems such as the EES, which combine automated border checks with biometric verification, raise significant privacy concerns.[4] They collect sensitive biometric data and process it in real time at the point of entry. This marks a shift towards growing reliance on FRT at EU borders.[5]
This expansion also reflects a global trend. In countries like China, facial recognition technologies are deeply integrated into public life. These uses range from access control to predictive policing.[6] While the EU context differs, the pace of FRT development is rapid. The challenge is not only to catch up legally but also to stay ahead. It is to ensure that EU frameworks are prepared to address the speed, scale, and opacity of such systems.[7] As Rinaldi and Teo observe, "The deployment of AI within the border and migration context raises similar issues, namely the invasion of privacy through surveillance, and concerns of bias and disproportionate impacts on vulnerable and marginalised groups. However, (…) the potential harm to human rights extends beyond these well-traversed concerns."[8]
To assess whether the EU can meet this challenge, this article uses the 2024 Artificial Intelligence Act (AI Act). This is the EU's most recent Regulation governing artificial intelligence. It classifies biometric identification technologies as high-risk applications. FRT falls under this designation. It is listed in Article 6(2) and Annex III of the Regulation.[9]
1.1 Aim
Therefore, this article investigates the following question: To what extent does the EU AI Act effectively protect fundamental rights, particularly privacy, when facial recognition technology is used in border and migration management?
The analysis will examine whether the AI Act includes effective and enforceable safeguards. In particular, it examines whether the Act can effectively regulate FRT. This includes examining how far the Regulation translates principles into enforceable rights. The analysis focuses on obligations, exceptions, and oversight mechanisms introduced by the AI Act in border-specific contexts.
1.2 Relevance and Context
Simmler and Canova have raised concerns about the inefficiency of the AI Act. They argue that: "The provisions of the AI Act are not sufficiently clear and precise to serve as a legal framework for FRT use by law enforcement authorities of Member States."[10] Moreover, scholars have noted that FRT in EU border control disproportionately targets non-EU nationals. These individuals are subject to biometric profiling and are often excluded from effective redress.[11]
This issue is especially relevant today. The year 2025 marks the operational rollout of major EU biometric systems like the EES. It also marks the finalisation of the AI Act.[12] The combination of increased technological deployment and new legislation creates a critical window.[13] It enables the evaluation of how effectively the law addresses emerging risks. Border FRT is no longer a concern for the future. It is being deployed now, with real consequences for privacy, equality, and oversight.
This topic is both legally and politically urgent. The rise of biometric surveillance reflects a shift toward automated border control. It changes how borders are enforced. It also changes how people are treated and how rights are protected. These developments raise serious and immediate challenges to fundamental rights.[14] The article examines the AI Act in light of the Charter of Fundamental Rights of the European Union and the General Data Protection Regulation (GDPR). The analysis evaluates whether the Regulation is normatively and structurally effective. It assesses whether the Act can meet the EU's legal obligations.[15]
The article begins in Chapter 2 by establishing the legal and conceptual framework of facial recognition technology. It pays particular attention to how FRT is integrated into border systems and which rights are at stake under the EU Charter.[16] Chapter 3 outlines the key legal provisions of the AI Act. It focuses on how FRT systems are classified as high-risk.[17] It also clarifies which obligations apply and outlines any applicable loopholes or exemptions.[18] Chapter 4 offers a critical evaluation of these legal protections. The analysis is structured by rights: privacy and data protection (Articles 7 and 8 CFR), non-discrimination (Article 21), access to remedies (Article 47), and transparency and oversight.[19] Chapter 5 concludes by summarising the findings.
1.3 Limitations
Some further caveats are necessary. Not all AI-driven systems analysed in the article are currently in deployment. Some are only at the testing stage. This includes controversial technologies such as emotion recognition systems. At the time of writing, these have not yet been deployed.[20] These technologies will be briefly mentioned to illustrate the broader trend toward experimental AI in migration governance.
2. Deployment of Facial Recognition Technologies
2.1 What is Facial Recognition Technology in Border Control?
Facial recognition technology (FRT) refers to a range of systems that use biometric data to analyse and interpret human faces. These technologies differ in function and design, and not all pose the same level of risk.[21] Understanding these distinctions is essential before evaluating how the EU AI Act addresses them. This section outlines four major types of FRT identified in recent academic literature.
The first category is facial verification (1:1). These systems compare a live image of an individual to a stored biometric template to confirm identity, for example, when a traveller scans their passport at an airport e-gate. Facial verification is generally used with the subject's knowledge and consent.[22]
The second category is facial identification (1:N). These systems match an individual's facial image against a database of stored profiles to identify the person. This method is commonly used in public surveillance or law enforcement. Identification can be performed in real time (on live footage) or post hoc (on recorded material).[23]
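The difference between 1:1 verification and 1:N identification can be made concrete with a minimal, hypothetical sketch. The code below is purely illustrative: the threshold, the toy embeddings, and the helper names are invented, and real systems use high-dimensional embeddings produced by a neural network rather than three-element vectors.

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two face-embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

THRESHOLD = 0.8  # illustrative decision threshold, not from any deployed system

def verify(live_embedding, stored_embedding):
    """1:1 verification: does the live image match ONE stored template?"""
    return cosine_similarity(live_embedding, stored_embedding) >= THRESHOLD

def identify(live_embedding, database):
    """1:N identification: search ALL stored templates for the best match."""
    best_id, best_score = None, -1.0
    for person_id, template in database.items():
        score = cosine_similarity(live_embedding, template)
        if score > best_score:
            best_id, best_score = person_id, score
    return (best_id, best_score) if best_score >= THRESHOLD else (None, best_score)

# Toy database of stored templates and one live capture
db = {"A": [1.0, 0.0, 0.0], "B": [0.0, 1.0, 0.0]}
live = [0.95, 0.05, 0.0]

print(verify(live, db["A"]))  # 1:1 check against a single known template
print(identify(live, db))     # 1:N search across the whole database
```

The sketch shows why 1:N identification is considered more intrusive: verification answers a yes/no question about one claimed identity, whereas identification searches every stored template, so every person in the database is implicated in each query.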
Biometric categorisation technologies classify individuals based on inferred characteristics such as gender, ethnicity, or age. These systems use facial features to assign individuals to socially or biologically defined categories. The process relies on machine learning models trained on labelled datasets. These techniques show high accuracy in controlled conditions, but their performance varies significantly across demographic groups.[24] This raises concerns about bias, particularly where categories intersect.
Emotion recognition technologies detect emotional states such as anger, fear, or happiness based on facial expressions. These systems use machine learning to map visual or biometric cues to predefined emotional categories. The reliability of this technology is widely challenged. Emotion recognition technologies may rely on questionable assumptions about the universality and detectability of emotions. The interpretation of facial expressions varies across cultures and individuals.[25] This undermines the validity of inferences based on emotions.
2.2 EU Border Systems Using Facial Recognition
The European Union has progressively integrated biometric technologies, particularly facial recognition, into its border and migration management systems. These technologies aim to improve security, simplify border crossings, and improve the management of migration. This section outlines the key systems employing FRT, detailing their functionalities, deployment timelines, and technical applications.
2.2.1 Entry/Exit System (EES)
The Entry/Exit System (EES) is an upcoming EU initiative. It is designed to register the entry and exit data of third-country nationals crossing the external borders of the Schengen Area. Scheduled for launch in October 2025, the EES will replace the traditional passport-stamping method. It introduces a digital system that records travellers' names, travel document details, biometric data (fingerprints and facial images), and the date and place of entry and exit. The system compares a live facial image captured at border control with the biometric data stored in the traveller's electronic passport. This process is designed to speed up border crossings and accurately verify travellers' identities. This system uses facial verification (1:1).[26]
2.2.2 Shared Biometric Matching Service (sBMS)
Launched on May 19, 2025, the Shared Biometric Matching Service (sBMS) is a central biometric matching system. It was developed by eu-LISA. The sBMS enables the interoperability of various EU information systems, including the EES. It allows the storage and comparison of biometric data across these platforms.
The sBMS stores around 400 million biometric templates. These include facial images. This improves the EU's ability to identify individuals across different databases. This system uses facial verification (1:1) and facial identification (1:N).[27]
2.2.3 Automated Border Control (ABC) Gates
Automated Border Control (ABC) gates, commonly called eGates, are self-service barriers. They use biometric verification to facilitate border crossings. These systems are operational at many EU airports. They perform facial verification by comparing the live facial image of the traveller with the biometric data stored in their electronic passport.[28]
ABC gates aim to speed up border control and, therefore, reduce waiting times. While primarily used by EU citizens, some member states allow certain third-country nationals to use them. The integration of ABC gates with the EES is expected to enhance their functionality and security. This system uses facial verification (1:1).[29]
Together, these systems illustrate the expanding use of facial recognition at EU borders and the varying complexity of its applications.
2.3 Fundamental Rights at Stake in EU Border and Migration Contexts
The deployment of facial recognition technologies (FRT) in EU border and migration systems poses significant challenges to several fundamental rights protected by the Charter of Fundamental Rights of the European Union (CFR). These impacts arise from how FRT is designed, deployed, and integrated into interoperable systems, including the Entry/Exit System (EES), the Shared Biometric Matching Service (sBMS), and Automated Border Control (ABC) gates.
2.3.1 Right to Privacy (Article 7 CFR)
FRT systems such as the EES and ABC gates rely on facial verification (1:1). They match live images of travellers to stored passport biometrics.[30] Though less invasive than facial identification (1:N), this still constitutes a rights interference. Furthermore, the interoperable nature of border databases heightens the risk to privacy. Information stored in one system can be accessed across others (e.g., via sBMS), expanding the scope and permanence of potential interference.[31]
The systems operate in real time. Facial data is captured, processed, and used for border crossing decisions within seconds. These are real-time applications of FRT where decisions are made immediately at the point of entry. Travellers have limited awareness and no ability to opt-out. These conditions undermine meaningful consent.[32] The systemic nature of these processes, happening at almost all border crossings, raises concerns about normalising biometric data collection.
2.3.2 Right to Non-Discrimination (Article 21 CFR)
Facial recognition technologies are vulnerable to demographic bias. Studies have shown significantly higher error rates for women, children, older individuals, and individuals with darker skin tones.[33] The use of FRT at borders amplifies these risks, as decisions made in error can result in detainment or denial of entry. Thus, these systems disproportionately affect non-EU citizens, reproducing discriminatory outcomes.[34]
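The disparity the cited studies describe can be illustrated with a small, hypothetical calculation of per-group false non-match rates, i.e. the share of genuine travellers a system wrongly rejects in each demographic group. The data below is invented purely for illustration; the function name and record format are assumptions, not drawn from any real evaluation.

```python
from collections import defaultdict

def false_non_match_rates(attempts):
    """Per-group false non-match rate: the share of *genuine* match
    attempts that the system wrongly rejected. `attempts` is a list of
    dicts with keys: group (str), genuine (bool), accepted (bool)."""
    genuine = defaultdict(int)
    rejected = defaultdict(int)
    for a in attempts:
        if a["genuine"]:  # only genuine travellers can be falsely rejected
            genuine[a["group"]] += 1
            if not a["accepted"]:
                rejected[a["group"]] += 1
    return {g: rejected[g] / genuine[g] for g in genuine}

# Invented illustration: 100 genuine attempts per group, with the system
# falsely rejecting 2 travellers in group_1 but 10 in group_2.
attempts = (
    [{"group": "group_1", "genuine": True, "accepted": i >= 2} for i in range(100)]
    + [{"group": "group_2", "genuine": True, "accepted": i >= 10} for i in range(100)]
)
rates = false_non_match_rates(attempts)
print(rates)  # group_2 is falsely rejected five times as often as group_1
```

Even such a simple breakdown shows how an aggregate accuracy figure can mask a large gap between groups, which is why disaggregated performance testing matters for the non-discrimination analysis that follows.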
2.3.3 Right to Data Protection (Article 8 CFR)
FRT systems process sensitive biometric data. Under the GDPR and Law Enforcement Directive, such data requires strict protection. However, border systems often process this information without the data subject's explicit consent, as required by Article 9 of the GDPR for processing special categories of data.[35]
Furthermore, data collected by the EES can be processed through the sBMS and is stored and shared across EU databases. This includes interoperability with systems such as VIS, though facial recognition currently plays a minimal role there.[36]
The risks to data protection arise not only from the scale of data collected but also from the fragmented governance over these systems. The presence of multiple data controllers and processors across Member States and agencies dilutes accountability. This structural opacity prevents individuals from understanding or contesting how their data is used.[37]
2.3.4 Right to an Effective Remedy (Article 47 CFR)
Individuals subjected to errors in FRT-based border systems often lack access to meaningful redress. Travellers may be denied entry based on false negatives without understanding the cause. The automated nature of border checks and the lack of real-time explanation contribute to this opacity.[38]
Existing complaint mechanisms are slow, fragmented, and opaque. National data protection authorities (DPAs) often lack jurisdiction over transnational systems, and cross-border cooperation mechanisms are underdeveloped.[39] The burden of proof often lies on the individual, despite the opacity and complexity of automated systems. These factors create structural barriers to accountability.
3. The Regulation of Facial Recognition Technology at Borders under the AI Act
The AI Act (Regulation (EU) 2024/1689) establishes a comprehensive, horizontal legal framework for regulating artificial intelligence systems in the Union. This includes specific provisions relevant to the use of facial recognition technology (FRT) in the context of migration, asylum, and border control.[40] This chapter outlines the classification, regulation, and obligations of these systems under the AI Act.
3.1 Classification of Border FRT Systems as High-Risk AI
Facial recognition technologies in border management are classified as high-risk AI systems under Article 6(2) of the AI Act and Annex III. Article 6(2) states that any AI system in Annex III is high-risk if used for a purpose in that Annex.[41] This applies if the system operates in conformity with Union law regulating the activity in question. Annex III, point 7(d), identifies "AI systems intended to be used by or on behalf of competent public authorities in the area of migration, asylum and border control management, for the purpose of detecting, recognising or identifying natural persons" as high-risk systems.[42] An exclusion applies to systems used solely for verifying the authenticity of travel documents; this covers optical checks against passports but not facial biometric comparisons.[43]
Annex III, point 1(a), categorises remote biometric identification systems used in public spaces for real-time or retrospective identification as high-risk. This classification does not apply where systems are used exclusively for biometric verification. FRT systems used in the sBMS or in crowd surveillance at entry points fit this definition if they are designed for one-to-many (1:N) identification rather than one-to-one (1:1) verification.[44] Systems falling under points 1(b) and 1(c) of the same Annex are also classified as high-risk if they perform biometric categorisation based on inferred characteristics (such as race, ethnicity, gender or sexual orientation) or emotion recognition.[45]
3.2 Legal Obligations for High-Risk Border AI Systems
High-risk AI systems deployed at the EU's borders are subject to multiple regulatory obligations under the AI Act. Article 9 mandates a risk management system for high-risk AI systems. This system must assess known and potential risks that the AI system may pose to the health, safety, and fundamental rights of individuals. It must function as a "continuous iterative process" throughout the lifecycle of the system. This includes design, deployment, and post-market monitoring (Article 9(2)).[46] Migrants are highly vulnerable to biases in biometric categorisation and identification processes, making this obligation particularly important.[47]
Article 27 sets a key obligation for public authorities or public service providers deploying high-risk AI systems. Under Article 27(1), a Fundamental Rights Impact Assessment (FRIA) must be conducted prior to deployment.[48] This assessment must outline intended use, context, duration, affected persons, impacts on fundamental rights, and mitigation or oversight mechanisms. FRT systems currently used at border entry points are subject to this obligation. This is particularly true when they conduct identification or categorisation functions under Annex III, points 1 and 7.[49]
While Article 27 aims to ensure that deployers account for potential harms, Article 14(5) weakens this protection.[50] It does so by allowing high-risk systems to bypass human oversight where the requirement for confirmation by two natural persons is deemed disproportionate.[51]
Article 49(4) exempts high-risk AI systems used in migration, asylum, and border control from the requirement to register them in the public EU database of high-risk systems. This exemption is justified on grounds of public security.[52]
Article 113 stipulates that the AI Act will take effect in full as of August 2, 2026. However, Article 111 creates a transitional exemption for "legacy" systems: AI systems that were already in use before August 2, 2026.[53] For border systems such as Eurodac or SIS, this provision means that they are exempt from compliance until December 31, 2030,[54] unless they are substantially modified.[55] This deferment risks allowing the continued use of opaque systems for up to six years without the protections of the AI Act.
3.3 Prohibited Uses and Exceptions
The AI Act forbids specific uses of AI outright under Article 5. Article 5(1)(d) prohibits the use of "real-time" remote biometric identification systems in publicly accessible spaces for law enforcement purposes. This prohibition is subject to exceptions, covering the search for crime victims, the prevention of terrorist threats, and the prosecution of serious offences.[56] Migration control is not explicitly listed among these exceptions, but border authorities might cite public security concerns to justify similar actions. This may occur especially at sea borders or transit zones, where the distinction between law enforcement and border control is not clearly defined.[57]
Article 6(3) allows providers to self-assess their AI systems as falling outside the high-risk classification. This applies if the system is only intended to perform a narrow procedural task or if Union law does not regulate the use case.[58] This creates a loophole that allows FRT systems used in initial screening to avoid classification. This includes emotion recognition or behaviour analysis tools. As a result, such systems may not be subject to the corresponding obligations of the AI Act.[59]
Recital 60 repeats the importance of rights protection in this field. It notes that "special consideration should be given to the situation of persons in vulnerable contexts, such as those subject to migration and asylum procedures."[60] It warns against the use of AI in ways that would breach the principle of non-refoulement. It also warns against actions that undermine international refugee law protections. This is especially important for border FRT systems used with predictive analytics or biometric categorisation. Such future systems may be used to infer intent or risk without due process.[61]
3.4 Oversight, Monitoring, and Enforcement
Enforcement of the AI Act is decentralised among Member States and coordinated at the EU level. Article 77(2) requires each Member State to designate one or more competent authorities for implementing and enforcing the Regulation. These authorities are responsible for ensuring compliance with the AI Act.[62]
At the Union level, the European Artificial Intelligence Office (AI Office) plays a coordination role. It monitors the implementation of regulations, develops guidance and best practices, and supervises high-risk AI systems that may pose systemic risks.[63] This is particularly relevant in cross-border contexts such as migration management. This includes oversight of large-scale border systems and biometric infrastructures used in migration management. The relevant legal basis is found in Recital 63 and Article 77(1) and (2).[64] Effective enforcement requires the timely designation of national authorities and political will for independent oversight of security systems.[65]
4. Evaluating Fundamental Rights Safeguards under the AI Act
This chapter assesses the extent to which the AI Act protects fundamental rights in the context of FRT technologies at EU borders. Each section focuses on a specific right affected by these systems.
4.1 Privacy and Data Protection (Articles 7 and 8 CFR)
The AI Act recognises the privacy risks of facial recognition technology (FRT) in migration contexts but fails to provide adequate protection. This stems from its dependence on procedural safeguards and from deferrals for existing systems.
Some FRT systems, such as the Entry/Exit System (EES), process facial images for identity verification and identification, frequently without meaningful consent. These systems fall under Article 7 of the Charter of Fundamental Rights (CFR), which guarantees the right to respect for private life. They also fall under Article 8 CFR, which protects personal data. However, compliance with these rights depends on the interpretation and enforcement of the AI Act's obligations.[66]
The Act introduces a risk-based framework. It requires deployers of high-risk systems to conduct risk management (Article 9) and Fundamental Rights Impact Assessments (FRIA) under Article 27. These obligations are designed to assess possible infringements and guide mitigation. Neither obligation ensures a substantive limitation on surveillance. They standardise risk assessments without mandating that outcomes prioritise rights over operational efficiency. These instruments insufficiently address privacy risks, merely requiring their documentation.[67]
Further, the effectiveness of FRIAs is undermined by Article 27(4). This provision permits deployers to rely instead on existing Data Protection Impact Assessments (DPIAs) under the GDPR. This substitution allows a lower threshold for assessment: DPIAs are not designed to address issues like large-scale biometric surveillance, which is especially problematic in coercive settings such as border zones.[68]
Article 49(4) introduces a second gap. It exempts many high-risk border systems from the public registration requirement. Lacking public traceability prevents oversight by affected individuals or oversight bodies regarding compliance with AI Act obligations. This undermines the enforceability of Articles 7 and 8 CFR.[69]
These weaknesses are compounded by the transitional exemption in Article 111. AI systems already in use before August 2026 are not required to comply with the Act until 2030 unless they are substantially modified. Legacy biometric systems such as Eurodac, VIS, and sBMS fall into this category. Where facial recognition is or will be used in these systems, it remains outside the AI Act's protections until 2030. This means that highly intrusive systems will continue operating for years without the Act's safeguards.[70]
The AI Act also does not address the structural issue of function creep, which was introduced in Chapter 2.2. Biometric data, once collected, can be reused for various purposes through interoperable platforms like sBMS. Although Recital 60 acknowledges vulnerable groups, the Act lacks enforcement mechanisms to prevent secondary uses.[71]
In summary, the AI Act's framework is legally coherent but does not ensure practical protection. It treats privacy and data protection as procedural steps rather than strong, enforceable rights. It allows opaque systems to remain in place for years. This causes Articles 7 and 8 CFR to be vulnerable to systemic circumvention.[72]
4.2 Non-Discrimination (Article 21 CFR)
Article 21 of the Charter of Fundamental Rights forbids any discrimination based on "sex, race, colour, ethnic or social origin, genetic features, language, religion or belief, political or any other opinion, membership of a national minority, property, birth, disability, age or sexual orientation."[73] Facial recognition technology (FRT) at borders raises considerable concerns due to algorithmic bias and flaws in system design. Additionally, it can have disproportionate impacts on specific population groups.[74]
The classification of systems such as EES under Annex III of the AI Act reflects their potential impact on fundamental rights.[75] However, classification alone does not guarantee that discrimination is prevented. Instead, the Act mandates that deployers establish risk management systems (Article 9). They are also required to perform Fundamental Rights Impact Assessments (Article 27), both of which must assess potential discriminatory impacts.[76] These safeguards are process-based, not outcome-based. They require assessment but not actual mitigation of discriminatory bias in practice.[77]
FRT systems are especially vulnerable to discrimination in 1:N identification contexts. These are situations where individuals are compared against large-scale biometric databases for recognition or categorisation. As shown in studies cited in Chapter 2.1, error rates disproportionately impact persons of colour, women, and minors.[78] The deployment of such systems in migration creates a high risk of discrimination.[79]
The AI Act does not ban biometric categorisation by protected characteristics unless the system is specifically designed for that purpose. This exposes a regulatory gap. Systems producing indirectly discriminatory outcomes, such as unequal false positive rates, are not automatically subject to prohibition or increased scrutiny. Annex III point 1(b) classifies these systems as high-risk, yet it does not impose specific mitigation obligations outside the general framework of Articles 9 and 27.[80]
Furthermore, Article 6(3) allows providers to avoid high-risk classification by self-assessing their systems.[81] This enables systems used for early-stage screening or categorisation to bypass obligations. This can occur even where they result in discriminatory outcomes.[82] Such exclusions risk undermining the non-discrimination guarantees under Article 21 CFR. They also show the limitations of the AI Act's scope.
Recital 60 stresses the need to consider the situation of vulnerable persons, particularly in border and asylum contexts.[83] However, without mandatory audits or performance testing by independent authorities, discriminatory outcomes may remain undetected and uncorrected.
4.3 Access to Remedies (Article 47 CFR)
Article 47 of the Charter of Fundamental Rights ensures the right to an effective remedy before a tribunal for any violation of rights and freedoms.[84] However, the right to access remedies is challenged by structural opacity, technical complexity, and institutional fragmentation. These barriers prevent individuals from understanding, contesting, or rectifying how automated systems affect their legal status, entry and detention.[85]
As established in Chapter 2, facial recognition technology (FRT) is utilised across various border systems. These include the Entry/Exit System (EES), Eurodac, the Visa Information System (VIS), and the Shared Biometric Matching Service (sBMS). The AI Act recognises the high-risk nature of such systems (Annex III).[86] However, it does not ensure that affected individuals will have clear access to redress mechanisms.
One key issue is the lack of notification and explainability. Individuals subjected to biometric recognition, especially in 1:N matching or categorisation, are often unaware that a result has been generated or that it may impact their rights. Article 13 of the AI Act mandates informing users about their interactions with AI systems.[87] However, specific systems are excluded, especially those that operate invisibly or across interoperable platforms.[88] Without notification, the right to contest becomes ineffective.
Even where individuals are informed, effective contestation requires meaningful access to the reasoning behind decisions. The AI Act does not require mandatory explanations for automated outcomes in border contexts.[89] As shown in previous chapters, many FRT uses in migration contexts work under exceptions that reduce transparency, for instance, the exemption from registration in the public database under Article 49(4).[90]
Furthermore, the overlapping roles of national border authorities and third-country operators create jurisdictional complexities. This makes it difficult for individuals to identify the correct body against whom to direct their claims. Responsibility for system errors or discriminatory outputs may be unclear, particularly with data sharing across Member States.[91]
Article 27 of the AI Act requires a Fundamental Rights Impact Assessment (FRIA) to be conducted prior to deployment. However, this tool focuses on prospective evaluation and internal compliance rather than individual redress. The FRIA is neither public nor subject to automatic judicial review, which limits its usefulness for affected persons. Similarly, market surveillance mechanisms under Article 77 are not designed for individual complaints or direct remedies for rightsholders.[92]
The GDPR provides some procedural protections, such as the rights of access, rectification, and erasure.[93] However, these rights are limited in scope and often subject to public security exceptions. Migrants and asylum seekers frequently lack the resources or legal support needed to assert them effectively.[94]
Therefore, the current use of facial recognition technology at borders undermines the enforcement of Article 47 CFR. The lack of individual notification, explainability, and transparency obligations creates an accountability gap. Once again, the AI Act recognises fundamental rights concerns but does not provide mechanisms strong enough to guarantee redress for affected persons.[95]
4.4 Transparency and Oversight
Transparency and independent oversight are essential protections when deploying high-risk artificial intelligence (AI) systems in migration management and at borders.
Enforcement mechanisms under the AI Act are decentralised. Article 77 requires each Member State to designate at least one national competent authority to ensure implementation and compliance.[96] These authorities are responsible for conducting market surveillance and applying penalties in case of non-compliance. While this structure allows adaptation to national contexts, it risks uneven application of safeguards across the Union. Border management often involves cross-border collaboration and centralised databases (e.g. sBMS). However, no equivalent EU-wide enforcement body has binding authority.[97]
The decentralised enforcement model of the AI Act risks fragmenting fundamental rights protection. Member States have some discretion to interpret risk classifications (Article 6(3)) and appoint oversight bodies (Article 77).[98] Consequently, the same FRT system may face strict regulation in one country and minimal scrutiny in another. This threatens the consistency and predictability of legal protections, especially in a context that often involves multiple jurisdictions.[99]
The AI Office, created by the Act as an EU-level coordination body, promotes best practices, issues guidance, and oversees general-purpose AI models that pose systemic risks. However, it has limited investigatory powers and no direct enforcement role.[100] In highly securitised contexts such as border control, where political sensitivities often outweigh rights-based concerns, weak oversight structures are unlikely to be effective.[101]
Finally, the long transitional period under Article 111, which allows existing AI systems to remain in use until 2030 without full compliance, severely undermines any immediate transparency gains.[102] Legacy FRT systems already deployed can operate outside the AI Act's protections for six more years. This creates a transparency gap at a critical moment, when scrutiny of FRT is essential due to its rapid expansion.[103]
In conclusion, the AI Act's provisions on transparency and oversight are undermined by broad exemptions, decentralised enforcement, and significant deferrals for existing systems.
5. Summary of Key Findings
This article assesses the extent to which the EU Artificial Intelligence Act (AI Act) protects fundamental rights, particularly privacy, when facial recognition technologies (FRTs) are used in migration and border control. The analysis centres on the Act's legal classifications, obligations, and limitations.
As shown in Chapters 2 and 3, facial recognition technology is already operational or soon to be deployed in systems such as the Entry/Exit System (EES) and Automated Border Control (ABC) gates. These systems rely on facial verification (1:1) and identification (1:N). The AI Act classifies them as high-risk under Article 6(2) and Annex III, points 1 and 7.[104] However, classification alone does not ensure effective protection. Chapter 4 showed that the key safeguards, namely risk management systems (Article 9) and Fundamental Rights Impact Assessments (Article 27), are largely procedural tools. They require evaluation but not mitigation.[105]
For privacy and data protection under Articles 7 and 8 of the Charter of Fundamental Rights of the European Union (CFR),[106] the AI Act's provisions are weakened by substitutions and delays. Article 27(4) allows FRIAs to be replaced by Data Protection Impact Assessments under the General Data Protection Regulation (GDPR), which may be less suitable for large FRT-based systems.[107] Article 49(4) exempts high-risk border systems from the public database, and Article 111 delays full compliance for legacy systems until 2030.[108] This leaves Articles 7 and 8 CFR vulnerable to systemic circumvention.[109]
Non-discrimination under Article 21 CFR is also affected.[110] The Act does not prevent indirectly discriminatory outcomes. Systems with higher error rates for specific demographic groups are not required to adopt corrective measures unless they are explicitly designed to categorise by protected traits. The existing safeguards are process-based rather than outcome-based. Without stronger independent oversight or enforcement, the right to non-discrimination remains inadequately protected.
Access to remedies under Article 47 CFR is limited by structural opacity.[111] Article 13 mandates the provision of information to users but exempts systems deployed in specific border contexts.[112] As shown in Chapter 4.3, individuals are often unaware of how FRT systems affect them and lack meaningful explanations or channels to challenge outcomes. Fundamental Rights Impact Assessments (FRIAs) are not easily accessible, and enforcement remains fragmented between national and European Union (EU) authorities.[113]
Finally, transparency and oversight are undermined by decentralised enforcement and broad exceptions. Article 77 of the AI Act requires Member States to appoint national supervisory authorities, while the European AI Office acts only in a coordinating role.[114]
The EU AI Act does not effectively protect fundamental rights, especially privacy, when applied to FRT in border and migration contexts. While the Regulation identifies the risks and classifies such systems as high-risk, its safeguards are procedural, delayed, and limited by vague obligations. Key systems are exempted from transparency requirements. Legacy systems remain in use for years without compliance. Individual redress mechanisms are structurally limited or inaccessible. The Act thus recognises the importance of rights but fails to ensure their protection.
Bibliography
Charter of Fundamental Rights of the European Union. Official Journal of the European Union, C 202, 7 June 2016.
Directive (EU) 2016/680 of the European Parliament and of the Council of 27 April 2016 on the Protection of Natural Persons with Regard to the Processing of Personal Data by Competent Authorities for the Purposes of the Prevention, Investigation, Detection or Prosecution of Criminal Offences, and on the Free Movement of Such Data (Law Enforcement Directive). Official Journal of the European Union, L 119/89, 4 May 2016.
Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the Protection of Natural Persons with Regard to the Processing of Personal Data and on the Free Movement of Such Data (General Data Protection Regulation). Official Journal of the European Union, L 119/1, 4 May 2016.
Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 Laying Down Harmonised Rules on Artificial Intelligence and Amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act). Official Journal of the European Union, L 2024/1689, 12 July 2024.
Avello Martínez, María. “EU Borders and Potential Conflicts between New Technologies and Human Rights.” Peace & Security Paix et Sécurité Internationales 11 (2023): 1204–1210. https://doi.org/10.25267/Paix_secur_int.2023.i11.1204.
Dakhil, Nasreen, and Adnan M. Abdulazeez. “Face Recognition Based on Deep Learning: A Comprehensive Review.” The Indonesian Journal of Computer Science 13, no. 3 (15 June 2024). https://doi.org/10.33022/ijcs.v13i3.4037.
Directorate-General for Migration and Home Affairs (European Commission), and ECORYS. Feasibility Study on a Forecasting and Early Warning Tool for Migration Based on Artificial Intelligence Technology: Executive Summary. Luxembourg: Publications Office of the European Union, 2021. https://data.europa.eu/doi/10.2837/222662.
Donida Labati, Ruggero, et al. “Biometric Recognition in Automated Border Control: A Survey.” ACM Computing Surveys 49, no. 2 (2016): Article 24, 1–39. https://doi.org/10.1145/2933241.
Dumbrava, Costica. Artificial Intelligence at EU Borders: Overview of Applications and Key Issues. Brussels: European Parliamentary Research Service (EPRS), 2021. https://op.europa.eu/en/publication-detail/-/publication/a4c1940f-ef4a-11eb-a71c-01aa75ed71a1.
El Fadel, Nazar. “Facial Recognition Algorithms: A Systematic Literature Review.” Journal of Imaging 11, no. 2 (2025): 58. https://doi.org/10.3390/jimaging11020058.
European Commission. “Commission Announces Launch of the Shared Biometric Matching Service.” Migration and Home Affairs, May 19, 2025. https://home-affairs.ec.europa.eu/news/commission-announces-launch-shared-biometric-matching-service-2025-05-19_en.
European Commission. “Entry/Exit System (EES).” Migration and Home Affairs. Last modified May 2025. https://home-affairs.ec.europa.eu/policies/schengen/smart-borders/entry-exit-system_en.
European Commission. “Visa Information System (VIS).” Migration and Home Affairs. Last modified March 2025. https://home-affairs.ec.europa.eu/policies/schengen-borders-and-visa/visa-information-system_en.
European Union Agency for Fundamental Rights (FRA). Facial Recognition Technology: Fundamental Rights Considerations in the Context of Law Enforcement. Luxembourg: Publications Office of the European Union, 2019. https://fra.europa.eu/en/publication/2019/facial-recognition-technology-fundamental-rights-considerations-context-law.
Galbally, Julian, Pietro Ferrara, Raul Haraksim, Anastasios Psyllos, and Laurent Beslay. Study on Face Identification Technology for Its Implementation in the Schengen Information System. EUR 29808 EN. Luxembourg: Publications Office of the European Union, 2019. https://doi.org/10.2760/661464.
Gandhi, Shrutika. “Frontex as a Hub for Surveillance and Data Sharing: Challenges for Data Protection and Privacy Rights.” Computer Law & Security Review 53 (2024): 105963. https://doi.org/10.1016/j.clsr.2024.105963.
Gültekin-Várkonyi, Gizem. “Navigating Data Governance Risks: Facial Recognition in Law Enforcement under EU Legislation.” Internet Policy Review 13, no. 3 (2024). https://doi.org/10.14763/2024.3.1798.
Hidayat, Fadhil, Ahmad Almurib, Diah Susanti, and Ahmad Sofyan. “Face Recognition for Automatic Border Control: A Systematic Literature Review.” IEEE Access PP(99) (January 2024): 1–1. https://doi.org/10.1109/ACCESS.2024.3373264.
Katirai, Amelia. “Ethical Considerations in Emotion Recognition Technologies: A Review of the Literature.” AI and Ethics 4 (2024): 927–948. https://doi.org/10.1007/s43681-023-00307-3.
Li, Zhizhao, Yuqing Guo, Masaru Yarime, and Xun Wu. “Policy Designs for Adaptive Governance of Disruptive Technologies: The Case of Facial Recognition Technology (FRT) in China.” Policy Design and Practice 6, no. 1 (2023): 27–40. https://doi.org/10.1080/25741292.2022.2162248.
Lynch, Nessa. “Facial Recognition Technology in Policing and Security: Case Studies in Regulation.” Laws 13, no. 3 (2024): Article 35. https://doi.org/10.3390/laws13030035.
Metcalfe, Philippa, Lina Dencik, Eleftherios Chelioudakis, and Boudewijn van Eerd. Smart Borders, Private Interests and AI Policy in Europe. Bristol: Data Justice Lab, 2023.
Nail, Thomas. Theory of the Border. Oxford: Oxford University Press, 2016.
Palmiotto, Francesca. “Fundamental Rights Impact Assessments under the AI Act and GDPR.” German Law Journal 25, no. 2 (2024): 210–236.
Rinaldi, Alberto, and Sue Anne Teo. “The Use of Artificial Intelligence Technologies in Border and Migration Control and the Subtle Erosion of Human Rights.” International & Comparative Law Quarterly, April 4, 2025, 1–29. https://doi.org/10.1017/S0020589325000090.
Simmler, Monika, and Giulia Canova. “Facial Recognition Technology in Law Enforcement: Regulating Data Analysis of Another Kind.” Computer Law & Security Review 56 (2025): 106092. https://doi.org/10.1016/j.clsr.2024.106092.
Stewart, Ludivine Sarah. “The Regulation of AI-Based Migration Technologies under the EU AI Act: (Still) Operating in the Shadows?” European Law Journal 30, no. 1–2 (2024): 122–130. https://doi.org/10.1111/eulj.12516.
[1] Alberto Rinaldi and Sue Anne Teo, “The Use of Artificial Intelligence Technologies in Border and Migration Control and the Subtle Erosion of Human Rights,” International & Comparative Law Quarterly, April 4, 2025, 1–29, https://doi.org/10.1017/S0020589325000090.
[2] Shrutika Gandhi, “Frontex as a Hub for Surveillance and Data Sharing: Challenges for Data Protection and Privacy Rights,” Computer Law & Security Review 53 (2024): 105963, https://doi.org/10.1016/j.clsr.2024.105963.
[3] European Commission, “Visa Information System (VIS),” Migration and Home Affairs, last modified March 2025, https://home-affairs.ec.europa.eu/policies/schengen-borders-and-visa/visa-information-system_en.
[4] European Commission, “Entry/Exit System (EES),” Migration and Home Affairs, last modified May 2025, https://home-affairs.ec.europa.eu/policies/schengen/smart-borders/entry-exit-system_en.
[5] Rinaldi and Teo, “Artificial Intelligence Technologies in Border and Migration Control,” 12.
[6] Zhizhao Li, Yuqing Guo, Masaru Yarime, and Xun Wu, “Policy Designs for Adaptive Governance of Disruptive Technologies: The Case of Facial Recognition Technology (FRT) in China,” Policy Design and Practice 6, no. 1 (2023): 27–40, https://doi.org/10.1080/25741292.2022.2162248.
[7] Gizem Gültekin-Várkonyi, “Navigating Data Governance Risks: Facial Recognition in Law Enforcement under EU Legislation,” Internet Policy Review 13, no. 3 (2024), https://doi.org/10.14763/2024.3.1798.
[8] Rinaldi and Teo, “Artificial Intelligence Technologies in Border and Migration Control,” 6.
[9] Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No. 300/2008, (EU) No. 167/2013, (EU) No. 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act), Official Journal of the European Union, L 2024/1689, 12 July 2024.
[10] Monika Simmler and Giulia Canova, “Facial Recognition Technology in Law Enforcement: Regulating Data Analysis of Another Kind,” Computer Law & Security Review 56 (2025): 106092, https://doi.org/10.1016/j.clsr.2024.106092.
[11] Ludivine Sarah Stewart, “The Regulation of AI-Based Migration Technologies under the EU AI Act: (Still) Operating in the Shadows?” European Law Journal 30, no. 1–2 (2024): 127–30, https://doi.org/10.1111/eulj.12516.
[12] European Commission, “Entry/Exit System (EES),” Migration and Home Affairs.
[13] Gültekin-Várkonyi, “Navigating Data Governance Risks,” 10.
[14] Charter of Fundamental Rights, arts. 7, 8, 21, and 47; Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data (General Data Protection Regulation), Official Journal of the European Union, L 119/1, 4 May 2016, arts. 5–23.
[15] Regulation (EU) 2024/1689 (AI Act).
[16] Charter of Fundamental Rights.
[17] Regulation (EU) 2024/1689 (AI Act), art. 6(2).
[18] Ibid., annex III.
[19] Charter of Fundamental Rights, arts. 7, 8, 21, 47.
[20] Gültekin-Várkonyi, “Navigating Data Governance Risks,” 14–15.
[21] European Union Agency for Fundamental Rights (FRA), Facial Recognition Technology: Fundamental Rights Considerations in the Context of Law Enforcement (Luxembourg: Publications Office of the European Union, 2019), 7, https://fra.europa.eu/en/publication/2019/facial-recognition-technology-fundamental-rights-considerations-context-law.
[22] Fadhil Hidayat et al., “Face Recognition for Automatic Border Control: A Systematic Literature Review,” IEEE Access PP(99) (January 2024): 1–1, https://doi.org/10.1109/ACCESS.2024.3373264.
[23] Ruggero Donida Labati et al., “Biometric Recognition in Automated Border Control: A Survey,” ACM Computing Surveys 49, no. 2 (2016): Article 24, 1–39, https://doi.org/10.1145/2933241.
[24] Nessa Lynch, “Facial Recognition Technology in Policing and Security Case Studies in Regulation,” Laws 13, no. 3 (2024): Article 35, https://doi.org/10.3390/laws13030035.
[25] Amelia Katirai, “Ethical Considerations in Emotion Recognition Technologies: A Review of the Literature,” AI and Ethics 4 (2024): 927–948, https://doi.org/10.1007/s43681-023-00307-3.
[26] European Commission, “EES.”
[27] European Commission, “Commission Announces Launch of the Shared Biometric Matching Service,” Migration and Home Affairs, May 19, 2025, https://home-affairs.ec.europa.eu/news/commission-announces-launch-shared-biometric-matching-service-2025-05-19_en.
[28] European Commission, “Smart Borders: Automated Border Control,” Migration and Home Affairs, https://home-affairs.ec.europa.eu/policies/schengen/smart-borders_en.
[29] European Commission, “Smart Borders.”
[30] European Commission, “EES.”
[31] Charter of Fundamental Rights, art. 7; FRA, Facial Recognition Technology, 24.
[32] FRA, Facial Recognition Technology, 26.
[33] Simmler and Canova, “FRT in Law Enforcement,” 5.
[34] Charter of Fundamental Rights, art. 21.
[35] GDPR, arts. 6 and 9; Directive (EU) 2016/680 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data by competent authorities for the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, and on the free movement of such data (Law Enforcement Directive), Official Journal of the European Union, L 119/89, 4 May 2016, arts. 8–10.
[36] María Avello Martínez, “EU Borders and Potential Conflicts between New Technologies and Human Rights,” Paix & Sécurité Internationales 11 (2023): 1204–1210, https://doi.org/10.25267/Paix_secur_int.2023.i11.1204
[37] FRA, Facial Recognition Technology, 30; Charter of Fundamental Rights, art. 8.
[38] Stewart, “AI-Based Migration Technologies,” 133.
[39] Charter of Fundamental Rights, art. 47; Martínez, “EU Borders and Human Rights,” 1208; GDPR, art. 51.
[40] AI Act.
[41] AI Act, art. 6(2).
[42] AI Act, Annex III, point 7(d).
[43] AI Act, Annex III, point 7.
[44] AI Act, Annex III, point 1(a).
[45] AI Act, Annex III, points 1(b)–(c).
[46] AI Act, art. 9(2).
[47] Stewart, “Migration Technologies,” 133.
[48] AI Act, art. 27(1).
[49] AI Act, Annex III, points 1 and 7.
[50] Francesca Palmiotto, “Fundamental Rights Impact Assessments under the AI Act and GDPR,” German Law Journal 25, no. 2 (2024): 221.
[51] AI Act, art. 14(5).
[52] AI Act, art. 49(4).
[53] AI Act, arts. 111, 113.
[54] Costica Dumbrava, “Artificial Intelligence at EU Borders: Overview of Applications and Key Issues,” EPRS, 2021, https://op.europa.eu/en/publication-detail/-/publication/a4c1940f-ef4a-11eb-a71c-01aa75ed71a1.
[55] AI Act, art. 113; Rinaldi and Teo, “Artificial Intelligence Technologies in Border and Migration Control,” 28.
[56] AI Act, art. 5(1)(d).
[57] Simmler and Canova, “FRT in Law Enforcement,” 6.
[58] AI Act, art. 6(3).
[59] Palmiotto, “FRIA under the AI Act,” 218.
[60] AI Act, Recital 60.
[61] Dumbrava, AI at EU Borders, 18.
[62] AI Act, art. 77(2).
[63] AI Act, Recital 63.
[64] AI Act, arts. 77(1)–(2); Recital 63.
[65] Stewart, “Migration Technologies,” 129.
[66] FRA, Facial Recognition Technology, 26; Charter of Fundamental Rights, arts. 7–8.
[67] AI Act, arts. 9, 27; Palmiotto, “FRIA under the AI Act,” 218.
[68] AI Act, art. 27(4); GDPR, art. 35; Rinaldi and Teo, “Artificial Intelligence Technologies in Border and Migration Control,” 13.
[69] Charter of Fundamental Rights, arts. 7–8; AI Act, art. 49(4); Dumbrava, AI at EU Borders.
[70] AI Act, art. 111; Simmler and Canova, “FRT in Law Enforcement,” 106092.
[71] AI Act, Recital 60; Gültekin-Várkonyi, “Data Governance Risks,” 2024.
[72] Charter of Fundamental Rights, arts. 7–8; AI Act, arts. 27, 49, 111.
[73] Charter of Fundamental Rights, art. 21.
[74] FRA, Facial Recognition Technology, 29.
[75] AI Act, art. 6(2), Annex III.
[76] AI Act, arts. 9, 27.
[77] El Fadel, “FRT Algorithms,” 58.
[78] Simmler and Canova, “FRT in Law Enforcement,” 10.
[79] Gültekin-Várkonyi, “Navigating Data Governance Risks,” 16.
[80] AI Act, Annex III, point 1(b); arts. 9, 27.
[81] AI Act, art. 6(3).
[82] Palmiotto, “FRIA under the AI Act,” 218.
[83] AI Act, Recital 60.
[84] Charter of Fundamental Rights, art. 47.
[85] Palmiotto, “FRIA under the AI Act,” 231.
[86] AI Act, Annex III.
[87] AI Act, art. 13.
[88] Rinaldi and Teo, “Artificial Intelligence Technologies in Border and Migration Control,” 16.
[89] Rinaldi and Teo, “Artificial Intelligence Technologies in Border and Migration Control,” 22.
[90] AI Act, art. 49(4).
[91] Rinaldi and Teo, “Artificial Intelligence Technologies in Border and Migration Control,” 18–19.
[92] AI Act, arts. 27, 77; Palmiotto, “FRIA under the AI Act,” 236.
[93] GDPR, arts. 15–17.
[94] FRA, Facial Recognition Technology, 33.
[95] Charter of Fundamental Rights, art. 47; AI Act, Recital 60.
[96] AI Act, art. 77.
[97] Metcalfe et al., Smart Borders, 57.
[98] AI Act, arts. 6(3), 77.
[99] Palmiotto, “FRIA under the AI Act,” 224.
[100] AI Act, art. 77(2), Recital 63.
[101] Rinaldi and Teo, “Artificial Intelligence Technologies in Border and Migration Control,” 18.
[102] AI Act, arts. 111, 113.
[103] Palmiotto, “FRIA under the AI Act,” 236.
[104] AI Act, art. 6(2), Annex III points 1, 7.
[105] AI Act, arts. 9, 27; Palmiotto, “FRIA under the AI Act,” 219.
[106] Charter of Fundamental Rights, arts. 7, 8.
[107] AI Act, art. 27(4); GDPR, art. 35.
[108] AI Act, arts. 49(4), 111.
[109] Charter of Fundamental Rights, arts. 7, 8.
[110] Charter of Fundamental Rights, art. 21.
[111] Charter of Fundamental Rights, art. 47.
[112] AI Act, art. 13; Rinaldi and Teo, “Artificial Intelligence Technologies in Border and Migration Control,” 16.
[113] Rinaldi and Teo, “Artificial Intelligence Technologies in Border and Migration Control,” 16.
[114] AI Act, art. 77; Recital 63.