The European Union (EU) enthusiastically funds the development of experimental artificial intelligence (AI) technologies for border control applications. However, the EU has been criticised for funding some of these projects, such as the Horizon 2020 project iBorderCtrl. The researchers working on iBorderCtrl created an Automated Deception Detection System (ADDS), which academics and activists have criticised for resting on faulty scientific assumptions and for potentially operating in a discriminatory manner. The European Commission (EC) responded that iBorderCtrl was just a research project that did not envision deployment. This thesis examines whether experimentation with ADDS is problematic from a human rights perspective by investigating whether the justifications made for funding iBorderCtrl correspond with fundamental rights principles and by exploring legal and ethical concerns raised by research involving ADDS. Information is gathered through a desk-based literature review and semi-structured interviews with ten experts, including the Data Protection Officer (DPO) of iBorderCtrl, two Frontex respondents, and a researcher from Statewatch. Drawing on securitisation theory and science and technology studies (STS), the thesis suggests that iBorderCtrl was funded because migrants are perceived as security threats, a perception that permits the development of extraordinary technologies to manage their movement. Moreover, fears of crisis, crime, and terrorism create a sense of urgency that pushes the threshold of acceptable technologies even further. These two factors are joined by the EU's desire for innovation and for the implementation of AI technologies, which erodes the walls between experimentation and implementation, implying that iBorderCtrl is not 'just' research in the sense of being only research. Furthermore, experimentation with ADDS is found to be problematic because it operates in a weakly regulated legal space in which fundamental rights are perceived as barriers.
ADDS is considered a high-risk AI system but is not prohibited under the proposed AI Act. This weak legal regulation is arguably deliberate, intended to facilitate technology development. Moreover, a problematic ethical aspect of iBorderCtrl is the differentiation made between migrants' rights and the rights of EU citizens, as migrants are presented as justifiable targets for high-risk AI systems. Consequently, persons in vulnerable situations are targeted by experimentation with undignified technologies. iBorderCtrl can therefore not be considered 'just' research in the sense of being lawful and ethical.