Artificial Intelligence systems are increasingly used, from decision support systems to autonomous vehicles. Their widespread deployment across many fields raises concerns about their potential impact on human safety and autonomy, especially regarding fair decision-making: decisions made by such systems must be fair and unbiased. Our research concentrates primarily on aspects of non-discrimination, encompassing both group and individual fairness. Although many different methods for bias mitigation exist, few of them meet existing legal requirements, and unclear legal frameworks further worsen this problem. To address this issue, this paper investigates current state-of-the-art methods for bias mitigation and contrasts them with the legal requirements, with the scope limited to the European Union and a particular focus on the AI Act. The paper first examines state-of-the-art approaches to ensuring AI fairness and then outlines various fairness measures. It discusses the challenges of defining fairness and the need for a comprehensive legal methodology to address fairness in AI systems. The paper contributes to the ongoing discussion on fairness in AI and highlights the importance of meeting legal requirements to ensure fairness and non-discrimination for all data subjects.
This article focuses on the potential use of artificial intelligence (AI) in the resettlement of refugees and its implications for their fundamental rights. Resettlement is the process of selecting refugees and transferring them to a third country that agrees to admit them permanently. While AI has the potential to improve efficiency and effectiveness in resettlement processes, such as matching refugees with suitable resettlement countries and places, it also poses various risks and raises concerns about human rights violations. The article examines the current and potential uses of AI throughout the resettlement process, including the selection and referral of refugees for resettlement and the facilitation and improvement of integration. It then analyses the potential impact of AI on fundamental rights and concludes by arguing for a cautious approach to using AI in resettlement processes. In particular, it emphasises that using AI to assess refugees' inclusion in or exclusion from resettlement, as well as to match them with different resettlement states, could conflict with fundamental rights, raising concerns about transparency, explainability, accountability, effective remedies, data protection, and the risk of bias and discrimination.
This article examines the evolutionary trajectory of perceptual diversification concerning Yinsi, privacy, and personal information in China. It elucidates how efforts to integrate privacy within the constitutional framework, a complex undertaking, have resulted in a heterogeneous system. This system forges an economically rational, technologically trustworthy, and socially experimental infrastructure that simultaneously embodies materialist and post-neoliberal characteristics. The study traces the transformation from collectivist and charismatic conceptualization to judicial unevenness arising from the unwritten nature of de-constitutionalized privacy. This evolution ultimately leads to digital incentive compatibility, reflecting a pressure-driven post-neoliberal economic rationale. Personal information with Chinese characteristics represents a normative construct aimed at harmonizing economic liberties and enhancing market efficiency while exemplifying sovereign statecraft of data production relations. The article underscores China's paternalist yet inertial adaptability, manifested in its pursuit of legal and institutional reforms concerning social identity, shaping socio-economic and performance legitimacy structures. Furthermore, the study introduces a tripartite cognitive and infrastructural schema of identifiability, incorporating legal, technological, and social dimensions to highlight the interchangeable roles that the state, private sector, and individuals have played in institutionalizing identities. The inherent complexities of such systems might expose them to market inefficiencies and digital harms, particularly when hierarchical interventions deviate from the original economic intention of data production and circulation. Consequently, the article advocates for elevating privacy constitutionalism to a more explicit and codified status in both legislative and judicial domains. This elevation would confer formal authority to address imbalances and unchecked competing interests in public and private stakeholderism, ultimately striving for a polycentric and proportionate (re-)equilibrium between the normative efficiency of identity infrastructures and the preservation of moral rights in digital China.
This article sets out the results of an empirical study exploring the intersection of religious perspectives and the regulation of post-mortem data donation (PMDD), particularly focusing on issues of consent and individual control. Through semi-structured interviews with practicing members of the Christian clergy of the United Kingdom, the study investigated the ethical and practical implications of integrating religious viewpoints into secular debates on data protection and privacy, using PMDD as a use case. The findings revealed a consensus among participants that religious perspectives, including Christian perspectives, can enhance the ethical robustness of PMDD regulatory frameworks by promoting values such as dignity, autonomy, and respect for individual preferences. However, the study also identified a possible gap in the systematic consideration of these views within existing regulatory practices pertaining to data protection and privacy. Pursuant to these findings, the article argues for the adoption of an "opt-out" consent mechanism, which balances public health benefits with individual rights, as a pragmatic approach to PMDD regulation. Additionally, the article highlights the potential for religious insights to enrich policy dialogues, ensuring that legal rules relating to data protection and information governance resonate with a broader array of societal values.
Here, I pose a hypothetical scenario in which the Metaverse arrives in fifteen years. First, I describe this network of networks. Then, I provide some notes on the Metaverse's impact on the political foundations of constitutional states, i.e., the rule of law, democracy, and human rights. Next, I propose some measures to adapt those political foundations to the Metaverse ecosystem. These measures will serve as a basis for regulating the Metaverse in advance and, in turn, as a starting point for academic debate, informing the analysis of concepts and institutions that today require reform because they are unsuited to regulating intersubjective conduct in the digital era; e.g., civil liability for damage caused by robotics and autonomous systems, the 'unlimited' power of the largest internet platforms to impose Terms of Service, the weakness of real democracy, data and privacy protection in the use of extended reality tools, and the protection of non-personal data.