The amended Digital Identity Framework Regulation (“eIDAS 2”) is expected to be implemented by 2026, including its new solution of a Digital Identity Wallet provided by each Member State for its residents, citizens, and businesses. Widely used public-key cryptosystems, including those in the current EUDI Wallet prototypes, provide the electronic signatures and authentication that will need to be replaced by quantum-resistant, post-quantum cryptography (PQC). In April 2024, the EU recommended general action by the Member States to prepare for quantum capability. We suggest that the European Digital Identity Wallet could be the starting point for an impactful debut of hybrid “quantum resistant” cryptography tools to align the Member States in the transition. We look at the awareness campaigns of ENISA and the national cybersecurity authorities of the USA, Spain, the UK and Germany on the transition to PQC using a hybrid approach. There seems to be some early consensus that NIST's PQC algorithms are likely to set the international standard. eIDAS 2's flexible, technologically neutral language allows the timely implementation of new secure encryption methods. The Wallet could be an exemplary model for large businesses, app developers, and SMEs that must also transition to PQC in order to secure asymmetrically encrypted, quantum-vulnerable digital assets. A very large and relatively fast uptake of the EUDI Wallet system is expected, and if it holds to its promises of functionality, user friendliness, and security across a changing technological world, the EUDI Wallet's approach could become a benchmark for the transition to post-quantum capacity.
Advancements in artificial intelligence (AI) have drastically simplified the creation of synthetic media. While concerns often focus on potential misinformation harms, ‘non-consensual intimate deepfakes’ (NCID) – a form of image-based sexual abuse – pose a current, severe, and growing threat, disproportionately impacting women and girls. This article examines the measures introduced by the recently adopted Online Safety Act 2023 (OSA) and argues that the new criminal offences and the ‘systems and processes’ approach the law adopts are insufficient to counter NCID in the UK. This is because the OSA relies on platform policies that often lack consistency regarding synthetic media and on platforms’ content removal mechanisms, which offer limited redress to victim-survivors after the harm has already occurred. The article argues that stronger prevention mechanisms are necessary and proposes that the law should require all AI-powered deepfake creation tools to block the generation of intimate synthetic content and to implement comprehensive and enforceable content moderation systems.
It came as something of a surprise when Dutch Booking's acquisition of Swedish Etraveli was blocked in the EU, as the parties operated in two separate segments of the online economy, hotel accommodation and flight booking, which would have made the merger unproblematic under normal circumstances. However, in the digital economy nothing is normal: enforcement has tightened, mostly vis-à-vis US tech giants but apparently also vis-à-vis European undertakings. Interestingly, customers' unwillingness to shop around for offers, as otherwise accepted by, e.g., the UK authority, played a role in the outcome. The decision has been challenged before the EU's General Court, providing a case to watch.
Recent years have seen a surge in the development and use of companion chatbots, conversational agents specifically designed to act as virtual friends, romantic partners, life coaches or even therapists. Yet these tools raise many concerns, especially when their target audience consists of vulnerable individuals. While the recently adopted AI Act is expected to address some of these concerns, both compliance and enforcement are bound to take time. Since the development of companion chatbots involves the processing of personal data at nearly every step, from training to fine-tuning to deployment, this paper argues that the General Data Protection Regulation (“GDPR”), and data protection by design more specifically, already provides a solid ground for regulators and courts to force controllers to mitigate these risks. In doing so, it sheds light on the broad material scope of Articles 24(1) and 25(1) GDPR, highlights the role of these provisions as proxies to Fundamental Rights Impact Assessments (“FRIAs”), and peels off the many layers of personal data processing involved in the companion chatbot supply chain. That reasoning served as the basis for a complaint lodged with the Belgian data protection authority, the full text and supporting evidence of which are provided as supplementary materials.
The General Law for the Protection of Personal Data (LGPD), enacted in Brazil in August 2018, establishes the execution of public policies by the State as one of the legal bases for the processing of personal data. A systematic review of the literature identified six critical points that represent challenges for public managers in the elaboration and implementation of policies requiring the processing of personal data. The objective of this research is to establish the levels of criticality of the factors identified by the literature review, as well as to verify the existence of other critical points on which the literature has not yet advanced. To this end, a group of 11 specialists was selected to participate in the research, which used the Delphi Method, a technique that applies a set of questionnaires sequentially and individually in order to establish a dialogue between the participants and build a collective response. The results indicate coherence between what was found in the theory and the specialists' perceptions. The participants also mentioned another 10 critical points for the processing of personal data by the government. In general, the main elements of tension identified concerned the lack of training of public officials and the sharing of personal data.
When talking about digital transformation, data sovereignty considerations and data transfers cannot be excluded from the discussion, given the considerable likelihood that digital technologies deployed along the way collect, process and transfer (personal) data in multiple jurisdictions. An increasing number of nations, especially those within the BRICS grouping (Brazil, Russia, India, China, and South Africa), are developing their data governance and digital transformation approaches based on data sovereignty considerations, deeming specific types of data key strategic and economic resources that deserve particular protection and must be leveraged for national development. From this perspective, this paper will try to shed light on how data sovereignty and data transfers interplay in the context of digital transformations. In particular, we will consider the various dimensions that compose the concept of data sovereignty and will utilise a range of examples from the BRICS grouping to back the key considerations developed with empirical evidence. We define data sovereignty as the capacity to understand how and why (personal) data are processed and by whom, to develop data processing capabilities, and to effectively regulate data processing, thus retaining self-determination and control. We have chosen the BRICS grouping for three reasons. First, research on the grouping's data policies and digital transformation is still minimal despite their leading role. Second, BRICS account for over 40% of the global population, or 3.2 billion people (who can be seen as 3.2 billion “data subjects” or data producers, depending on perspective), thus making them key players in data governance and digital transformation. Third, the BRICS members have realised that digital transformation is essential for the future of their economies and societies and have shaped specific data governance visions which must be considered by other countries, especially from the global majority, to understand why data governance is instrumental in fostering thriving digital environments.
After long representing the main country embracing a market-led approach to Open Banking, the U.S. is on the verge of switching to a regulatory-driven regime by mandating the sharing of financial data. Relying on Section 1033 of the Dodd-Frank Act, the Consumer Financial Protection Bureau (CFPB) has recently proposed a rulemaking on “Personal Financial Data Rights.” As the U.S. is therefore apparently following the EU, which has been at the forefront of the government-led Open Banking movement, the paper aims to analyze the CFPB's proposal by taking stock of the EU experience. The review of the EU regulatory framework and its UK implementation provides useful insights into the functioning and challenging trade-offs of Open Banking, thus ultimately enabling us to assess whether the CFPB's proposal would provide significant added value for innovation and competition or would rather represent an unnecessary regulatory burden.
As a specific type of algorithmic discrimination, algorithmic proxy discrimination (APD) exerts disparate impacts on legally protected groups because machine learning algorithms adopt facially neutral proxies that stand in for legally protected features through their operational logic. Based on the relationship between sensitive feature data and the outcome of interest, APD can be classified as directly or indirectly conductive. In the context of big data, the abundance and complexity of algorithmic proxy relations render APD inescapable and difficult to discern, while opaque algorithmic proxy relations impede the imputation of APD. Traditional antidiscrimination law strategies, such as blocking relevant data or disparate impact liability, are modeled on human decision-making and thus cannot effectively regulate APD. The paper proposes a regulatory framework targeting APD based on both data and algorithmic aspects.
In recent years, there has been extensive discourse on the moderation of abusive content online. Image-based Sexual Abuses (IBSAs) are a type of abusive content involving sexual images or videos. Platforms must moderate user-generated online content to tackle this issue effectively. One way to achieve this is by allowing users to report content, which can then be flagged as abusive. In such instances, platforms may enforce their terms of service and prohibit certain types of content or users. Alongside these efforts, numerous countries have been making progress in defining and regulating this subject by implementing dedicated regulations. However, national solutions alone are insufficient for addressing a constantly growing global emergency. Consequently, digital platforms create their own definitions of abusive conduct to overcome obstacles arising from conflicting national laws. In this paper, we use an ontological approach to model two types of abusive behavior. To do this, we applied the UFO-L patterns to build ontological models grounded in a top-level ontology, the Unified Foundational Ontology (UFO). The outcome is a set of ontological models that digital platforms can use to monitor and manage user compliance with the service provider’s code of conduct.