The AI Act is grounded in, and at the same time aims to protect, fundamental rights, while fulfilling the safety requirements it prescribes throughout the whole lifecycle of AI systems. Based on a risk classification, the AI Act sets out requirements that each risk class must meet in order for AI to be legitimately offered on the EU market and considered safe. However, despite their classification, some minimal-risk AI systems may still pose risks to fundamental rights and user safety, and therefore require attention. In this paper we explore the assumption that, although the AI Act can find broad ex litteris coverage, the significance of this applicability is limited.
The duty to ensure epidemiological safety, including the duty to provide vaccination against SARS-CoV-2, is embedded in the framework of national constitutional rights. The healthcare institutions providing vaccination and the medical practitioners performing it are key assets of the national healthcare system, to whom duties in the field of public health and the protection of life, as part of human rights, have been delegated. In the Republic of Latvia, a violation of epidemiological safety requirements that may pose a risk to human health is subject to a fine. In this study, the authors analyse administrative offence cases in which administrative liability was imposed on medical institutions for vaccinating patients with an age-inappropriate vaccine, explain the separation of administrative liability from criminal liability in such cases, and outline the compensation mechanisms available where inappropriate vaccination has caused harm to a person's life or health. The results of the research show that the medical institutions had not introduced appropriate security measures to prevent or avoid administrative offences in the particular cases; as a result, administrative liability was applied to these institutions for the first time. In addition, the medical institutions lacked a sufficiently secure system for the examination and registration of patients. Minor patients were left unprotected and were vaccinated with an inappropriate vaccine because the specific (inappropriate) vaccine had been requested by the minors' parents or by the minors themselves.
Telehealth enables equal, high-quality, and efficient provision of health services, but it also poses serious risks in the absence of a legal basis. Despite its increasing use and promising potential, until recently there was no specific legal framework for telehealth in Turkey. A new by-law governing the procedures and principles of telehealth services has now been introduced by the Ministry of Health. As the regulation repeatedly makes clear, the most important issue is the positioning of telehealth within the data protection context. This article therefore aims to map telehealth services within the frame of the Turkish data protection regime. In this regard, we show how the category of personal data and the purpose of the processing should be determined. Thereafter, we argue how the relevant actors should be identified and present the rights and obligations of these actors in the light of the basic principles governing data processing, security, and transfer.
In the post-pandemic world, the ability of researchers to reuse, for the purposes of scientific research, data that had been collected by others and for different purposes has rightfully become a policy priority. At the same time, new technologies with tremendous capacity in data aggregation and computation open new horizons and possibilities for scientific research. It is in this context that the European Commission published in May 2022 its proposal for a sector-specific regulation aiming at establishing the legal landscape and governance mechanisms for the secondary use of health data within the European Union. The ambitious project is centred on administrative efficiency and aspires to unleash the potential of new technologies. However, the quest for efficiency usually comes with privacy compromises and power asymmetries and the case of the European Health Data Space Regulation is no different. This paper draws attention to some of these compromises and suggests specific amendments.
The role of the Council of Europe (CoE) in tobacco control remains largely unexplored. This paper aims to fill this gap, focusing on the CoE's European Social Charter. Article 11 of the Charter protects the right to health, and adequate tobacco control measures are necessary to comply with this article. This paper examines the potential and limits of the Collective Complaints procedure, one of the two monitoring mechanisms of the Charter, as a means of evaluating the compliance of national tobacco control measures with Article 11. It demonstrates that, so far, this mechanism has never been used in this way. However, although the Collective Complaints procedure presents several drawbacks, it should not be underestimated. Indeed, it possesses certain features, such as the collective nature of the complaint and the absence of a requirement to exhaust domestic remedies, which might make it a particularly suitable tool for this purpose.
The use and disclosure of patient information is subject to multiple legal and ethical obligations. Within European human rights law, the differences relating to consent are reflected in the separate requirements of data protection law, the common law, and professional ethics. The GDPR requires explicit consent. This contrasts with the ethical and common law availability of reliance on implied consent for the use of patient information in that patient's own care and treatment. For any proposed use of patient information for healthcare purposes other than direct care, even where the GDPR may be satisfied, if the patient refuses to consent to disclosure the information should not normally be disclosed. For any proposed use or disclosure outside healthcare, the justification should normally be consent. However, consent is often not possible or appropriate, and an overriding public interest can then be relied upon to justify the use or disclosure, both legally and ethically.