Evolving cybersecurity of AI-featured digital products and services: Rise of standardisation and certification?

Michal Rampášek, Matúš Mesarčík, Jozef Andraško

Computer Law & Security Review, Volume 56, Article 106093 (published 2024-12-04)
DOI: 10.1016/j.clsr.2024.106093
https://www.sciencedirect.com/science/article/pii/S0267364924001584
Citations: 0
Abstract
The field of cybersecurity has changed dramatically since the Cybersecurity Strategy for the Digital Decade was presented by the European Commission and the High Representative of the Union for Foreign Affairs and Security Policy in December 2020. The Cybersecurity Strategy highlights the potential of AI as a new technology, but also the need to ensure the cybersecurity of AI technology itself. Indeed, since the strategy was adopted, AI has demonstrated enormous potential for growth, but also the risks and vulnerabilities that this new technology brings. The paper analyses the shift and further development in the cybersecurity of digital products and services, of AI itself as a technology, and of products and services that contain an AI component. In our opinion, the way to ensure that not only AI technology itself but also products and services are cyber-secure is to achieve a high level of standardisation of best practices, as there are many gaps in this area. The adoption of technical standards will pave the way for conformity assessment and certification not only of AI systems but also of AI-featured digital products and services. However, the current regulatory trend is to adopt comprehensive legal regulation of AI even before such technical standards are fully developed and adopted. We consider this risky. Despite the well-intentioned effort to define and regulate AI, the purpose set forth in the AI Act (AIA) may not be achieved, as requirements adopted in this way can very quickly become unnecessarily burdensome or even outdated in the face of rapid technological development. Proof of this is the recent rise of large ML models, known as foundation models, which have significantly changed the previous understanding of how AI systems are created.
It will be the technological development of AI, AI-specific standardisation, and the subsequent certification of digital products and services that will govern future activities in building Europe's cyber resilience.
Journal overview:
CLSR publishes refereed academic and practitioner papers on topics such as Web 2.0, IT security, identity management, ID cards, RFID, interference with privacy, Internet law, telecoms regulation, online broadcasting, intellectual property, software law, e-commerce, outsourcing, data protection, EU policy, freedom of information, computer security and many other topics. In addition, it provides regular updates on European Union developments and national news from more than 20 jurisdictions in Europe and the Pacific Rim. It seeks papers within the subject area that display good-quality legal analysis and new lines of legal thought or policy development that go beyond mere description of the subject area, however accurate that may be.