Rupinder Kaur, Raman Kumar, Gagandeep Kaur, M. Chethan, Sehijpal Singh, Anurag Sinha, Pulkit Kumar, Riyam Adnan Hasan, Zainab Ahmed Abass
Ensuring the authenticity of digital images is essential in forensic investigations, media, and scientific research, where these images serve as critical evidence. This necessity has driven the development of digital image forgery detection (DIF). This study reviews DIF articles published from 2005 to 2024, performing a comprehensive evaluation and bibliometric analysis of DIF methodologies to uncover trends, technological advancements, and thematic progressions. The study used Scopus data to illustrate key DIF methodologies, citation trends, and thematic changes. It offers fresh insights by showcasing the rising prevalence of deep learning-based DIF techniques post-2018 and the emergence of hybrid models that integrate traditional and AI-driven methods to bolster detection robustness and precision. The review highlights that China and the United States are leading the field, with significant contributions from institutions such as the South China University of Technology and the State University of New York at Albany. The bibliometric analysis reveals three key trends: a marked increase in deep learning-based DIF methods since 2018, indicating a shift away from traditional feature-based techniques; strengthening collaboration between industry and academia, especially in China and the US, fueling significant advancements; and a heightened focus on real-world forgeries, such as deepfakes, emphasizing the need for more adaptable detection tools. It also highlights challenges such as the lack of large and varied benchmark datasets. This comprehensive study further suggests directions for enhancing DIF accuracy and applicability across different domains. This article is categorized under: Algorithmic Development > Multimedia; Technologies > Computational Intelligence
"The Evolution of Digital Image Forgery Detection: A Comprehensive Bibliometric Review, Trends, and Future Prospects". WIREs Data Mining and Knowledge Discovery, doi:10.1002/widm.70066. Published 2026-02-07.
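The traditional feature-based DIF techniques that the review contrasts with deep learning can be illustrated by a classic pre-deep-learning baseline: block-matching copy-move detection, which flags regions of an image that are pixel-identical to another region. The sketch below is a generic illustration, not a method from the reviewed article; the function name, block size, and quantization step are our own choices.

```python
import numpy as np

def copy_move_candidates(img, block=8, quant=32):
    """Block-matching copy-move detection: hash coarsely quantized
    blocks and report pairs of distant, identical blocks."""
    h, w = img.shape
    seen, matches = {}, []
    for y in range(0, h - block + 1, 2):          # stride 2 keeps the demo fast
        for x in range(0, w - block + 1, 2):
            key = (img[y:y + block, x:x + block] // quant).tobytes()
            if key in seen:
                py, px = seen[key]
                # ignore trivially overlapping neighbouring blocks
                if max(abs(py - y), abs(px - x)) > block:
                    matches.append(((py, px), (y, x)))
            else:
                seen[key] = (y, x)
    return matches
```

On a tampered image where one region was pasted elsewhere, the duplicated blocks collide in the hash table and surface as match pairs; an untampered noise image produces none.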
This study examines personalized algorithmic pricing and consumer protection across three major jurisdictions (the United States, the European Union, and India), analyzing how artificial intelligence-driven pricing systems challenge traditional regulatory frameworks and threaten consumer autonomy. The research adopts a comparative methodology combining doctrinal legal analysis with empirical examination of enforcement patterns, scrutinizing recent regulatory developments including the EU's Digital Services Act, the US Department of Justice's RealPage litigation, and India's Consumer Protection Act amendments. The central argument is that transparency-only approaches are fundamentally inadequate for addressing algorithmic filter bubbles and market concentration. Evidence from India's fast-commerce sector reveals sophisticated discrimination patterns, including device-based pricing differentials and usage-pattern exploitation, while "hub-and-spoke conspiracies" enable algorithmic collusion without explicit coordination between competitors. A key finding is that existing legal frameworks, designed for pre-digital markets, cannot effectively address technologically sophisticated forms of consumer harm and market manipulation. The study identifies critical gaps in jurisdictional approaches: India's reactive consumer protection model, the EU's proactive transparency requirements, and the US's antitrust-centric enforcement. The research proposes moving beyond disclosure paradigms toward "information enrichment" mandates requiring platforms to actively diversify algorithmic recommendations, coupled with user-controlled choice architectures and structural market reforms. These interventions, aligned with fundamental rights principles that require states to serve as the ultimate guarantors of diversity, offer pathways for regulatory frameworks that balance technological innovation with consumer welfare and market competition.
This article is categorized under: Commercial, Legal, and Ethical Issues > Legal Issues; Commercial, Legal, and Ethical Issues > Ethical Considerations; Commercial, Legal, and Ethical Issues > Security and Privacy
"AI Price Tags and Privacy: When Your Data Sets Your Price". Varda Mone, Abhishek Thommandru, Fazilov Farkhod Maratovich, Khudaykulov Feruzbek Khurramovich, Abzalova Khurshida Mirziyatovna. WIREs Data Mining and Knowledge Discovery, doi:10.1002/widm.70070. Published 2026-02-05.
Cross-modal information retrieval (CMIR) has emerged as a pivotal research area, enabling efficient retrieval across diverse data spanning multiple modalities. With the growing production of multimodal data, advanced deep learning frameworks have demonstrated significant promise in aligning and mapping heterogeneous data representations into a unified latent space. This review explores the evolution of advanced deep learning techniques in CMIR, highlighting key advancements, methodologies, and challenges, with a particular focus on intelligent frameworks that leverage architectures such as convolutional neural networks (CNNs), recurrent neural networks (RNNs), Transformers, and generative adversarial networks (GANs) to enhance semantic alignment and retrieval accuracy. It also discusses challenges such as modality imbalance, cross-modal representation, and interoperability across modalities, and provides insight into emerging trends such as multimodal generative AI, autoencoders, and large-scale pretrained models by synthesizing recent advancements and identifying research gaps. This review aims to provide a foundation for future exploration of intelligent CMIR systems; the findings underscore the transformative potential of advanced deep learning frameworks in addressing the growing demand for accurate and scalable CMIR solutions.
"Advanced Deep Learning Frameworks for Cross-Modal Information Retrieval: A Comprehensive Review of Techniques, Challenges, and Future Directions". Aamir Khan, Nisha Chandran S., D. R. Gangodkar. WIREs Data Mining and Knowledge Discovery, doi:10.1002/widm.70055. Published 2026-02-04.
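The core idea of the unified latent space described above can be sketched in a few lines: each modality gets its own projection into a shared space, and retrieval is nearest-neighbour search there. In this toy sketch the projections are constructed analytically from known generator matrices (a real system would learn them); all names and dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

def embed(x, W):
    """Project modality-specific features into the shared space, L2-normalized."""
    z = x @ W
    return z / np.linalg.norm(z, axis=1, keepdims=True)

# Toy setup: 5 items share a 16-d latent code, observed through two modalities.
latent = rng.normal(size=(5, 16))
A = rng.normal(size=(16, 64))                  # "image" feature generator (64-d)
B = rng.normal(size=(16, 32))                  # "text" feature generator (32-d)
img_feats, txt_feats = latent @ A, latent @ B

# Modality-specific projections back into the shared 16-d space.
W_img, W_txt = np.linalg.pinv(A), np.linalg.pinv(B)

def text_to_image(query_idx, k=1):
    """Rank gallery images by cosine similarity to a text query in shared space."""
    sims = embed(txt_feats[query_idx:query_idx + 1], W_txt) @ embed(img_feats, W_img).T
    return np.argsort(-sims[0])[:k]
```

Because both projections recover the same latent code, each text query retrieves its matching image first; in practice the alignment is learned with contrastive or adversarial objectives rather than computed by pseudoinverse.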
Recommender systems are essential for information filtering but often suffer from the cold start problem caused by limited interaction data. Recent advances in deep learning (DL) and large language models (LLMs) have shown promise, yet systematic analysis of their effectiveness remains scarce. To address this gap, we introduce a paradigm-driven taxonomy that categorizes solutions by their primary source of information: content, structure, transfer, and generation. Within this framework, DL methods have matured in leveraging content and structural information from interaction logs and multimodal data, while LLMs demonstrate advantages in text-rich and data-sparse environments through transfer-based paradigms that exploit semantic understanding and pre-trained knowledge. Furthermore, emerging generative approaches show potential for synthesizing data or relations to alleviate information scarcity. No universal solution exists; effectiveness depends on the dominant paradigm of a given scenario as well as data availability and computational cost. Combining DL and LLMs offers substantial opportunities, including enhanced feature representation, data augmentation, and hybrid pipelines. However, research gaps persist, particularly the lack of standardized evaluation metrics and the limited exploration of integration strategies. Addressing these challenges through a paradigm-aware perspective could significantly improve the robustness and adaptability of cold-start recommendation in diverse contexts.
"A Review of Deep Learning and Large Language Models for Cold Start Problem in Recommender Systems". Chenlong Liu, Daguang Jiang, Yi Cai, Hui Li. WIREs Data Mining and Knowledge Discovery, doi:10.1002/widm.70068. Published 2026-02-04.
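Of the four paradigms in the taxonomy above, the content paradigm is the simplest to illustrate: when items lack interaction data, score them by content-feature similarity to what the user has already consumed. The sketch below is our own minimal illustration of that idea, not code from the survey; feature layout and function name are assumptions.

```python
import numpy as np

def recommend_cold(user_history, item_features, k=1):
    """Content-paradigm fallback: rank items (including brand-new,
    interaction-free ones) by cosine similarity between their content
    features and a profile averaged over the user's consumed items."""
    feats = item_features / np.linalg.norm(item_features, axis=1, keepdims=True)
    profile = feats[user_history].mean(axis=0)
    scores = feats @ profile
    scores[user_history] = -np.inf        # never re-recommend seen items
    return np.argsort(-scores)[:k]
```

A user who consumed two items of one genre is steered toward a new item of the same genre even though that item has zero interactions, which is exactly the situation collaborative filtering cannot handle.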
Toshitaka Hayashi, Dalibor Cimr, Hamido Fujita, Richard Cimler
This paper presents a critical review of one-class classification (OCC). Earlier articles defined OCC vaguely, which allowed OCC models to learn from multiple classes. This paper reconsiders the OCC definition so that the training data includes solely one class and samples belonging to other classes are unavailable. Moreover, the review introduces a new OCC taxonomy consisting of boundary-, distance-, probability-, fake-, and subtask-based approaches. Additionally, the article reveals that many OCC algorithms have in fact learned from multiple classes. Common violations include accessing unlabeled datasets, importing other datasets, and tuning hyperparameters based on testing results. In addition, this paper identifies two gray zones in OCC: creating fake datasets and fake OCC problems from scratch, and decomposing samples into smaller units to access multiple classes. These gray zones could contribute to a future theory of learning from a single class. On the other hand, applications of OCC may use multiple classes; generally, multiple classes outperform a single class. However, such applications are no longer OCC once they learn from multiple classes.
"Critical Review for One-Class Classification: Recent Advances and Reality Behind Them". WIREs Data Mining and Knowledge Discovery, doi:10.1002/widm.70058. Published 2026-02-04.
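Under the strict definition above (training data contains solely one class), a distance-based approach from the review's taxonomy can be sketched in a few lines: fit a centroid and radius on target-class samples only, then flag anything outside the radius. The class name and slack factor below are our own illustrative choices.

```python
import numpy as np

class DistanceOCC:
    """Distance-based one-class classifier trained on ONE class only:
    test points farther from the training centroid than any training
    point (times a slack factor) are rejected as non-target."""

    def fit(self, X, slack=1.1):
        self.center = X.mean(axis=0)
        d = np.linalg.norm(X - self.center, axis=1)
        self.radius = slack * d.max()
        return self

    def predict(self, X):
        d = np.linalg.norm(X - self.center, axis=1)
        return np.where(d <= self.radius, 1, -1)   # 1 = target, -1 = outlier
```

Note that nothing here touches a second class, an unlabeled pool, or test results for tuning, so it avoids the common violations the review lists; tuning `slack` on test outcomes would reintroduce one of them.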
In an era of rapid digital communication, the proliferation of manipulated information has emerged as a critical global challenge that undermines the integrity of information. Misinformation, often spread unintentionally, and disinformation, deliberately crafted to deceive, have far-reaching consequences, including eroding public trust, disrupting democratic processes, and endangering public health. Various forms, such as fake news, manipulated media, fake reviews, spam, and phishing, exploit social media and communication platforms to mislead users. Numerous techniques have been developed to detect false content, as discussed in several review articles devoted to the topic, but without mentioning quantum computing approaches. Notably, recent quantum computing reviews have not addressed misinformation- or disinformation-related applications, despite growing interest in quantum methods across domains such as medicine, finance, and cybersecurity. This gap, together with the growth of relevant literature over the last two years, highlights a pressing need for a survey of research at the intersection of quantum computing and misinformation and disinformation detection, which this work aims to address.
"Quantum Frontiers in the Battle for Information Integrity". Vincenzo Loia, Stefania Tomasiello. WIREs Data Mining and Knowledge Discovery, doi:10.1002/widm.70067. Published 2026-02-01.
Language Models (LMs) have demonstrated exceptional performance across various Natural Language Processing (NLP) tasks. Despite these advancements, LMs can inherit and amplify societal biases related to sensitive attributes such as gender and race, limiting their adoption in real-world applications. Therefore, fairness has been extensively explored in LMs, leading to the proposal of various fairness notions. However, the lack of clear agreement on which fairness definition to apply in specific contexts and the complexity of understanding the distinctions between these definitions can create confusion and impede further progress. To this end, this paper proposes a systematic survey that clarifies the definitions of fairness as they apply to LMs. Specifically, we begin with a brief introduction to LMs and fairness in LMs, followed by a comprehensive, up-to-date overview of existing fairness notions in LMs and the introduction of a novel taxonomy that categorizes these concepts based on their transformer architecture: encoder-only, decoder-only, and encoder-decoder LMs. We further illustrate each definition through experiments, showcasing their practical implications and outcomes. Finally, we discuss current research challenges and open questions, aiming to foster innovative ideas and advance the field. The repository is publicly available online at https://github.com/vanbanTruong/Fairness-in-Large-Language-Models/tree/main/definitions. This article is categorized under: Commercial, Legal, and Ethical Issues > Fairness in Data Mining; Commercial, Legal, and Ethical Issues > Social Considerations; Technologies > Artificial Intelligence.
"Fairness Definitions in Language Models Explained". Zhipeng Yin, Zichong Wang, Avash Palikhe, Wenbin Zhang. WIREs Data Mining and Knowledge Discovery, doi:10.1002/widm.70063. Published 2026-01-14.
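As a concrete instance of the kind of group-fairness notion such surveys formalize, demographic parity compares positive-prediction rates across demographic groups. The helper below is a generic illustration, not taken from the paper's experiments.

```python
import numpy as np

def demographic_parity_gap(pred, group):
    """Absolute difference in positive-prediction rates between two
    groups (encoded 0/1); 0.0 means demographic parity holds exactly."""
    pred, group = np.asarray(pred), np.asarray(group)
    return abs(pred[group == 0].mean() - pred[group == 1].mean())
```

For binary predictions `[1, 1, 0, 0]` with group labels `[0, 0, 1, 1]` the gap is 1.0 (maximal disparity), while `[1, 0, 1, 0]` with the same groups yields 0.0. Other notions in the survey's taxonomy (e.g., equalized odds) additionally condition on the true label.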
Road safety is a critical issue due to its significant impact on public health and economic stability. Traffic accidents result in millions of fatalities and injuries globally each year, imposing substantial healthcare costs and loss of productivity. Therefore, systematic data collection is urgently needed to identify key road safety challenges and implement effective solutions. This study examines recent advancements in artificial intelligence (AI) and deep learning techniques for detecting road anomalies, including potholes and speed bumps, utilizing cost-effective, commercially available cameras. It provides a comprehensive overview of various methodologies for detecting road damage, emphasizing the value of integrating visual, qualitative, and quantitative analyses. Additionally, the study evaluates various algorithms, including R-CNN (Regions with CNN features) for object detection and CrackU-net for crack detection, to analyze their effectiveness in enhancing road maintenance and safety. Beyond technical methods, the study also examines global trends in road safety, emphasizing the need for comprehensive policy frameworks and knowledge transfer from developed to developing countries to reduce fatalities and enhance road infrastructure. Finally, the study addresses challenges such as limited visibility, adverse weather conditions, and the current limitations of existing models, while discussing the potential for future advancements in automated road safety systems. This article is categorized under: Technologies > Artificial Intelligence
"Artificial Intelligence for Road Anomaly Detection: A Review". Rohit Samanta, Amutha Sadasivan, Muthu Subash Kavitha, Surendiran Balasubramanian. WIREs Data Mining and Knowledge Discovery, doi:10.1002/widm.70054. Published 2026-01-06.
In the contemporary healthcare landscape, secure and efficient data sharing is paramount, especially when utilizing cloud‐based platforms. The advent of cloud computing has revolutionized healthcare data sharing, offering unparalleled accessibility and scalability. However, the inherent risks associated with data breaches and privacy violations pose significant challenges, necessitating robust security measures. In such scenarios, the integration of threat intelligence with privacy‐preserving techniques becomes imperative to safeguard sensitive healthcare information. This research introduces a novel algorithm, FedGANet, alongside an integrated Privacy‐Preserving Threat Intelligence Model (FedGAN‐PPTIM), developed to strengthen secure healthcare data exchange within cloud and IoMT environments. FedGANet enhances traditional security paradigms by jointly leveraging Generative Adversarial Networks (GANs) to synthesize realistic threat scenarios and Federated Learning (FL) to enable decentralized model training without exposing sensitive patient data. The model further aligns with interoperability considerations, supporting seamless integration into diverse clinical ecosystems. The proposed FedGAN‐PPTIM framework is extensively compared with established privacy‐preserving and threat intelligence approaches across multiple evaluation metrics, including privacy leakage, threat detection rate, false positive rate, and communication overhead. The simulation analysis demonstrates that FedGANet outperforms existing methods, significantly reducing privacy leakage and communication overhead while maintaining high threat detection rates and low false positive rates. These results underscore the efficacy of FedGANet in addressing privacy and security challenges in healthcare data sharing. This article is categorized under: Technologies > Cloud Computing Technologies > Artificial Intelligence Commercial, Legal, and Ethical Issues > Security and Privacy
{"title":"A Privacy‐Preserving Threat Intelligence Model for Secure Healthcare Data Sharing in the Cloud","authors":"I. Sakthidevi, G. Fathima","doi":"10.1002/widm.70064","DOIUrl":"https://doi.org/10.1002/widm.70064","url":null,"abstract":"In the contemporary healthcare landscape, secure and efficient data sharing is paramount, especially when utilizing cloud‐based platforms. The advent of cloud computing has revolutionized healthcare data sharing, offering unparalleled accessibility and scalability. However, the inherent risks associated with data breaches and privacy violations pose significant challenges, necessitating robust security measures. In such scenarios, the integration of threat intelligence with privacy‐preserving techniques becomes imperative to safeguard sensitive healthcare information. This research introduces a novel algorithm, FedGANet, alongside an integrated Privacy‐Preserving Threat Intelligence Model (FedGAN‐PPTIM), developed to strengthen secure healthcare data exchange within cloud and IoMT environments. FedGANet enhances traditional security paradigms by jointly leveraging Generative Adversarial Networks (GANs) to synthesize realistic threat scenarios and Federated Learning (FL) to enable decentralized model training without exposing sensitive patient data. The model further aligns with interoperability considerations, supporting seamless integration into diverse clinical ecosystems. The proposed FedGAN‐PPTIM framework is extensively compared with established privacy‐preserving and threat intelligence approaches across multiple evaluation metrics, including privacy leakage, threat detection rate, false positive rate, and communication overhead. The simulation analysis demonstrates that FedGANet outperforms existing methods, significantly reducing privacy leakage and communication overhead while maintaining high threat detection rates and low false positive rates. 
These results underscore the efficacy of FedGANet in addressing privacy and security challenges in healthcare data sharing. This article is categorized under: <jats:list list-type=\"simple\"> <jats:list-item> Technologies > Cloud Computing </jats:list-item> <jats:list-item> Technologies > Artificial Intelligence </jats:list-item> <jats:list-item> Commercial, Legal, and Ethical Issues > Security and Privacy </jats:list-item> </jats:list>","PeriodicalId":501013,"journal":{"name":"WIREs Data Mining and Knowledge Discovery","volume":"36 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2026-01-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145902467","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
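The federated-learning component described for FedGANet rests on a standard idea: each site trains on its own data and only model parameters, never patient records, are sent for aggregation. The sketch below shows that idea in its most minimal form (federated averaging of a one-parameter linear model); the model, learning rate, and client data are illustrative assumptions and do not represent the FedGAN‐PPTIM implementation.

```python
# Minimal federated-averaging sketch: two "hospitals" hold private (x, y)
# pairs; each round they take one local gradient step on y = w*x and the
# server averages the resulting parameters. No raw data leaves a client.
# All numbers here are illustrative assumptions.

def local_update(w, data, lr=0.1):
    """One gradient step of a 1-D linear model y = w*x on local data."""
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def fed_avg(client_weights):
    """Server-side aggregation: plain average of client parameters."""
    return sum(client_weights) / len(client_weights)

# Both clients' data follow y = 2x, so the global model should approach w = 2.
clients = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0)]]
w_global = 0.0
for _ in range(50):  # communication rounds
    w_global = fed_avg([local_update(w_global, d) for d in clients])
print(round(w_global, 2))  # -> 2.0
```

Real deployments layer secure aggregation, differential privacy, or (as in the article above) GAN-synthesized threat scenarios on top of this loop; the communication-overhead metric the article evaluates corresponds to the per-round parameter exchange shown here.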
With the advancement of generative artificial intelligence, AI‐generated image methods have developed rapidly in interior design rendering. These methods enable the rapid generation of creative interior design renderings, but the generated images carry uncertainties that conflict with the consistency requirements of design renderings. Researchers have explored various approaches to enhance consistency in AI‐generated images. This review summarizes the methods and roles of generative artificial intelligence in interior design compared with traditional techniques, and the relationships between the generated images and controlling parameters such as workflow nodes, prompts, and models. Image consistency is a critical factor in the design generation process; reported methods for controlling interior design renderings include prompts, image‐to‐image translation, ControlNet, IP‐Adapter, LoRA, and SAM, among others. The evidence reviewed indicates that ControlNet can constrain positional relationships, IP‐Adapter can impose different styles, LoRA excels at customized styles, and SAM can modify local regions. This article is categorized under: Technologies > Artificial Intelligence Commercial, Legal, and Ethical Issues > Fairness in Data Mining
{"title":"A Review on the Consistency of AI‐Generated Images for Interior Design Rendering","authors":"Shuangyang Tan, Shasha Chen","doi":"10.1002/widm.70056","DOIUrl":"https://doi.org/10.1002/widm.70056","url":null,"abstract":"With the advancement of generative artificial intelligence, AI‐generated image methods have developed rapidly in interior design rendering. These methods enable the rapid generation of creative interior design renderings, but the generated images carry uncertainties that conflict with the consistency requirements of design renderings. Researchers have explored various approaches to enhance consistency in AI‐generated images. This review summarizes the methods and roles of generative artificial intelligence in interior design compared with traditional techniques, and the relationships between the generated images and controlling parameters such as workflow nodes, prompts, and models. Image consistency is a critical factor in the design generation process; reported methods for controlling interior design renderings include prompts, image‐to‐image translation, ControlNet, IP‐Adapter, LoRA, and SAM, among others. The evidence reviewed indicates that ControlNet can constrain positional relationships, IP‐Adapter can impose different styles, LoRA excels at customized styles, and SAM can modify local regions. 
This article is categorized under: <jats:list list-type=\"simple\"> <jats:list-item> Technologies > Artificial Intelligence </jats:list-item> <jats:list-item> Commercial, Legal, and Ethical Issues > Fairness in Data Mining </jats:list-item> </jats:list>","PeriodicalId":501013,"journal":{"name":"WIREs Data Mining and Knowledge Discovery","volume":"41 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2026-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145897539","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
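Consistency between generated renderings, the central concern of the review above, can be quantified in many ways; the simplest is a per-pixel error between two renderings of the same scene. The sketch below uses mean squared error on toy grayscale "images" as an illustration of that idea only. The 2x2 images and the tolerance value are assumptions for the example, not measures or data from the review.

```python
# Illustrative consistency check: mean squared error between two generated
# renderings of the same scene (lower = more consistent). The tiny images
# and the tolerance are illustrative assumptions.

def mse(img_a, img_b):
    """Mean squared error over two equally sized grayscale images."""
    diffs = [(a - b) ** 2
             for row_a, row_b in zip(img_a, img_b)
             for a, b in zip(row_a, row_b)]
    return sum(diffs) / len(diffs)

def is_consistent(img_a, img_b, tolerance=100.0):
    """Treat two renderings as consistent if their MSE is within tolerance."""
    return mse(img_a, img_b) <= tolerance

render_1 = [[200, 180], [160, 140]]
render_2 = [[202, 179], [158, 141]]  # small generation-to-generation drift
render_3 = [[90, 250], [30, 200]]    # stylistically divergent output

print(is_consistent(render_1, render_2), is_consistent(render_1, render_3))
# -> True False
```

Pixelwise metrics like this are known to miss perceptual and structural consistency, which is why control mechanisms such as ControlNet (spatial layout) and IP‐Adapter or LoRA (style) discussed in the review operate at generation time rather than as post hoc filters.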