Pub Date: 2026-02-09 | DOI: 10.1177/2167647X261423109
Xianfeng Gong, Mingyang Mao
This study identifies the critical factors that shape college students' adoption of AI-generated news, with a specific focus on integrating Big Data methodologies into the Technology Acceptance Model (TAM) framework. Building on TAM, the research incorporates "trust" as a core variable to develop a dual-path theoretical model that combines technological cognition (e.g., perceived usefulness, perceived ease of use) with psychological emotion. Unlike traditional TAM-based studies that rely solely on questionnaire data, this research enriches its data sources with Big Data techniques, including the collection and analysis of college students' real-time behavioral data (e.g., AI news reading duration, sharing frequency, source verification clicks) and unstructured text data (e.g., sentiment orientation in comment sections), to complement survey data from 300 college students. Analyzing the questionnaire data with structural equation modeling, the study found that trust has the strongest direct positive effect on willingness to use (β = 0.49, p < 0.001), significantly greater than that of perceived usefulness (β = 0.35, p < 0.001). Although perceived ease of use does not directly affect willingness to use, it has significant indirect effects through enhancing trust and perceived usefulness. The results show that in the AI news context, where risk perception is high, trust is a more crucial psychological mechanism than traditional technological cognitive factors. These findings expand the explanatory boundaries of the TAM model in new technology fields and provide empirical evidence and practical guidance for AI developers optimizing system credibility and for educators conducting algorithmic literacy training.
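The reported standardized path coefficients (β) can be illustrated with a small regression sketch. This is not the authors' SEM model or data: the synthetic data-generating process, variable names, and the numpy-only estimation below are assumptions for illustration, showing how standardized betas compare the relative influence of trust and perceived usefulness on willingness to use.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300  # sample size matching the study's survey

# Synthetic latent scores (illustration only, not the authors' data):
trust = rng.normal(size=n)
usefulness = 0.5 * trust + rng.normal(scale=0.8, size=n)
intention = 0.49 * trust + 0.35 * usefulness + rng.normal(scale=0.6, size=n)

def zscore(x):
    # Standardize so regression coefficients become comparable betas
    return (x - x.mean()) / x.std()

X = np.column_stack([zscore(trust), zscore(usefulness)])
y = zscore(intention)
betas, *_ = np.linalg.lstsq(X, y, rcond=None)
print(dict(zip(["trust", "usefulness"], betas.round(2))))
```

On standardized variables, the ordinary-least-squares coefficients play the role of the path betas reported in the abstract; a full SEM would additionally model the indirect paths through trust and perceived usefulness.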
Title: "Perceived Usefulness, Trust, and Behavioral Intention: A Study on College Student User Adoption Behaviors of Artificial Intelligence Generated News Based on Technology Acceptance Model." Journal: Big Data.
Pub Date: 2026-02-09 | DOI: 10.1177/2167647X251411174
Qurat Ul Ain, Hammad Afzal, Fazli Subhan, Mazliham Mohd Suud, Younhyun Jung
Dysarthria, a motor speech disorder characterized by slurred and often unintelligible speech, presents substantial challenges for effective communication. Conventional automatic speech recognition systems frequently underperform on dysarthric speech, particularly in severe cases. To address this gap, we introduce low-latency acoustic transcription and textual encoding (LATTE), an advanced framework designed for real-time dysarthric speech recognition. LATTE integrates preprocessing, acoustic processing, and transcription mapping into a unified pipeline, with its core powered by a hybrid architecture that combines convolutional layers for acoustic feature extraction with bidirectional temporal layers for modeling temporal dependencies. Evaluated on the UA-Speech dataset, LATTE achieves a word error rate of 12.5%, a phoneme error rate of 8.3%, and a character error rate of 1%. By enabling accurate, low-latency transcription of impaired speech, LATTE provides a robust foundation for enhancing communication and accessibility in both digital applications and real-time interactive environments.
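Word error rate, the headline metric above, is the word-level Levenshtein distance between reference and hypothesis divided by the reference length. A minimal sketch of the standard computation (not the authors' evaluation code):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + insertions + deletions) / reference length,
    computed with the standard Levenshtein dynamic program over words."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost)  # substitution / match
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)
```

Phoneme and character error rates follow the same formula applied to phoneme and character sequences instead of words.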
Title: "Advancing Dysarthric Speech-to-Text Recognition with LATTE: A Low-Latency Acoustic Modeling Approach for Real-Time Communication." Journal: Big Data.
Pub Date: 2026-02-06 | DOI: 10.1177/2167647X251409135
Pir Noman Ahmad, Muhammad Shahid Anwar, Saleha Masood, Atta Ur Rehman, Muhammad Zubair
Named entity recognition (NER) is a core task in natural language processing that identifies and classifies entities such as people, organizations, and locations within text. It has traditionally been applied in areas such as text summarization, machine translation, and question answering. In recent years, NER has gained growing importance in health care, where electronic clinical records and online platforms generate large amounts of unstructured medical data. However, applying NER in clinical contexts introduces unique challenges due to the complexity of medical terminology and the need for high accuracy. In this study, we focus on developing a real-time, low-latency NER system designed for cross-lingual speech-to-text applications, with particular emphasis on cancer therapy-related clinical records and traditional Chinese medicine (TCM). We explore the integration of deep learning (DL) architectures optimized for low-latency neural processing to extract structured information from multilingual spoken content in medical settings, particularly in multimodal environments. We evaluate DL-based methods and propose a semi-supervised approach that combines TCM-specific corpora with biomedical resources to improve recognition accuracy. The findings provide both a systematic review of current methods and practical insights for building real-time clinical applications that support decision-making and information management in health care.
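To make the task concrete, here is a toy dictionary-based tagger over a clinical sentence. The lexicon entries and labels are invented for illustration; the systems discussed in the abstract use neural sequence models, and this sketch only shows what "identify and classify entities" means operationally:

```python
# Toy lexicon mapping phrases to entity labels (all entries illustrative)
LEXICON = {
    "cisplatin": "DRUG",
    "lung cancer": "DISEASE",
    "ginseng": "TCM_HERB",
}

def tag_entities(text):
    """Return (phrase, label, offset) triples for lexicon hits, in order
    of appearance. Real clinical NER handles variants, nesting, and context."""
    found = []
    lower = text.lower()
    for phrase, label in LEXICON.items():
        start = lower.find(phrase)
        if start != -1:
            found.append((phrase, label, start))
    return sorted(found, key=lambda t: t[2])

ents = tag_entities("Patient with lung cancer received cisplatin and ginseng.")
```

Dictionary matching of this kind is often used to bootstrap training data for semi-supervised approaches like the one the abstract proposes.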
Title: "Real-Time Named Entity Recognition from Textual Electronic Clinical Records in Cancer Therapy Using Low-Latency Neural Networks." Journal: Big Data.
Pub Date: 2025-12-22 | DOI: 10.1177/2167647X251406211
Victor Chang, Péter Kacsuk, Gary Wills, Reinhold Behringer
Title: "Editorial Summary of Selected Articles." Journal: Big Data.
Pub Date: 2025-12-20 | DOI: 10.1177/2167647X251398729
Saifullah Jan, Iftikhar Alam, Inayat Khan
This study presents a real-time, context-adaptive advertisement ("ad" for short) recommendation framework that dynamically updates user context and uses a multistage ranking and filtering pipeline to deliver highly relevant, personalized ads. Contextual ads improve conversion rates and play a significant role in e-commerce. Non-contextual ads, by contrast, frustrate advertisers and users alike: commercialization efforts frequently prove ineffective due to poor user engagement, as evidenced by high ad-skipping rates. Current practice in digital advertising relies on non-contextual, irrelevant ads, resulting in poor conversion rates. To address this problem, this article explores semantically enriched, context-aware recommender systems that align ads with user interests. The proposed framework combines a user context extractor (UCE), a recommender system, an ads database, an ads ranker, and an ads filter to deliver real-time, personalized recommendations. The study also examines how high-quality, relevant content and clickable advertising improve customer relationships and reduce ad avoidance. With contextual augmentation, ads become more relevant and engaging and are projected to achieve higher click-through rates in real-world applications. Customer engagement and satisfaction would also increase as ad fatigue declines and relevant content is delivered, and ad avoidance should fall because users respond more willingly to ads that match their interests. Businesses gain higher conversions because more relevant recommendations drive greater user interaction.
Evaluated using a k-nearest neighbor-based model, the system achieved improved precision (from 0.8275 to 0.9283), recall (from 0.4628 to 0.5201), and normalized discounted cumulative gain (from 0.9906 to 0.9915). These gains demonstrate that integrating fine-grained, dynamic user context substantially enhances recommendation quality and user engagement, offering a scalable foundation for intelligent, adaptive advertising systems. This research contributes toward the future development of AI-enabled advertising strategies, emphasizing dynamic ad targeting paired with personalization and thus improved conversion rates.
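Normalized discounted cumulative gain (NDCG), one of the reported metrics, rewards placing highly relevant ads near the top of a ranked list. A minimal sketch of the standard formulation (not the authors' evaluation code; the example relevance lists are invented):

```python
import math

def dcg(relevances):
    """Discounted cumulative gain with the standard log2(rank + 1) discount."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))

def ndcg(ranked_relevances):
    """DCG of the given ranking divided by the DCG of the ideal ranking."""
    ideal = dcg(sorted(ranked_relevances, reverse=True))
    return dcg(ranked_relevances) / ideal if ideal > 0 else 0.0

perfect = ndcg([3, 2, 1])   # relevance already in descending order -> 1.0
reversed_ = ndcg([1, 2, 3])  # best item ranked last -> below 1.0
```

An NDCG near 1 (as in the reported 0.9906 to 0.9915) means the ranking is already close to ideal, so even small gains indicate tighter ordering of the most relevant ads.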
Title: "Does Context Matter? The Role of Fine-Tuned Contextual Augmentation in Online Ad Delivery on Social Media." Journal: Big Data.
Pub Date: 2025-12-19 | DOI: 10.1177/2167647X251399169
Qiong He, Zhenwei Yang, Yijia Li
Enhancing brand value is critical for new energy vehicle (NEV) enterprises amid fierce competition. This study leverages online consumer reviews as core big data to drive brand equity improvement via advanced big data analytics. A large-scale dataset of 5564 reviews for the top five best-selling NEVs was collected from "Dongche Di" via web scraping, followed by a big data processing pipeline (data cleaning, Jieba segmentation, and stop-word filtering). To mine the unstructured text, we used word cloud visualization, semantic network analysis, and a Latent Dirichlet Allocation (LDA) and Long Short-Term Memory (LSTM) fusion model: LDA identified key consumer concern dimensions, while LSTM enabled deep sentiment classification. The analysis revealed five core NEV brand perception dimensions (range, driving experience, interior space, price, and high-speed performance) and quantified sentiment: negativity was most prominent in driving experience, weakest in interior space, and dominant overall. Guided by the Consumer-Based Brand Equity model, we proposed brand enhancement strategies. This study showcases the power of big data analytics in scaling consumer perception understanding, offering a data-centric framework for NEV firms to optimize branding.
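The cleaning and stop-word-filtering stage of such a pipeline can be sketched in a few lines. The stop list, sample reviews, and the character-level stand-in tokenizer below are illustrative only; the study itself uses Jieba segmentation (jieba.lcut) on the full review corpus:

```python
from collections import Counter

STOP_WORDS = {"的", "了", "是", "很"}  # tiny illustrative stop list

def preprocess(reviews, tokenize):
    """Clean -> tokenize -> stop-word filter, mirroring the pipeline's shape.
    `tokenize` would be jieba.lcut in the actual Chinese-text setting."""
    docs = []
    for text in reviews:
        tokens = [t for t in tokenize(text.strip()) if t and t not in STOP_WORDS]
        docs.append(tokens)
    return docs

def toy_tokenizer(s):
    # Character-level stand-in so the sketch stays dependency-free
    return list(s)

docs = preprocess(["续航很好", "价格高了"], toy_tokenizer)
term_freq = Counter(t for d in docs for t in d)
```

The resulting token lists are what downstream topic models like LDA consume as bag-of-words documents.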
Title: "Enhancing NEV Brand Equity Through Big Data Analytics: An LDA-LSTM Approach to Mining Online Consumer Reviews." Journal: Big Data.
Pub Date: 2025-12-12 | DOI: 10.1177/2167647X251403895
Zhaodi Yu, Zhenxiang Xu, Jiangang Qi
In the context of a global risk society, emergency law has become a critical field for balancing the expansion of state power with the protection of civil rights during crises. Despite its growing importance, a systematic, quantitative comparison of the knowledge landscapes of international and Chinese emergency law scholarship has been notably absent. This study employs bibliometric and knowledge mapping analysis, utilizing CiteSpace software. A total of 274 publications were retrieved from the Web of Science Core Collection and 391 from the China National Knowledge Infrastructure database. These data were used to systematically map and compare the research status, collaborative networks, and core themes of the two academic communities. The findings indicate that while both international and Chinese research are crisis-driven, with publication surges corresponding to major events such as the 9/11 attacks, SARS, and the COVID-19 pandemic, they function as two academically isolated communities with no author-level collaboration. A fundamental divergence in research paradigms was identified. International scholarship follows a "limitation-oriented" paradigm, rooted in liberal constitutionalism, focusing on the tension between emergency powers and human rights, and the risks of a state of exception. In contrast, Chinese research adopts a "construction-oriented" paradigm aimed at building an efficient, state-centric crisis response system, dominated by concepts such as emergency management and the "one plan and three sub-systems" framework. This study concludes that there are two worlds of emergency law. The international paradigm primarily treats emergency law as a mechanism to constrain state authority and protect individual rights from government overreach. In contrast, the Chinese paradigm views law as an instrument to enhance state capacity and ensure effective crisis management. 
This fundamental divergence in normative goals and theoretical foundations identified in this study presents significant theoretical and practical challenges for global emergency governance and offers a clear direction for future comparative legal studies.
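Keyword co-occurrence counting is the raw material behind CiteSpace-style knowledge maps like those used above. A minimal sketch (the per-publication keyword sets are invented for illustration, not drawn from the study's 274 + 391 retrieved publications):

```python
from collections import Counter
from itertools import combinations

# Toy per-publication keyword sets (illustration only)
papers = [
    {"emergency powers", "human rights", "state of exception"},
    {"emergency management", "crisis response"},
    {"emergency powers", "human rights"},
]

def cooccurrence(papers):
    """Count how often each keyword pair appears in the same publication;
    these pair counts become edge weights in a keyword co-occurrence map."""
    pairs = Counter()
    for keywords in papers:
        for a, b in combinations(sorted(keywords), 2):
            pairs[(a, b)] += 1
    return pairs

co = cooccurrence(papers)
```

Clustering the resulting weighted graph is what surfaces theme groupings such as the "limitation-oriented" and "construction-oriented" paradigms the study identifies.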
Title: "The Two Worlds of Emergency Law: A Comparative Study of International and Chinese Scholarship Through Knowledge Domain Mapping." Journal: Big Data.
Pub Date: 2025-12-04 | DOI: 10.1177/2167647X251399606
Suhas Alalasandra Ramakrishnaiah, Yasir Abdullah Rabi, Ananth John Patrick, Mohammad Shabaz, Surbhi B Khan, Rijwan Khan, Ahlam Almusharraf
Engineering teams need timely signals about evolving requirements and release risk, yet multilingual fan discourse around live sports is noisy, code-switched, and saturated with sarcasm and event-driven drift. We present Hybrid DeepSentX, an AI-driven framework that converts crowd commentary into actionable requirements insight and sprint-level risk scores. The pipeline couples multilingual transformer encoders with an inductive GraphSAGE conversation graph to inject relational context across posts, and adds a reinforcement learner whose reward is shaped to prioritize correct decisions on sarcasm-heavy items and rapidly shifting events. We assembled a million-plus posts from X, Reddit, and sports forums and evaluated the framework against strong baselines, including BERT, long short-term memory, support-vector machines, and recent hybrid models, with significance tests, calibration analysis, ablations, and efficiency profiling. DeepSentX achieved higher macro-averaged accuracy and F1 on code-switched and sarcastic subsets, reduced missed risk flags, and produced developer-facing artefacts that directly support backlog grooming and defect triage. Relative to prior hybrids that combine transformers with either graph reasoning or reinforcement alone, our contributions are fourfold: (i) a unified multilingual design that integrates transformer, graph, and reinforcement components for sarcasm and drift robustness, (ii) an annotated multi-platform corpus with explicit code switching and sarcasm labels and per platform language balance, (iii) a rigorous comparative study reporting accuracy, calibration, latency, memory, and parameter count, and (iv) deployment artefacts that turn model outputs into requirement clusters and sprint risk scores suitable for continuous planning.
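The GraphSAGE component mentioned above aggregates each post's neighborhood before combining it with the post's own embedding. A dependency-free numpy sketch of one mean-aggregation layer (the toy graph, dimensions, and random weights are illustrative, not the paper's architecture):

```python
import numpy as np

def sage_mean_layer(H, adj, W_self, W_neigh):
    """One GraphSAGE layer with mean aggregation:
    h_v' = ReLU(h_v @ W_self + mean(h_u for u in N(v)) @ W_neigh)."""
    n = H.shape[0]
    out = []
    for v in range(n):
        neighbors = [u for u in range(n) if adj[v][u]]
        m = H[neighbors].mean(axis=0) if neighbors else np.zeros(H.shape[1])
        out.append(H[v] @ W_self + m @ W_neigh)
    return np.maximum(np.stack(out), 0.0)  # ReLU

# Tiny conversation graph: post 0 is replied to by posts 1 and 2
adj = [[0, 1, 1], [1, 0, 0], [1, 0, 0]]
rng = np.random.default_rng(1)
H = rng.normal(size=(3, 4))       # per-post embeddings (e.g., from a transformer)
W_self = rng.normal(size=(4, 4))
W_neigh = rng.normal(size=(4, 4))
H1 = sage_mean_layer(H, adj, W_self, W_neigh)
```

Because the aggregator is defined per-neighborhood rather than per-graph, the layer is inductive: it applies to conversation threads unseen at training time, which is what makes it suitable for streaming fan discourse.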
Title: "Hybrid DeepSentX Framework for AI-Driven Requirements Insight and Risk Prediction in Multilingual Sports Using Natural Language Processing." Journal: Big Data.
Pub Date : 2025-12-01 | Epub Date : 2025-08-22 | DOI: 10.1177/2167647X251366060
Xuna Wang
With the rapid development of social media and online platforms, the speed and influence of emergency-event dissemination in cyberspace have increased significantly. Swift changes in public opinion, especially the phenomenon of opinion reversals, exert profound impacts on social stability and government credibility. The hypernetwork structure, characterized by its multilayered and multidimensional complexity, offers a new theoretical framework for analyzing multiple agents and their interactions in the evolution of public opinion. Based on hypernetwork theory, this study constructs a four-layer subnet model encompassing a user interaction network, an event evolution network, a semantic association network, and an emotional conduction network. By extracting network structural features and conducting cross-layer linkage analysis, an identification system for public opinion reversals in emergencies is established. Taking the donation incident involving Hongxing Erke during the Henan rainstorm in 2021 as a case study, an empirical analysis of the public opinion reversal process is conducted. The results indicate that the proposed hypernetwork model can effectively identify key nodes in public opinion reversals, and that the multi-indicator collaborative identification system aids in rapidly and effectively detecting reversal signals. This study not only provides new methodological support for the dynamic identification of public opinion reversals but also offers theoretical references and practical guidance for public opinion monitoring and emergency response decision-making in emergencies.
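The abstract names the four layers but not how the cross-layer linkage analysis combines them. One crude sketch of the idea — scoring each node by a weighted degree across layers to surface "key nodes" — is below; the edge data, layer weights, and the degree-based indicator are all illustrative assumptions:

```python
# Four layers as sets of undirected edges; names follow the abstract,
# the edges themselves are made-up toy data ("u*" users, "e*" events).
layers = {
    "user_interaction": {("u1", "u2"), ("u2", "u3"), ("u1", "u3")},
    "event_evolution":  {("e1", "e2")},
    "semantic":         {("u1", "e1"), ("u3", "e2")},
    "emotional":        {("u1", "u2"), ("u3", "u1")},
}

def degree(layer, node):
    # number of edges in this layer touching the node
    return sum(1 for a, b in layer if node in (a, b))

def cross_layer_score(node, weights):
    # weighted degree across layers as a crude "key node" indicator
    return sum(w * degree(layers[name], node) for name, w in weights.items())

weights = {"user_interaction": 0.4, "semantic": 0.3, "emotional": 0.3}
key_node = max({"u1", "u2", "u3"}, key=lambda n: cross_layer_score(n, weights))
```

A real implementation would replace the toy degree measure with richer structural features per layer (centrality, propagation depth, sentiment polarity shifts) feeding the multi-indicator identification system the abstract describes.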
"A Study of Public Opinion Reversal Recognition of Emergency Based on Hypernetwork."
Xuna Wang
Big Data, pages 497-512. DOI: 10.1177/2167647X251366060
Pub Date : 2025-12-01 | DOI: 10.1177/2167647X251406607
Yuping Yan, Hanyang Xie, Liang Chen, You Wen, Huaquan Su
Data in power grid digital operation exhibit multisource heterogeneous characteristics, resulting in low integration efficiency and slow anomaly-detection response. To address this, this paper proposes a method for power grid digital operation data integration based on K-medoids clustering. Based on an analysis of the power grid digital operation structure, the basic service layer uses a Field Programmable Gate Array (FPGA) parallel architecture to enable millisecond-level synchronous acquisition and dynamic preprocessing of multisource data such as mechanical vibration, partial discharge signals, and temperature. The data are then fed to the cloud service layer, which performs data filtering and analysis through business integration, data analysis, and data access services, before being passed to the application layer via the database server. The application layer employs a K-medoids clustering method that introduces a density-weighted Euclidean distance metric and an adaptive centroid selection strategy, significantly enhancing clustering performance on multisource data. In particular, the proposed architecture supports real-time data processing and can be extended to cross-modal scenarios, including integration with speech-to-text systems in power grid monitoring; by aligning with low-latency neural network principles, it facilitates timely decision-making in intelligent operation environments. Experiments confirm the method's efficacy: it acquires and integrates multisource heterogeneous power grid digital operation data effectively, with the throughput of every data source exceeding 110 MB/s. The silhouette coefficient of the integrated data sets is greater than 0.91, indicating good separability and reliability, which enables rapid detection of data anomalies within the grid and lays a solid foundation for the operation and maintenance management of power grid digital operation.
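The abstract does not specify its density weighting or centroid selection rules. One plausible reading — local density estimated by a radius count, distances to dense medoids down-weighted, and seeding from the densest points — can be sketched as follows; the radius, the weighting form, and the seeding rule are illustrative assumptions, not the paper's method:

```python
import math

def euclid(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def density(points, i, radius):
    # local density: neighbours within `radius` (illustrative estimator)
    return sum(1 for j, p in enumerate(points)
               if j != i and euclid(points[i], p) <= radius)

def kmedoids(points, k, radius=1.0, iters=20):
    n = len(points)
    dens = [density(points, i, radius) for i in range(n)]

    def dw(i, m):
        # density-weighted Euclidean distance: dense medoids attract points
        return euclid(points[i], points[m]) / (1 + dens[m])

    # "adaptive" seeding: start from the k densest points
    medoids = sorted(range(n), key=lambda i: -dens[i])[:k]
    labels = []
    for _ in range(iters):
        # assignment step: each point joins its nearest (weighted) medoid
        labels = [min(medoids, key=lambda m: dw(i, m)) for i in range(n)]
        new = []
        for m in medoids:
            members = [i for i in range(n) if labels[i] == m]
            # update step: medoid = member minimising total in-cluster distance
            new.append(min(members, key=lambda c: sum(
                euclid(points[c], points[j]) for j in members)))
        if set(new) == set(medoids):
            break
        medoids = new
    return medoids, labels
```

On well-separated data this converges in a few iterations; a silhouette coefficient computed on the resulting labels (as the paper reports, >0.91) would then quantify the separability of the integrated clusters.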
"Method for Power Grid Digital Operation Data Integration Based on K-Medoids Clustering with Support for Real-Time Cross-Modal Applications."
Yuping Yan, Hanyang Xie, Liang Chen, You Wen, Huaquan Su
Big Data, vol. 13, no. 6, pages 453-470, published 2025-12-01. DOI: 10.1177/2167647X251406607