Pub Date: 2025-12-29 | DOI: 10.1109/ACCESS.2025.3649245
Sresth Khaitan;Islabudeen Mohamed Meerasha
The escalating scale and sophistication of cyberattacks pose a formidable challenge to conventional intrusion detection systems (IDS), which lack the flexibility to adapt to evolving threats. We propose a composite deep learning architecture that integrates an autoencoder (AE) for unsupervised feature compression, a one-dimensional (1D) convolutional neural network (CNN) for local pattern extraction, and a soft-voting ensemble of Support Vector Machine (SVM), Random Forest (RF), and XGBoost classifiers. We test our approach on three standard benchmarks – UNSW-NB15, NSL-KDD, and CICIDS2017 – to illustrate its robustness across both legacy and modern attack scenarios. Our approach achieves 99.81% accuracy on binary classification and 99.90% on multi-class classification for NSL-KDD. On UNSW-NB15, it delivers 99.19% binary accuracy and 98.41% multi-class accuracy. For CICIDS2017, the model attains 99.59% binary and 99.76% multi-class accuracy. These results outperform conventional machine learning baselines and confirm the benefit of combining deep feature learning with ensemble methods. Ablation studies show that each component – autoencoder, convolutional network, and ensemble – contributes meaningful gains, and statistical tests, including paired t-tests and analysis of variance, validate the significance of these improvements. Evaluated on both classic and modern benchmarks, the framework offers a versatile approach to real-time intrusion detection that delivers consistently high precision while adapting smoothly to new attack patterns.
"AE-CNN Ensemble: A Novel Architecture for Effective Network Intrusion Detection and Classification," IEEE Access, vol. 14, pp. 1320-1340.
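The abstract describes the ensemble stage only at a high level. As a rough sketch of what a soft-voting combination of SVM, RF, and XGBoost classifiers can look like (assuming scikit-learn and xgboost, with placeholder hyperparameters rather than the authors' configuration), consider:

```python
# Minimal sketch of the soft-voting stage described in the abstract above.
# Hyperparameters and the upstream AE/CNN feature extraction are placeholders,
# not the authors' actual configuration.
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.svm import SVC
from xgboost import XGBClassifier

def build_soft_voting_ensemble():
    svm = SVC(probability=True)          # probability=True enables soft voting
    rf = RandomForestClassifier(n_estimators=100)
    xgb = XGBClassifier(eval_metric="logloss")
    # Soft voting averages the predicted class probabilities of the three models.
    return VotingClassifier(
        estimators=[("svm", svm), ("rf", rf), ("xgb", xgb)],
        voting="soft",
    )

# Usage: ensemble.fit(latent_features, labels), where latent_features would be the
# AE/CNN representations; ensemble.predict_proba(x) returns the averaged probabilities.
```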
Pub Date: 2025-12-26 | DOI: 10.1109/ACCESS.2025.3648396
Nornazurah Nazir Ali;Hidayat Zainuddin;Jeefferie Abd Razak;Suzie Sukarti
Silicone Rubber (SiR) is a widely used composite insulating material for high-voltage (HV) outdoor applications. However, prolonged exposure to electrical and environmental stresses accelerates its degradation, necessitating continuous improvements in formulation. The conventional One Factor at a Time (OFAT) method is both material-intensive and time-consuming, limiting optimization efficiency. This study employs a Two-Level Full Factorial Design of Experiments (DoE) to systematically analyse and optimize multiple factors affecting SiR performance. Four key parameters were investigated: Alumina Trihydrate (ATH) filler concentration, Dicumyl Peroxide (DCP) curing agent concentration, mixing speed, and mixing time. The resistance to electrical tracking and erosion was evaluated by monitoring leakage current (LC) during the Inclined Plane Tracking (IPT) test in accordance with BS EN IEC 60587:2022. Statistical tools including Analysis of Variance (ANOVA), effect lists, Pareto charts, and half-normal plots were applied to assess factor significance and interactions. Regression modelling and 3D surface plots were used for predictive analysis and visualization. Results revealed that both individual factors and interactions significantly influenced SiR performance (p < 0.05). The optimized formulation of 50 pphr ATH, 0.5 pphr DCP, 70 rpm mixing speed, and 10 min mixing time achieved a desirability score with a Process Capability Index (Cpk) of 2.14, indicating a robust and reproducible process. Complementary analyses, including tracking depth, erosion length, weight loss, tensile strength, Fourier Transform Infrared Spectroscopy (FTIR), and Scanning Electron Microscopy (SEM), validated the findings. Increased mixing time reduced weight loss by 0.036%, whereas higher DCP content increased weight loss by 0.0405%. The optimized samples with the lowest LC exhibited superior chemical stability, improved surface morphology, and enhanced mechanical strength. This work demonstrates the effectiveness of DoE in optimizing both material formulation and processing parameters for SiR, providing valuable insights into mitigating HV insulation failures. By minimizing redundant experimentation, the approach supports more sustainable development of polymeric insulation technologies.
"Advanced Material Formulation and Processing Parameter Optimization for High-Voltage Polymeric Insulation Blends: A Two-Level Factorial and ANOVA Study," IEEE Access, vol. 14, pp. 1493-1507.
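For readers unfamiliar with two-level full factorial designs, the sketch below shows how a 2^4 design matrix over the four factors named in the abstract could be generated and analyzed. The low/high levels and the commented-out ANOVA call are illustrative assumptions, not the study's actual settings or data.

```python
# Sketch of a 2^4 two-level full factorial design over the four factors above.
# Levels are assumed for illustration; the response (leakage current from the
# IPT test) would come from the experiments, not from this code.
import itertools
import pandas as pd

levels = {
    "ATH": [40, 50],     # pphr, assumed low/high levels
    "DCP": [0.5, 1.5],   # pphr, assumed
    "speed": [50, 70],   # rpm, assumed
    "time": [5, 10],     # minutes, assumed
}
design = pd.DataFrame(list(itertools.product(*levels.values())),
                      columns=list(levels))   # 16 runs, all level combinations
print(design)

# Once the 16 responses are measured, main effects and interactions can be ranked
# with an ANOVA of a model containing up to two-way interactions, e.g.:
#   import statsmodels.formula.api as smf, statsmodels.api as sm
#   design["LC"] = measured_leakage_current
#   fit = smf.ols("LC ~ (ATH + DCP + speed + time)**2", data=design).fit()
#   print(sm.stats.anova_lm(fit, typ=2))
```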
Pub Date: 2025-12-25 | DOI: 10.1109/ACCESS.2025.3648539
Mouna Garai;Maha Sliti;Manel Mrabet;Lassaad Ben Ammar
Vehicular edge computing (VEC) is emerging as a key enabler for intelligent transportation systems that are both latency- and energy-sensitive. This survey is motivated by the need for a unified, KPI-driven view of AI-based vehicular computation offloading that explicitly links performance gains to sustainability objectives in smart cities. We synthesize recent advances in AI-powered offloading for vehicular networks, with emphasis on deep reinforcement learning (DRL) and multi-agent variants that learn adaptive, sequential policies under dynamic topology, fluctuating wireless capacity, and heterogeneous workloads. We propose a unified taxonomy that spans infrastructure-based, vehicle-assisted, and hybrid architectures; map offloading decisions to key performance dimensions (end-to-end latency, energy efficiency, reliability, throughput, and task-success rate); and formalize a minimal KPI model that links radio, compute, and caching components. The review compares algorithmic designs (DQN/DDPG/A3C/SAC, prioritized and federated variants, DRL+optimizer hybrids), scheduling granularities, and baseline choices, while examining reproducibility factors such as simulators, mobility models, and dataset availability. We further discuss integration with enabling technologies (cellular vehicle-to-everything (C-V2X)/NR-V2X, reconfigurable intelligent surfaces (RIS), UAV relays, edge caching), security and privacy considerations, and the sustainability implications of AI-driven offloading for intelligent urban environments. The paper concludes with open challenges, including non-stationarity, sim-to-real transfer, safety constraints, and explainability, and outlines a research agenda toward robust, accountable, and resource-efficient offloading policies deployable in real-world VEC systems.
"AI-Enabled Vehicular Data Offloading for Sustainable Smart Cities: Taxonomy, KPI Models, and Open Challenges," IEEE Access, vol. 14, pp. 1468-1492.
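The survey's own KPI model is not reproduced in the abstract; the equations below sketch one conventional way such a radio/compute/caching model can be written, purely as an assumed illustration (the symbols b, w, R, f_e, and p_h are placeholders, not the survey's notation).

```latex
% Illustrative form of a minimal offloading KPI model (an assumed formulation for
% exposition, not the survey's exact equations). A task of b bits and w CPU cycles
% is offloaded over a link of rate R to an edge server with CPU frequency f_e;
% a cache hit (probability p_h) skips the uplink transfer.
\[
\begin{aligned}
R &= B \log_2\!\Bigl(1 + \tfrac{P_{\mathrm{tx}}\, h}{N_0 B}\Bigr), \\
T_{\mathrm{off}} &= (1 - p_h)\,\frac{b}{R} + \frac{w}{f_e} + T_{\mathrm{queue}}, \\
E_{\mathrm{off}} &= (1 - p_h)\,P_{\mathrm{tx}}\,\frac{b}{R} + E_{\mathrm{idle}}.
\end{aligned}
\]
```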
Pub Date: 2025-12-24 | DOI: 10.1109/ACCESS.2025.3648171
Amrit Aryal;Santosh Giri;Sanjeeb Prasad Panday;Suman Sharma;Babu R. Dawadi;Sushant Chalise
In this paper, we present an efficient method for three-dimensional (3D) scene reconstruction from multiview images and 360° imaging using 3D Gaussian splatting. The proposed approach significantly improves memory efficiency and rendering speed compared to traditional Neural Radiance Fields (NeRF) based methods. It achieves up to 67% memory reduction, a threefold reduction in training time, and real-time rendering at over 89 frames per second (fps) while maintaining high visual fidelity, with Peak Signal-to-Noise Ratio (PSNR) up to 32 decibels (dB), on a consumer-grade NVIDIA RTX 3060 graphics card. The reconstruction pipeline integrates the Structure-from-Motion technique for camera pose estimation, followed by adaptive Gaussian optimization and model refinement. The final models are cleaned, compressed, and visualized in real time using Unreal Engine through the XV3DGS plugin, enabling immersive walkthroughs and educational applications. The experimental results on multiple data sets demonstrate the effectiveness of the method in terms of reconstruction accuracy, processing time, and deployment readiness. Additionally, we evaluate the impact of image acquisition strategies, training iterations, and dataset types on reconstruction quality and performance. The results indicate that using fewer, carefully chosen images from the most informative viewpoints can achieve high-fidelity reconstructions. This work provides a functional link between academic 3D reconstruction and real-time virtual deployment, offering a practical framework for cultural heritage preservation, simulation, visualization, and immersive exploration.
"Efficient 3D Scene Reconstruction From Multi-View RGB Images Using Optimized Gaussian Splatting," IEEE Access, vol. 14, pp. 1269-1286.
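PSNR, the fidelity metric quoted above (up to 32 dB), has a standard definition that can be computed as follows; this is a generic reference implementation, not the authors' evaluation code.

```python
# Peak Signal-to-Noise Ratio (in dB) between a rendered frame and a reference image.
import numpy as np

def psnr(rendered: np.ndarray, reference: np.ndarray, peak: float = 1.0) -> float:
    """Both inputs are float arrays scaled to [0, peak]; higher PSNR means higher fidelity."""
    mse = np.mean((rendered.astype(np.float64) - reference.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10((peak ** 2) / mse)

# Example: psnr(render, ground_truth) >= 32.0 corresponds to the fidelity reported above.
```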
Pub Date: 2025-12-24 | DOI: 10.1109/ACCESS.2025.3648172
Tobias Kranz;Vincent Schaaf;Tobias Guggenberger;Jens Strüker
Decentralized Finance (DeFi) promises to lay the groundwork for a more open financial system enabled by blockchain technology. Therein, stablecoins have recently gained momentum as regulated and trusted payment instruments, increasingly adopted for cross-border transactions and supported by initiatives such as the GENIUS Act in the U.S. and the European MiCAR framework. While stablecoins create the foundation of trust for linking DeFi with traditional finance, the ecosystem still depends heavily on cryptocurrency markets due to limited real-world asset integration. Existing research largely focuses on traditional securities and tradable assets, but scant attention has been paid to one of the world’s largest asset classes, real estate. To address this gap, we propose a framework for the tokenization of real estate for integration into the DeFi ecosystem. Using the Design Science Research (DSR) approach, we construct and evaluate our framework through expert interviews and smart contract simulations. The simulations validate technical feasibility and demonstrate efficiency gains, with batch transfers reducing transaction costs for portfolio purchases. Building on these evaluations, we derive design principles for the nascent field of real-world asset tokenization. These principles highlight the importance of covering the entire product range, pursuing end-to-end compliance, leveraging token standards for interoperability, and extending their functionality for efficiency and scalability. By combining regulatory, organizational, and technical perspectives, our work advances design knowledge for compliant integration of real-world assets into DeFi.
"From Bricks to Blocks: Designing a Framework for the Tokenization of Real Estate for DeFi," IEEE Access, vol. 14, pp. 4089-4109.
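The reported cost advantage of batch transfers follows from amortizing per-transaction overhead; the toy calculation below illustrates the effect with assumed gas figures rather than values from the paper's smart contract simulations.

```python
# Back-of-the-envelope illustration of why batching transfers lowers cost for
# portfolio purchases. Gas figures are assumptions for illustration only.
BASE_TX_GAS = 21_000        # fixed overhead paid once per transaction (assumed)
PER_TRANSFER_GAS = 50_000   # marginal cost of one token transfer (assumed)

def individual_cost(n_assets: int) -> int:
    # n separate transactions: each pays the base overhead plus one transfer.
    return n_assets * (BASE_TX_GAS + PER_TRANSFER_GAS)

def batched_cost(n_assets: int) -> int:
    # One transaction: the base overhead is paid once and shared by all transfers.
    return BASE_TX_GAS + n_assets * PER_TRANSFER_GAS

n = 10
saving = 1 - batched_cost(n) / individual_cost(n)
print(f"Estimated saving for a {n}-asset portfolio purchase: {saving:.0%}")
```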
Pub Date: 2025-12-23 | DOI: 10.1109/ACCESS.2025.3647622
Honglin Mou;Shenyang Xu;Jianhua Wei;Wenhua Ma;Fusu Dong;Rong Hu;Shiying Lin
Real-time physiological monitoring offers a promising tool for proactive safety management in high-risk construction environments, yet its practical use is hindered by the lack of reliable clinical outcome labels and strong inter-individual variability. This study proposes a weakly supervised health-risk forecasting framework that integrates clinical-style physiological scoring, analytic hierarchy process (AHP) weighting, unsupervised clustering, and supervised learning to enable early prediction of operational risk tiers. A total of 42 627 de-identified wristband measurements from 24 construction workers were analyzed, including heart rate, body temperature, systolic and diastolic blood pressure, and oxygen saturation. Composite risk indices were generated using guideline-informed scoring and AHP weighting and grouped into four risk tiers (Low, Medium, High, Extreme) via K-means clustering to serve as proxy outcome labels. XGBoost, Random Forest, and Logistic Regression models were evaluated using strict leave-one-worker-out cross-validation. Across unseen workers, the proposed framework achieved stable discrimination of Extreme-risk states, with recall approaching 0.95 and AUC exceeding 0.97. Bootstrap analysis further confirmed the robustness of Extreme-risk detection under irregular sampling and class imbalance. These results indicate the feasibility of reliable early risk warning using wearable physiological data for construction safety management.
"A Clinically-Guided Machine Learning Framework for Operational Health Risk Tier Forecasting in Construction Workers Using Wearable Data," IEEE Access, vol. 14, pp. 4208-4221.
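The leave-one-worker-out protocol mentioned above can be expressed with scikit-learn's LeaveOneGroupOut splitter. The sketch below is a generic illustration: the classifier, the feature matrix, and the encoding of the Extreme tier as class 3 are assumptions, not the study's exact pipeline.

```python
# Sketch of leave-one-worker-out evaluation: train on all workers but one,
# test on the held-out worker, and repeat for every worker.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import recall_score
from sklearn.model_selection import LeaveOneGroupOut

def leave_one_worker_out(X, y, worker_ids, model=None):
    """X, y, worker_ids are NumPy arrays; y holds the proxy risk-tier labels
    (here the Extreme tier is assumed to be encoded as class 3)."""
    model = model or RandomForestClassifier(n_estimators=200)
    recalls = []
    for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=worker_ids):
        model.fit(X[train_idx], y[train_idx])
        pred = model.predict(X[test_idx])
        # Recall on the highest tier for the held-out worker.
        recalls.append(recall_score(y[test_idx], pred, labels=[3],
                                    average="macro", zero_division=0))
    return float(np.mean(recalls))
```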
Pub Date: 2025-12-23 | DOI: 10.1109/ACCESS.2025.3647479
Talal Ashraf Butt;Muhammad Iqbal;Noor Arshad
Artificial intelligence systems increasingly operate in high-risk domains where regulatory frameworks such as the EU AI Act, NIST AI RMF, and ISO/IEC 42001 impose explicit evidence and accountability requirements. However, existing engineering practice remains largely manual, retrospective, and decoupled from operational pipelines, resulting in inconsistent provenance, limited reproducibility, and inadequate clause-level traceability. This paper introduces Governance as Evidence for AI Pipelines (GEAP), a pipeline-native governance framework that expresses regulatory and organizational policies as machine-interpretable Governance as Code rules. GEAP integrates governance directly into a unified SDLC–MLOps execution spine by enforcing promotion decisions at five gates—Data, Training, Validation, Release, and Operations—each of which emits signed, content-addressed artifacts into a tamper-evident Evidence Backbone. These artifacts are assembled into a per-run Conformity Bundle, from which the proposed Clause-to-Artifact Traceability mechanism deterministically renders clause coverage across multiple regulatory regimes without manual crosswalks or duplicated documentation. The framework further introduces quantitative governance metrics that measure adequacy, completeness, stability, and evidence hygiene. A detailed synthetic case study of an intensive-care sepsis early-warning system demonstrates GEAP’s ability to standardize promotion control, detect policy violations, and produce replayable, audit-ready compliance manifests in a high-risk clinical context. The results show that governance can operate as a deterministic, reproducible, and verifiable pipeline property rather than an external documentation exercise, enabling more disciplined, transparent, and accountable AI deployment practices.
"From Policy to Pipeline: A Governance Framework for AI Development and Operations Pipelines," IEEE Access, vol. 14, pp. 1373-1397.
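As a rough illustration of the signed, content-addressed evidence records described above, a promotion gate might emit something like the following. The record layout and the use of HMAC as the signing primitive are assumptions for exposition, not the GEAP specification.

```python
# Minimal sketch of a content-addressed, signed evidence record emitted at a gate.
import hashlib
import hmac
import json
import time

def emit_evidence(gate: str, artifact_bytes: bytes, signing_key: bytes) -> dict:
    digest = hashlib.sha256(artifact_bytes).hexdigest()   # content address of the artifact
    record = {
        "gate": gate,                  # e.g. "Data", "Training", "Validation",
        "artifact_sha256": digest,     # "Release", or "Operations"
        "timestamp": time.time(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(signing_key, payload, hashlib.sha256).hexdigest()
    return record

# A promotion gate would refuse to advance the pipeline unless every required
# record verifies against the stored artifact bytes and its signature.
```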
Pub Date: 2025-12-23 | DOI: 10.1109/ACCESS.2025.3647525
Ipeleng Labius Machele;Adeiza J. Onumanyi;Adnan M. Abu-Mahfouz;Anish Kurien
In this paper, we introduce a multidimensional discriminant analysis-based method (MDAM), which is a modality testing method designed to determine whether an unknown input multidimensional time series is unimodal or multimodal. Existing unimodality testing methods face several key limitations: 1) they are primarily designed for unidimensional data and struggle with multidimensional extensions, 2) they rely on probability density function (PDF)-based approaches that fail in the presence of overlapping distributions, skewed data, and noise, and 3) they often misinterpret multimodal structures due to misleading PDF-based marginal analysis. To address these challenges, MDAM leverages a novel function that integrates the between-class mean and variance variables using a discriminant analysis approach. This distribution-independent method effectively detects modality variations across both mean and variance parameters, making it well-suited for high-dimensional and complex datasets. Comparative analysis based on synthetic and real datasets revealed that MDAM consistently outperformed five state-of-the-art techniques such as Folding, Runt, KS, DAT, and Dip, across unidimensional, multidimensional, balanced, unbalanced, unimodal, and multimodal datasets. Notably, MDAM achieved a high average accuracy of 99.8% across all dataset types, with a 20% to 40% accuracy improvement over the next-best algorithms in multimodal and mixed distributions. Its robustness across various evaluation metrics, including precision, recall, and F1 score, further establishes MDAM as a reliable tool for modality testing in time series datasets.
"MDAM: A Multidimensional Discriminant Analysis-Based Method for Time Series Modality Testing," IEEE Access, vol. 14, pp. 4127-4151.
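MDAM's actual statistic is not reproduced in the abstract. The sketch below only illustrates the general discriminant-analysis intuition it refers to, i.e., scoring how strongly a two-group split separates a sample in mean and variance; it should not be read as the authors' method.

```python
# Generic illustration of a discriminant-style separation score for modality:
# split a 1-D sample into two candidate groups and measure their separation.
# This is NOT the MDAM statistic, only a sketch of the underlying idea.
import numpy as np
from sklearn.cluster import KMeans

def separation_score(x: np.ndarray) -> float:
    """Fisher-style ratio of between-class to within-class spread for a 1-D sample
    split into two clusters; large values hint at more than one mode."""
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(x.reshape(-1, 1))
    g0, g1 = x[labels == 0], x[labels == 1]
    between = (g0.mean() - g1.mean()) ** 2
    within = g0.var() + g1.var()
    return float(between / within) if within > 0 else float("inf")

rng = np.random.default_rng(0)
print(separation_score(rng.normal(0, 1, 500)))                      # unimodal: small score
print(separation_score(np.concatenate([rng.normal(-4, 1, 250),
                                       rng.normal(4, 1, 250)])))    # bimodal: large score
```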
Pub Date: 2025-12-22 | DOI: 10.1109/ACCESS.2025.3647283
C. Yogalakshmi;B. J. Balamurugan
Let $G(V,E)$ be a simple, non-trivial, connected graph with vertex set $V(G)$ and edge set $E(G)$. A vertex coloring $c: V(G) \to \mathbb{N}$ of $G$ is called a sigma coloring if $\sigma(u) \neq \sigma(v)$ for all $uv \in E(G)$, where $\sigma(u)$ denotes the color sum of the vertex $u \in V(G)$. The color sum $\sigma(u)$ is defined as the sum of the colors assigned to the vertices adjacent to $u$. The minimum number of colors required for such a coloring is called the sigma chromatic number of $G$ and is denoted by $\sigma(G)$. In this article, sigma coloring is systematically investigated for a structured family of graphs $G_{p,r}$, constructed by taking a complete graph $K_{p}$ and $p$ disjoint copies of the graph $\overline{K_{r}}$, such that the $i^{th}$ vertex of $K_{p}$ is adjacent to all the vertices of the $i^{th}$ copy of $\overline{K_{r}}$, for $p \geq 3$ and $r \geq 1$. In addition, sigma coloring is analyzed for certain Cartesian product graphs, namely $P_{n} \square P_{m}$, $P_{n} \square C_{q}$, $C_{p} \square C_{q}$, and $K_{p} \square P_{m}$, where explicit coloring algorithms are presented and shown to scale with graph size. A general lower bound is also established for the sigma chromatic number of Cartesian products of complete graphs with regular and biregular graphs. As a significant theoretical contribution, this article disproves the conjecture proposed by Luzon et al. (2015), which states that every connected 4-regular graph of order at least six has a sigma chromatic number of three.
"Optimization of Sigma Coloring in Cartesian Products of Graphs and Lower Bounds for Sigma Chromatic Numbers of Graphs Containing Complete Subgraphs," IEEE Access, vol. 14, pp. 4162-4179.
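The definition of a sigma coloring given above translates directly into a small checker; the graphs and colorings below are toy examples for illustration, not constructions from the article.

```python
# Check whether a vertex coloring is a sigma coloring: adjacent vertices must have
# different color sums, where a vertex's color sum is the sum of its neighbors' colors.
def is_sigma_coloring(adj, color):
    """adj: dict mapping each vertex to the set of its neighbors;
    color: dict mapping each vertex to a positive integer color."""
    sigma = {v: sum(color[u] for u in adj[v]) for v in adj}
    return all(sigma[u] != sigma[v] for v in adj for u in adj[v])

# The path P3 colored with a single color is already a sigma coloring
# (color sums 1, 2, 1), so its sigma chromatic number is 1.
P3 = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}
print(is_sigma_coloring(P3, {"a": 1, "b": 1, "c": 1}))    # True

# The 4-cycle with one color fails: every vertex gets color sum 2.
C4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(is_sigma_coloring(C4, {v: 1 for v in C4}))          # False
```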
Pub Date: 2025-12-22 | DOI: 10.1109/ACCESS.2025.3647316
Adi Munandir;Aurik Gustomo;Prawira Fajarindra Belgiawan
The Fourth Industrial Revolution has largely bypassed traditional, asset-heavy industries like cement manufacturing, which face significant challenges in digitizing operations while managing innovation resistance and multi-generational workforces, particularly in emerging economies. This study investigates how digital resource orchestration can overcome these barriers to create a competitive advantage, employing a single case study design with five-year longitudinal observation and twelve in-depth interviews from Indonesia’s largest cement manufacturer. The research reveals that innovation resistance is not a temporary hurdle but a persistent institutional feature that must be systematically managed. A comprehensive framework is developed, demonstrating that successful transformation requires orchestrating people assets (digital leadership, capability development), process assets (governance, resource mechanisms), and technology assets (infrastructure, integration). The findings show that organizations progress through five maturity levels—from Traditional to Transformed—by applying sequential orchestration states that address specific resistance patterns at each stage. This study contributes to digital transformation theory by reconceptualizing innovation resistance as an organizational capability and provides an empirically grounded model for traditional industries seeking to bridge the digital divide.
"Breaking the Digital Divide: How Traditional Cement Manufacturing Creates Competitive Advantage Through Strategic Resource Orchestration," IEEE Access, vol. 14, pp. 3977-3998.