Pub Date: 2026-03-01 | Epub Date: 2026-01-21 | DOI: 10.1016/j.array.2026.100682
Evaluation of tissue-engineered blood vessel with ultrasound computed tomography
Yichuan Tang, Enxhi Jaupi, Srikar Nekkanti, William G. DeMaria, Marsha W. Rolle, Haichong K. Zhang
Tissue-engineered blood vessels (TEBVs) represent a critical advancement in vascular medicine, offering transformative potential in drug testing, regenerative therapies, and disease modeling. Current evaluation methods, however, rely heavily on destructive techniques such as histology, which preclude further use of samples and limit real-time monitoring. Ultrasound Computed Tomography (USCT) emerges as a promising alternative, enabling non-destructive, high-resolution imaging within bioreactors. While prior work has demonstrated the feasibility of USCT for TEBV monitoring using needle and tubing phantoms, this study advances the field by imaging real TEBV samples and employing histological analysis as the ground truth for validation. This paper utilizes a prototype USCT system that achieves comprehensive 360-degree reconstructions of TEBV cross-sections. Validated through both needle-phantom studies and histology comparisons, the system demonstrates high accuracy, with an average measurement error of 0.03 mm, and adaptability within bioreactor environments. Our results underscore USCT’s capacity for non-destructive TEBV evaluation, paving the way for enhanced monitoring during cultivation. Future developments aim to refine image reconstruction and expand clinical applications.
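As a concrete illustration of the reported accuracy metric, the sketch below averages the absolute difference between hypothetical USCT-derived and histology-derived wall-thickness measurements; the numbers and the numpy workflow are illustrative assumptions, not the paper's data or code.

```python
# Hypothetical sketch: average absolute error between USCT-derived and
# histology-derived wall-thickness measurements (values are made up).
import numpy as np

usct_mm = np.array([0.51, 0.48, 0.53, 0.50])       # wall thickness from USCT reconstruction
histology_mm = np.array([0.49, 0.50, 0.50, 0.52])  # ground-truth thickness from histology

mean_abs_error = np.mean(np.abs(usct_mm - histology_mm))
print(f"average measurement error: {mean_abs_error:.2f} mm")
```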
{"title":"Evaluation of tissue-engineered blood vessel with ultrasound computed tomography","authors":"Yichuan Tang , Enxhi Jaupi , Srikar Nekkanti , William G. DeMaria , Marsha W. Rolle , Haichong K. Zhang","doi":"10.1016/j.array.2026.100682","DOIUrl":"10.1016/j.array.2026.100682","url":null,"abstract":"<div><div>Tissue-engineered blood vessels (TEBVs) represent a critical advancement in vascular medicine, offering transformative potential in drug testing, regenerative therapies, and disease modeling. Current evaluation methods, however, rely heavily on destructive techniques such as histology, which preclude further use of samples and limit real-time monitoring. Ultrasound Computed Tomography (USCT) emerges as a promising alternative, enabling non-destructive, high-resolution imaging within bioreactors. While prior work has demonstrated the feasibility of USCT for TEBV monitoring using needle and tubing phantoms, this study advances the field by imaging real TEBV samples and employing histological analysis as the ground truth for validation. This paper utilizes a prototype USCT system that achieve comprehensive 360-degree reconstructions of TEBV cross-sections. Validated through both needle-phantom studies and histology comparisons, the system demonstrates high accuracy with an average measurement error of 0.03 mm and adaptability within bioreactor environments. Our results underscore USCT’s capacity for non-destructive TEBV evaluation, paving the way for enhanced monitoring during cultivation. Future developments aim to refine image reconstruction and expand clinical applications.</div></div>","PeriodicalId":8417,"journal":{"name":"Array","volume":"29 ","pages":"Article 100682"},"PeriodicalIF":4.5,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146184696","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-03-01 | Epub Date: 2026-02-06 | DOI: 10.1016/j.array.2026.100706
Deep learning models for straddle carriers: Predictive maintenance
Pooja Mudbhatkal, Martti Juhola, Mikko Asikainen, SantoshKumar Patel
Predictive maintenance is the key to decreasing downtime, guaranteeing smooth operations, and raising productivity in machine maintenance; with predictive maintenance, the need for emergency maintenance decreases. The goal of this study was to forecast spreader problems in the straddle carriers used by Cargotec (Kalmar). Straddle carriers are machines used to pick and place shipping containers; the pick-and-ground action is carried out by the spreader, which is a part of the straddle carrier. The investigation was conducted using straddle carrier logs from the carriers' on-board automation systems. With different training times, all four of the advanced deep learning models evaluated were able to minimize false positives and false negatives and accurately forecast failures.
This study gives a thorough overview of different deep learning models in the context of predictive maintenance, together with an assessment of the advantages and disadvantages of the models employed.
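To make the evaluation criterion concrete, here is a minimal sketch of counting false positives and false negatives for a binary failure forecast; the labels, predictions, and use of scikit-learn are assumptions for illustration, since the paper's models and logs are not reproduced here.

```python
# Illustrative sketch: scoring a failure forecast against logged outcomes.
from sklearn.metrics import confusion_matrix

y_true = [0, 0, 1, 1, 0, 1, 0, 0]  # 1 = spreader failure occurred in the window
y_pred = [0, 0, 1, 0, 0, 1, 1, 0]  # 1 = model forecast a failure

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(f"false positives={fp}, false negatives={fn}, true positives={tp}, true negatives={tn}")
```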
{"title":"Deep learning models for straddle carriers: Predictive maintenance","authors":"Pooja Mudbhatkal , Martti Juhola , Mikko Asikainen , SantoshKumar Patel","doi":"10.1016/j.array.2026.100706","DOIUrl":"10.1016/j.array.2026.100706","url":null,"abstract":"<div><div>The secret to decreasing downtime, guaranteeing smooth operations, and raising productivity in machine maintenance is predictive maintenance. With predictive maintenance, the need for emergency maintenance decreases. The goal of this study was to forecast spreader problems with the straddle carriers that Cargotec (Kalmar) uses. Machines called \"straddle carriers\" are used to pick and place shipping containers. The pick and ground action is carried out by the spreader, which is a part of the straddle carrier. The investigation was conducted using straddle carrier logs from their on-board automation systems. With different training times, all four of the advanced deep learning models were able to minimize false positives and false negatives and accurately forecast failures.</div><div>This study gives a thorough overview of different deep learning models in the context of predictive maintenance, as well as a comprehension of the advantages and disadvantages of the models that were employed.</div></div>","PeriodicalId":8417,"journal":{"name":"Array","volume":"29 ","pages":"Article 100706"},"PeriodicalIF":4.5,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146184695","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-03-01 | Epub Date: 2026-01-31 | DOI: 10.1016/j.array.2026.100701
Efficient tool path computing for Industry 5.0: Application to turning lathe machining
Héctor Migallón, Antonio Jimeno-Morenilla, Eduard Duta-Costache, José-Luis Sánchez-Romero
This paper presents an efficient approach to toolpath generation tailored to the needs of Industry 5.0, with a focus on turning lathe machining. The study addresses the challenge of rapidly and accurately generating helical toolpaths in personalized manufacturing, where traditional sequential methods often become computational bottlenecks. To overcome this limitation, we propose efficient parallel implementations of the Virtual Digitizing (VD) algorithm, specifically designed to accelerate the computation of machining trajectories on both multicore and manycore architectures. The multicore implementation achieves notable speedups, especially when execution is properly tuned. The manycore strategy explores both asynchronous (coarse-grained) and synchronous (fine-grained) execution models. In the asynchronous method, independent trajectory computations are assigned to separate CUDA threads, whereas the synchronous method further parallelizes the internal processing of each trajectory point, providing finer computational granularity. Experimental evaluations conducted on authentic industrial shoe last models reveal notable gains in computational efficiency. The manycore implementation achieves up to 70x acceleration on low-end GPUs, over 80x on high-range devices, and over 270x on state-of-the-art GPU devices when compared to their respective CPU-based computations. Although the synchronous method introduces additional complexity, it delivers the best performance on powerful GPU platforms, whereas the asynchronous method is better suited for resource-constrained systems. Therefore, the study concludes that the optimal parallelization strategy depends on the available hardware.
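The contrast between the coarse-grained and fine-grained execution models can be sketched in plain Python, with CPU worker processes standing in for CUDA threads; the point and whole_trajectory functions below are placeholders, not the Virtual Digitizing algorithm.

```python
# Conceptual sketch of the two granularities described above, using CPU
# processes in place of CUDA threads. The math is a stand-in, not VD.
from multiprocessing import Pool
import math

def point(traj_id: int, i: int) -> tuple:
    # Stand-in for computing one helical toolpath point.
    t = i * 0.01
    return (math.cos(t + traj_id), math.sin(t + traj_id), t)

def whole_trajectory(traj_id: int) -> list:
    return [point(traj_id, i) for i in range(1000)]

if __name__ == "__main__":
    with Pool() as pool:
        # Coarse-grained ("asynchronous"): one task per trajectory.
        coarse = pool.map(whole_trajectory, range(8))
        # Fine-grained ("synchronous"): parallelize over the points of one trajectory.
        fine = pool.starmap(point, [(0, i) for i in range(1000)])
    print(len(coarse), len(fine))
```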
{"title":"Efficient tool path computing for Industry 5.0: Application to turning lathe machining","authors":"Héctor Migallón , Antonio Jimeno-Morenilla , Eduard Duta-Costache , José-Luis Sánchez-Romero","doi":"10.1016/j.array.2026.100701","DOIUrl":"10.1016/j.array.2026.100701","url":null,"abstract":"<div><div>This paper presents an efficient approach to toolpath generation tailored to the needs of Industry 5.0, with a focus on turning lathe machining. The study addresses the challenge of rapidly and accurately generating helical toolpaths in personalized manufacturing, where traditional sequential methods often become computational bottlenecks. To overcome this limitation, we propose efficient parallel implementations of the Virtual Digitizing (VD) algorithm, specifically designed to accelerate the computation of machining trajectories on both multicore and manycore architectures. The multicore implementation achieves notable speedups, especially when execution is properly tuned. The manycore strategy explores both asynchronous (coarse-grained) and synchronous (fine-grained) execution models. In the asynchronous method, independent trajectory computations are assigned to separate CUDA threads, whereas the synchronous method further parallelizes the internal processing of each trajectory point, providing finer computational granularity. Experimental evaluations conducted on authentic industrial shoe last models reveal notable gains in computational efficiency. The manycore implementation achieves up to <span><math><mrow><mn>70</mn><mi>x</mi></mrow></math></span> acceleration on low-end GPUs, over <span><math><mrow><mn>80</mn><mi>x</mi></mrow></math></span> on high-range devices and over <span><math><mrow><mn>270</mn><mi>x</mi></mrow></math></span> on state-of-the-art GPU devices when compared to their respective CPU-based computations. Although the synchronous method introduces additional complexity, it delivers the best performance on powerful GPU platforms, whereas the asynchronous method is better suited for resource-constrained systems. Therefore, the study concludes that the optimal parallelization strategy depends on the available hardware.</div></div>","PeriodicalId":8417,"journal":{"name":"Array","volume":"29 ","pages":"Article 100701"},"PeriodicalIF":4.5,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146184693","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-03-01 | Epub Date: 2026-01-30 | DOI: 10.1016/j.array.2026.100684
Model-based evaluation of synthetic financial time series data: A comparative study with multi-metric validation
Patrick Naivasha, George Musumba, Patrick Gikunda, John Wandeto
This research presents a behaviorally informed framework for synthesizing financial time-series data, specifically designed to emulate the complex dynamics of foreign exchange markets. Deviating from conventional generative adversarial networks (GANs) or purely statistical distribution-matching, the proposed methodology adopts a game-theoretic architecture. This framework integrates trader-interaction dynamics, stochastic strategies, and information asymmetry, treating the market as a strategic participant to reproduce authentic volatility patterns and structural dependencies. To ensure numerical stability across extensive simulations, the study introduces a uniform upward scaling procedure and controlled initialization, preventing pathological price behaviors without compromising the underlying statistical properties. The framework's analytical fidelity was rigorously evaluated against a suite of econometric and machine learning models, including ARIMA, XGBoost, LSTM, N-BEATS, and DLinear. Experimental results involving 12,960 hourly observations demonstrate that the synthetic data maintains strong alignment with empirical benchmarks. DLinear emerged as the superior model, exhibiting exceptional stability with an R² frequently exceeding 0.98 and a Mean Absolute Scaled Error (MASE) near unity. While XGBoost and N-BEATS yielded competitive results, ARIMA and LSTM showed anticipated performance degradation due to temporal noise. Comprehensive residual diagnostics, including Ljung-Box tests and stationarity assessments, confirm that the generated series are behaviorally consistent and analytically reliable. This framework thus provides a robust foundation for comparative modeling and experimental financial research.
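For reference, the MASE figure quoted above scales the forecast's mean absolute error by the in-sample error of a naive one-step forecast; the following sketch, with made-up series, shows that computation.

```python
# Minimal sketch of Mean Absolute Scaled Error (MASE): forecast MAE divided
# by the in-sample MAE of a naive one-step forecast. Arrays are illustrative.
import numpy as np

def mase(y_train: np.ndarray, y_true: np.ndarray, y_pred: np.ndarray) -> float:
    naive_mae = np.mean(np.abs(np.diff(y_train)))  # |y_t - y_{t-1}| in-sample
    return float(np.mean(np.abs(y_true - y_pred)) / naive_mae)

y_train = np.array([1.10, 1.12, 1.11, 1.15, 1.14])
y_true = np.array([1.16, 1.18])
y_pred = np.array([1.15, 1.19])
print(round(mase(y_train, y_true, y_pred), 3))  # MASE near 1 matches a naive baseline
```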
{"title":"Model-based evaluation of synthetic financial time series data: A comparative study with multi-metric validation","authors":"Patrick Naivasha, George Musumba, Patrick Gikunda, John Wandeto","doi":"10.1016/j.array.2026.100684","DOIUrl":"10.1016/j.array.2026.100684","url":null,"abstract":"<div><div>This research presents a behaviorally informed framework for synthesizing financial time-series data, specifically designed to emulate the complex dynamics of foreign exchange markets. Deviating from conventional generative adversarial networks (GANs) or purely statistical distribution-matching, the proposed methodology adopts a game-theoretic architecture. This framework integrates trader-interaction dynamics, stochastic strategies, and information asymmetry, treating the market as a strategic participant to reproduce authentic volatility patterns and structural dependencies. To ensure numerical stability across extensive simulations, the study introduces a uniform upward scaling procedure and controlled initialization, preventing pathological price behaviors without compromising the underlying statistical properties. The framework's analytical fidelity was rigorously evaluated against a suite of econometric and machine learning models, including ARIMA, XGBoost, LSTM, N-BEATS, and DLinear. Experimental results involving 12,960 hourly observations demonstrate that the synthetic data maintains strong alignment with empirical benchmarks. DLinear emerged as the superior model, exhibiting exceptional stability with an R<sup>2</sup> frequently exceeding 0.98 and a Mean Absolute Scaled Error (MASE) near unity. While XGBoost and N-BEATS yielded competitive results, ARIMA and LSTM showed anticipated performance degradation due to temporal noise. Comprehensive residual diagnostics, including Ljung-Box tests and stationarity assessments, confirm that the generated series are behaviorally consistent and analytically reliable. This framework thus provides a robust foundation for comparative modeling and experimental financial research.</div></div>","PeriodicalId":8417,"journal":{"name":"Array","volume":"29 ","pages":"Article 100684"},"PeriodicalIF":4.5,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146184708","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-03-01 | Epub Date: 2026-02-17 | DOI: 10.1016/j.array.2026.100718
Label-efficient sleep staging from multi-channel EEG with self-supervised contrastive learning and iterative self-distillation
Jie Ouyang, Peng Xiao, Jingxue Chen, Fried-Michael Dahlweid, Yiming Chen, Yuanwang Wei, Zou Lai
Manual sleep stage classification from polysomnography (PSG) is labor-intensive and subject to expert variability, motivating automated and deployment-oriented solutions for clinical use. We present a multi-channel self-supervised learning (SSL) contrastive framework combined with iterative self-distillation for accurate and label-efficient sleep staging. The approach employs a dual-branch convolutional network that processes electroencephalogram (EEG) channels independently and integrates complementary information via a cross-attention fusion module. During pre-training, a contrastive objective leverages temporal adjacency to form positive pairs and maintains hard negatives dynamically to learn robust representations from unlabeled data. Subsequent fine-tuning with minimal labels is enhanced by iterative self-distillation through pseudo-label refinement. On the Sleep-EDF Expanded (SleepEDF-v2) dataset, the method achieves strong performance with only 1% labeled data (accuracy 76.31%, macro-F1 66.53%), competitive against existing SSL baselines. The resulting compact model and single-site training setup align with practical constraints in hospitals and wearable scenarios, reducing annotation burden and supporting secure, scalable clinical deployment.
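A minimal sketch of the contrastive pre-training objective, assuming temporally adjacent epochs form positive pairs and in-batch items serve as negatives; the random embeddings and InfoNCE-style loss below are illustrative PyTorch stand-ins, not the paper's dual-branch encoder or its dynamic hard-negative scheme.

```python
# Hedged sketch: adjacent EEG epochs as positive pairs under an InfoNCE loss.
import torch
import torch.nn.functional as F

def info_nce(anchor: torch.Tensor, positive: torch.Tensor, temperature: float = 0.1):
    a = F.normalize(anchor, dim=1)      # (B, D) embeddings of epochs t
    p = F.normalize(positive, dim=1)    # (B, D) embeddings of epochs t+1 (positives)
    logits = a @ p.t() / temperature    # off-diagonal entries act as in-batch negatives
    targets = torch.arange(a.size(0))   # i-th anchor matches i-th positive
    return F.cross_entropy(logits, targets)

emb_t = torch.randn(32, 128)   # batch of encoded epochs (placeholder encoder output)
emb_t1 = torch.randn(32, 128)  # encodings of the temporally adjacent epochs
print(info_nce(emb_t, emb_t1).item())
```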
{"title":"Label-efficient sleep staging from multi-channel EEG with self-supervised contrastive learning and iterative self-distillation","authors":"Jie Ouyang , Peng Xiao , Jingxue Chen , Fried-Michael Dahlweid , Yiming Chen , Yuanwang Wei , Zou Lai","doi":"10.1016/j.array.2026.100718","DOIUrl":"10.1016/j.array.2026.100718","url":null,"abstract":"<div><div>Manual sleep stage classification from polysomnography (PSG) is labor-intensive and subject to expert variability, motivating automated and deployment-oriented solutions for clinical use. We present a multi-channel <em>self-supervised learning (SSL)</em> contrastive framework combined with iterative self-distillation for accurate and <em>label-efficient</em> sleep staging. The approach employs a dual-branch convolutional network that processes electroencephalogram (EEG) channels independently and integrates complementary information via a cross-attention fusion module. During pre-training, a contrastive objective leverages temporal adjacency to form positive pairs and maintains hard negatives dynamically to learn robust representations from unlabeled data. Subsequent fine-tuning with minimal labels is enhanced by iterative self-distillation through pseudo-label refinement. On the Sleep-EDF Expanded (SleepEDF-v2) dataset, the method achieves strong performance with only 1% labeled data (accuracy 76.31%, macro-F1 66.53%), competitive against existing SSL baselines. The resulting compact model and single-site training setup align with practical constraints in hospitals and wearable scenarios, reducing annotation burden and supporting secure, scalable clinical deployment.</div></div>","PeriodicalId":8417,"journal":{"name":"Array","volume":"29 ","pages":"Article 100718"},"PeriodicalIF":4.5,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147396118","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-03-01 | Epub Date: 2026-01-09 | DOI: 10.1016/j.array.2025.100663
Ladderpath: An efficient algorithm for revealing nested hierarchy in sequences
Jingwen Zhang, Xiao Xie, Xiaodong Deng, Jing Wang, Xiaojun Hu, Yiping Wang, Hu Zhu, Fengyao Zhai, Yu Liu
Ladderpath, rooted in Algorithmic Information Theory (AIT), uncovers nested and hierarchical structures in symbolic sequences through minimal compositional reconstruction. It approximates Kolmogorov complexity by identifying reusable subsequences that enable efficient reconstruction of complex sequences. The proposed algorithm improves upon earlier implementations by introducing key optimizations in substring enumeration and reuse filtering, allowing it to scale to sequence systems with tens or even hundreds of millions of characters. Ladderpath produces a standardized JSON format that encodes compositional dependencies and hierarchies, and supports a variety of downstream tasks, including compression, shared motif extraction, cross-sequence similarity analysis, and structural visualization. Its domain-agnostic design enables broad applicability across areas such as genomics, natural language, symbolic computation, and program analysis. Beyond providing a practical approximation of complexity, Ladderpath also offers structural insight into the modular grammar of sequences, pointing to a deeper connection between algorithmic complexity and compositional hierarchies observed in real-world data.
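The idea of factoring out reusable subsequences can be illustrated with a greedy toy, shown below; this is a deliberate simplification, not the Ladderpath algorithm or its JSON output format.

```python
# Toy sketch of the core idea: find a reusable subsequence and factor it out,
# so the sequence can be rebuilt from fewer building blocks.
from collections import Counter

def most_reused_substring(s: str, k: int) -> str:
    counts = Counter(s[i:i + k] for i in range(len(s) - k + 1))
    sub, n = counts.most_common(1)[0]
    return sub if n > 1 else ""

seq = "ABCABCABDABC"
block = most_reused_substring(seq, 3)
print(block, "->", seq.replace(block, "@"))  # 'ABC' -> '@@ABD@' plus the rule @=ABC
```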
{"title":"Ladderpath: An efficient algorithm for revealing nested hierarchy in sequences","authors":"Jingwen Zhang , Xiao Xie , Xiaodong Deng , Jing Wang , Xiaojun Hu , Yiping Wang , Hu Zhu , Fengyao Zhai , Yu Liu","doi":"10.1016/j.array.2025.100663","DOIUrl":"10.1016/j.array.2025.100663","url":null,"abstract":"<div><div>Ladderpath, rooted in Algorithmic Information Theory (AIT), uncovers nested and hierarchical structures in symbolic sequences through minimal compositional reconstruction. It approximates Kolmogorov complexity by identifying reusable subsequences that enable efficient reconstruction of complex sequences. The proposed algorithm improves upon earlier implementations by introducing key optimizations in substring enumeration and reuse filtering, allowing it to scale to sequence systems with tens or even hundreds of millions of characters. Ladderpath produces a standardized JSON format that encodes compositional dependencies and hierarchies, and supports a variety of downstream tasks, including compression, shared motif extraction, cross-sequence similarity analysis, and structural visualization. Its domain-agnostic design enables broad applicability across areas such as genomics, natural language, symbolic computation, and program analysis. Beyond providing a practical approximation of complexity, Ladderpath also offers structural insight into the modular grammar of sequences, pointing to a deeper connection between algorithmic complexity and compositional hierarchies observed in real-world data.</div></div>","PeriodicalId":8417,"journal":{"name":"Array","volume":"29 ","pages":"Article 100663"},"PeriodicalIF":4.5,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145973362","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-03-01 | Epub Date: 2025-12-20 | DOI: 10.1016/j.array.2025.100655
Smart grid privacy data encryption and sharing algorithm based on multi-key homomorphic encryption
Xuehai Chen, Yantong Lin, Zhimin Liang, Zhenmin He
To realize effective planning and regulation of smart grids, it is necessary to ensure the security of smart grid private data sharing. A smart grid privacy data encryption and sharing algorithm based on multi-key homomorphic encryption is proposed. The algorithm starts from the smart meters' private data collected at the device layer and encrypts the data using the multi-key homomorphic encryption method of the key generation center. The computing layer interacts with the smart meters within its coverage area through fog nodes. After the data collected from the smart meters are authenticated and aggregated, they are transmitted to the cloud storage layer for storage. The data stored in the cloud storage layer are encrypted using multi-key homomorphic encryption methods and transmitted to the server; after decryption, the server can obtain the details of the private data of each subarea, realizing the encryption and sharing of smart grid privacy data. The test results show that the algorithm has good encryption performance, with encryption times all within 700 ms. The data decryption probability is above 99.22%, the communication overhead required for shared transmission is above 2000 bits in all cases, and the intrusion rate is within 0.3%, ensuring the safe sharing of private data.
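As an illustration of the homomorphic property such schemes rely on, the sketch below aggregates encrypted meter readings with the single-key Paillier scheme from the phe (python-paillier) package; the paper's multi-key construction, fog-node authentication, and layered architecture are not reproduced here.

```python
# Illustrative single-key sketch of homomorphic aggregation with `phe`.
# The paper uses a multi-key scheme; this toy shows only the core property:
# ciphertexts can be summed without decrypting individual readings.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair()

meter_readings = [120, 85, 310]                      # per-meter consumption (made up)
ciphertexts = [public_key.encrypt(r) for r in meter_readings]

encrypted_sum = sum(ciphertexts[1:], ciphertexts[0])  # aggregation on ciphertexts only
print(private_key.decrypt(encrypted_sum))             # 515, without exposing single meters
```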
{"title":"Smart grid privacy data encryption and sharing algorithm based on multi-key homomorphic encryption","authors":"Xuehai Chen , Yantong Lin , Zhimin Liang , Zhenmin He","doi":"10.1016/j.array.2025.100655","DOIUrl":"10.1016/j.array.2025.100655","url":null,"abstract":"<div><div>To realize the effective planning and regulation of smart grids, it is necessary to ensure the smart grid private data sharing's security. A smart grid privacy data encryption and sharing algorithm based on multi-key homomorphic encryption is proposed. This algorithm is based on the smart meters' private data collected at the device layer, and encrypts the data by using the multi-key homomorphic encryption method of the key generation center. The computing layer interacts with smart meters within its coverage area through fog nodes. After the data collected from smart meters are authenticated and aggregated, the data is transmitted to the cloud storage layer for storage. The data stored in the cloud storage layer is encrypted by using multi - key homomorphic encryption methods and transmitted to the server. After decryption, the server can obtain the details of the private data of each subarea and realize the encryption and sharing of the privacy data of the smart grid. The test results show that the algorithm has good encryption performance, with encryption times all within 700 ms. The data decryption probability is above 99.22 %, and the communication overhead required for shared transmission is above 2000bit in all cases. The intrusion rate is within 0.3 %, ensuring the safe sharing of private data.</div></div>","PeriodicalId":8417,"journal":{"name":"Array","volume":"29 ","pages":"Article 100655"},"PeriodicalIF":4.5,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145973366","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-03-01 | Epub Date: 2025-12-13 | DOI: 10.1016/j.array.2025.100626
Enhancing security in IoT networks: A multifaceted approach to vulnerability analysis and protection
Zohre Arabi, Ramin Rajabi Oskouei, Mehdi Hosseinzadeh
The rapid proliferation of the Internet of Things (IoT) has transformed modern technology by bridging the physical and digital realms. Yet, the explosive growth of connected devices—expected to surpass 50 billion by 2025—has introduced substantial security concerns. This study investigates critical vulnerabilities within IoT systems, particularly at the device and network levels, focusing on risks such as data breaches, unauthorized access, and distributed denial-of-service (DDoS) attacks. It explores the significance of implementing standardized security practices for interoperable internet-connected hardware within various environments. Despite the simplicity and feasibility of adopting such standards, many manufacturers neglect essential security protocols, leaving devices exposed. Much like pre-flight checklists in aviation, foundational security principles should be embedded into hardware design; however, innovation in this area has been largely overlooked.
We present an innovative two-phase methodology aimed at strengthening IoT security. Manufacturers often prioritize rapid deployment over protection, resulting in devices that are ill-equipped to handle sophisticated cyber threats, and conventional security approaches, based on static and generic rules, are ill-suited to the diverse, resource-constrained, and protocol-heavy IoT landscape. The second phase of our methodology detects device vulnerabilities using advanced tools, such as Nmap for network probing and Binwalk for firmware analysis. Key protective measures, including secure boot processes, firmware hashing, and secure integrated circuits (ICs), are employed to safeguard sensitive data and ensure firmware integrity. Experimental results validate the approach's effectiveness in identifying and mitigating vulnerabilities. Visual data, including port distribution charts and CVSS-based risk assessments, highlight the necessity of prioritizing high-impact threats. Although there are limitations, such as difficulties in updating legacy devices and analyzing large networks, the proposed framework significantly reduces cybersecurity risks, builds trust in IoT systems, and establishes a solid foundation for future security developments.
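A minimal sketch of the kind of network probing performed in the vulnerability-detection phase, written with Python sockets as a simple stand-in for Nmap; the target host and port list are placeholders.

```python
# Minimal TCP port-probe sketch (a stand-in for Nmap-style network probing).
import socket

def probe(host: str, ports: list, timeout: float = 0.5) -> list:
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the TCP connect succeeded
                open_ports.append(port)
    return open_ports

print(probe("127.0.0.1", [22, 80, 443, 8080]))
```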
{"title":"Enhancing security in IoT networks: A multifaceted approach to vulnerability analysis and protection","authors":"Zohre Arabi , Ramin Rajabi Oskouei , Mehdi Hosseinzadeh","doi":"10.1016/j.array.2025.100626","DOIUrl":"10.1016/j.array.2025.100626","url":null,"abstract":"<div><div>The rapid proliferation of the Internet of Things (IoT) has transformed modern technology by bridging the physical and digital realms. Yet, the explosive growth of connected devices—expected to surpass 50 billion by 2025—has introduced substantial security concerns. This study investigates critical vulnerabilities within IoT systems, particularly at the device and network levels, focusing on risks such as data breaches, unauthorized access, and distributed denial-of-service (DDoS) attacks. It explores the significance of implementing standardized security practices for interoperable internet-connected hardware within various environments. Despite the simplicity and feasibility of adopting such standards, many manufacturers neglect essential security protocols, leaving devices exposed. Much like pre-flight checklists in aviation, foundational security principles should be embedded into hardware design; however, innovation in this area has been largely overlooked.</div><div>We present an innovative two-phase methodology aimed at strengthening IoT security. Manufacturers often prioritize rapid deployment over protection, resulting in devices that are ill-equipped to handle sophisticated cyber threats. Conventional security approaches, based on static and generic rules, are ill-suited to the diverse, resource-constrained, and protocol-heavy IoT landscape. Our second phase involves detecting device vulnerabilities using advanced tools, such as Nmap for network probing and Binwalk for firmware analysis. Key protective measures—including secure boot processes, firmware hashing, and secure integrated circuits (ICs)—are employed to safeguard sensitive data and ensure firmware integrity. Experimental results validate the approach's effectiveness in identifying and mitigating vulnerabilities. Visual data, including port distribution charts and CVSS-based risk assessments, highlight the necessity of prioritizing high-impact threats. Although there are limitations, such as difficulties in updating legacy devices and analyzing large networks, the proposed framework significantly reduces cybersecurity risks, builds trust in IoT systems, and establishes a solid foundation for future security developments.</div></div>","PeriodicalId":8417,"journal":{"name":"Array","volume":"29 ","pages":"Article 100626"},"PeriodicalIF":4.5,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145921253","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-03-01 | Epub Date: 2025-12-27 | DOI: 10.1016/j.array.2025.100654
Gaze-adaptive neural pre-correction for mitigating spatially varying optical aberrations in near-eye displays
Yi Jiang, Ye Bi, Yinng Li, Pengfei Li, Shengnan Qin, Zichao Shu, Chengrui Le
Near-eye display (NED) technology constitutes a fundamental component of head-mounted display (HMD) systems. The compact form factor required by HMDs imposes stringent constraints on optical design, often resulting in pronounced wavefront aberrations that significantly degrade visual fidelity. In addition, natural eye movements dynamically induce varying blur that further compromises image quality. To mitigate these challenges, a gaze-contingent neural network framework has been developed to compensate for aberrations within the foveal region. The network is trained in an end-to-end manner to minimize the discrepancy between the optically degraded system output and the corresponding ground truth image. A forward imaging model is employed, in which the network output is convolved with a spatially varying point spread function (PSF) to accurately simulate the degradation introduced by the optical system. To accommodate dynamic changes in gaze direction, a foveated attention-guided module is incorporated to adaptively modulate the pre-correction process, enabling localized compensation centered on the fovea. Additionally, an end-to-end trainable architecture has been designed to integrate gaze-informed blur priors. Both simulation and experimental validations confirm that the proposed method substantially reduces gaze-dependent aberrations and enhances retinal image clarity within the foveal region, while maintaining high computational efficiency. The presented framework offers a practical and scalable solution for improving visual performance in aberration-sensitive NED systems.
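The forward imaging model (network output convolved with a spatially varying PSF) can be approximated by blending per-region convolutions with a gaze-centered mask, as in the hedged sketch below; the PSF kernels, Gaussian mask, and random image are illustrative assumptions, not the paper's optics.

```python
# Hedged sketch of a spatially varying PSF: blend a foveal and a peripheral
# blur with a gaze-centered mask. All kernels and sizes are placeholders.
import numpy as np
from scipy.signal import convolve2d

img = np.random.rand(64, 64)
psf_fovea = np.ones((3, 3)) / 9.0        # mild blur near the gaze point
psf_periphery = np.ones((7, 7)) / 49.0   # stronger blur off-axis

yy, xx = np.mgrid[0:64, 0:64]
gaze = (32, 32)
mask = np.exp(-((yy - gaze[0]) ** 2 + (xx - gaze[1]) ** 2) / (2 * 10.0 ** 2))

blurred = (mask * convolve2d(img, psf_fovea, mode="same")
           + (1 - mask) * convolve2d(img, psf_periphery, mode="same"))
print(blurred.shape)
```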
{"title":"Gaze-adaptive neural pre-correction for mitigating spatially varying optical aberrations in near-eye displays","authors":"Yi Jiang, Ye Bi, Yinng Li, Pengfei Li, Shengnan Qin, Zichao Shu, Chengrui Le","doi":"10.1016/j.array.2025.100654","DOIUrl":"10.1016/j.array.2025.100654","url":null,"abstract":"<div><div>Near-eye display (NED) technology constitutes a fundamental component of head-mounted display (HMD) systems. The compact form factor required by HMDs imposes stringent constraints on optical design, often resulting in pronounced wavefront aberrations that significantly degrade visual fidelity. In addition, natural eye movements dynamically induce varying blur that further compromises image quality. To mitigate these challenges, a gaze-contingent neural network framework has been developed to compensate for aberrations within the foveal region. The network is trained in an end-to-end manner to minimize the discrepancy between the optically degraded system output and the corresponding ground truth image. A forward imaging model is employed, in which the network output is convolved with a spatially varying point spread function (PSF) to accurately simulate the degradation introduced by the optical system. To accommodate dynamic changes in gaze direction, a foveated attention-guided module is incorporated to adaptively modulate the pre-correction process, enabling localized compensation centered on the fovea. Additionally, an end-to-end trainable architecture has been designed to integrate gaze-informed blur priors. Both simulation and experimental validations confirm that the proposed method substantially reduces gaze-dependent aberrations and enhances retinal image clarity within the foveal region, while maintaining high computational efficiency. The presented framework offers a practical and scalable solution for improving visual performance in aberration-sensitive NED systems.</div></div>","PeriodicalId":8417,"journal":{"name":"Array","volume":"29 ","pages":"Article 100654"},"PeriodicalIF":4.5,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145921260","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-03-01 | Epub Date: 2026-01-19 | DOI: 10.1016/j.array.2025.100665
A data-driven comparative analysis of Agile and Waterfall methodologies: Predicting cost and schedule variances using statistical and machine learning approaches
Utkarsh Mishra, Narayanan Ganesh
Project management methodologies such as Agile and Waterfall play an impactful role in a project's key performance indicators, such as cost variance and schedule variance. In this work, we examine these variances with data-driven techniques and develop machine learning models for cost estimation. To demonstrate the efficacy of our approach, we processed a dataset of Agile and Waterfall project attributes collected through an online survey of about 100 developers from various companies. We applied categorical encoding, statistical analysis, hypothesis testing, and predictive modeling to predict and compare which projects can be successful. Initial Exploratory Data Analysis (EDA) shows that the distribution of cost and schedule variance is not uniform across the Waterfall and Agile approaches: the mean cost and schedule variance is 2.14 (SD 1.32) for Agile projects, while for Waterfall projects it is higher at 3.87 (SD 1.89). A t-test comparing the methodologies yields a test statistic of −4.72 and a p-value of 0.00002, indicating a statistically significant difference in cost and schedule variances between Agile and Waterfall projects. Additionally, using project attributes to train a linear regression model for predicting cost variance and schedule variance for both approaches achieves an average MAE of 0.98 and an average MSE of 1.54, indicating moderate predictive accuracy. These results emphasize that, on average, Agile projects have lower cost and schedule variance than Waterfall projects, and they reinforce the impact of project methodology on effort deviations. The study highlights the role of predictive analytics in project management and advocates the adoption of machine learning for more accurate cost estimation. The next step is to investigate more advanced modeling techniques and additional project parameters to improve predictive performance and project planning.
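The two quantitative steps reported above, a two-sample t-test and a regression error check, look roughly like the following sketch; the synthetic samples drawn from the reported means and SDs and the placeholder project attributes are assumptions, since the survey data is not reproduced here.

```python
# Sketch of the statistical workflow: t-test on variances, then a
# linear-regression MAE/MSE check. All data here is synthetic.
import numpy as np
from scipy.stats import ttest_ind
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error, mean_squared_error

rng = np.random.default_rng(0)
agile = rng.normal(2.14, 1.32, 50)      # cost/schedule variance, Agile (mean/SD from text)
waterfall = rng.normal(3.87, 1.89, 50)  # cost/schedule variance, Waterfall

t_stat, p_value = ttest_ind(agile, waterfall)
print(f"t={t_stat:.2f}, p={p_value:.5f}")

X = rng.random((100, 4))                # placeholder project attributes
y = X @ np.array([1.0, 0.5, -0.3, 2.0]) + rng.normal(0, 0.5, 100)
model = LinearRegression().fit(X, y)
pred = model.predict(X)
print(f"MAE={mean_absolute_error(y, pred):.2f}, MSE={mean_squared_error(y, pred):.2f}")
```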
{"title":"A data-driven comparative analysis of Agile and Waterfall methodologies: Predicting cost and schedule variances using statistical and machine learning approaches","authors":"Utkarsh Mishra, Narayanan Ganesh","doi":"10.1016/j.array.2025.100665","DOIUrl":"10.1016/j.array.2025.100665","url":null,"abstract":"<div><div>Project management methodologies like Agile, Waterfall, etc., play an impactful role in key performance indicators of the project, such as cost variance, schedule variance, etc. In this work, we deep dive into these variances with data-driven techniques and discover machine learning models for cost estimation. To demonstrate the efficacy of our approach, we processed a dataset with Agile and Waterfall project attributes which was collected by means of survey conducted online about 100 developers from various companies. We had through categorical encoding, statistical analysis, hypothesis testing, and predictive modeling to predict and compare the projects which can be successful. In the initial stages of Exploratory Data Analysis (EDA), it can be observed that the distribution of cost and schedule variance is not uniform across the waterfall and agile approaches, whereby the mean cost and schedule variances is 2.14 and SD is 1.32 for Agile projects and the mean cost and schedule variances for waterfall projects is higher at 3.87 with SD of 1.89. A T-test conducted to compare the methodologies results in a test statistic of −4.72 and a p-value of 0.00002, indicating a statistically significant difference in cost and schedule variances between Agile and Waterfall projects. Additionally, the use of project attributes to train a linear regression model for predicting cost variance and schedule variance for both waterfall and agile approaches achieves an average MAE of 0.98 and an average MSE of 1.54, indicating moderate predictive accuracy in the models. They emphasize that, on average, Agile projects have a lower cost and schedule variance than Waterfall projects and strengthen the impact of the project methodology on effort deviations. The study highlights the role of predictive analytics in project management and advocates the adoption of machine learning for more accurate cost estimation. The next step is to investigate more advanced modeling techniques and the use of additional project parameters to improve predictive performance and project planning.</div></div>","PeriodicalId":8417,"journal":{"name":"Array","volume":"29 ","pages":"Article 100665"},"PeriodicalIF":4.5,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146034469","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}