
Array: Latest Publications

Evaluation of tissue-engineered blood vessel with ultrasound computed tomography
IF 4.5 Q2 COMPUTER SCIENCE, THEORY & METHODS Pub Date: 2026-03-01 Epub Date: 2026-01-21 DOI: 10.1016/j.array.2026.100682
Yichuan Tang , Enxhi Jaupi , Srikar Nekkanti , William G. DeMaria , Marsha W. Rolle , Haichong K. Zhang
Tissue-engineered blood vessels (TEBVs) represent a critical advancement in vascular medicine, offering transformative potential in drug testing, regenerative therapies, and disease modeling. Current evaluation methods, however, rely heavily on destructive techniques such as histology, which preclude further use of samples and limit real-time monitoring. Ultrasound Computed Tomography (USCT) emerges as a promising alternative, enabling non-destructive, high-resolution imaging within bioreactors. While prior work has demonstrated the feasibility of USCT for TEBV monitoring using needle and tubing phantoms, this study advances the field by imaging real TEBV samples and employing histological analysis as the ground truth for validation. This paper utilizes a prototype USCT system that achieves comprehensive 360-degree reconstructions of TEBV cross-sections. Validated through both needle-phantom studies and histology comparisons, the system demonstrates high accuracy with an average measurement error of 0.03 mm and adaptability within bioreactor environments. Our results underscore USCT’s capacity for non-destructive TEBV evaluation, paving the way for enhanced monitoring during cultivation. Future developments aim to refine image reconstruction and expand clinical applications.
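The headline figure above is an average measurement error against histology ground truth. A minimal sketch of how such a figure could be computed from paired measurements follows; the readings and the wall-thickness framing are hypothetical placeholders, not data from the paper.

```python
# Toy sketch (not the authors' pipeline): averaging the absolute error
# between USCT-derived measurements and histology ground truth, in mm.
# All numbers below are illustrative, not data from the paper.

def mean_abs_error(usct_mm, histology_mm):
    """Mean absolute difference between paired measurements, in mm."""
    if len(usct_mm) != len(histology_mm):
        raise ValueError("measurement lists must be paired")
    return sum(abs(u - h) for u, h in zip(usct_mm, histology_mm)) / len(usct_mm)

usct = [0.52, 0.48, 0.61, 0.55]   # hypothetical USCT readings (mm)
histo = [0.50, 0.50, 0.58, 0.54]  # hypothetical histology readings (mm)
print(round(mean_abs_error(usct, histo), 3))  # 0.02
```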
Citations: 0
Deep learning models for straddle carriers: Predictive maintenance
IF 4.5 Q2 COMPUTER SCIENCE, THEORY & METHODS Pub Date: 2026-03-01 Epub Date: 2026-02-06 DOI: 10.1016/j.array.2026.100706
Pooja Mudbhatkal , Martti Juhola , Mikko Asikainen , SantoshKumar Patel
Predictive maintenance is key to decreasing downtime, guaranteeing smooth operations, and raising productivity in machine maintenance; it also reduces the need for emergency repairs. The goal of this study was to forecast spreader problems in the straddle carriers operated by Cargotec (Kalmar). Straddle carriers are machines used to pick and place shipping containers; the pick-and-ground action is carried out by the spreader, a component of the straddle carrier. The investigation used straddle carrier logs from the vehicles' on-board automation systems. With different training times, all four of the advanced deep learning models were able to minimize false positives and false negatives and accurately forecast failures.
This study gives a thorough overview of different deep learning models in the context of predictive maintenance, together with an assessment of the advantages and disadvantages of the models employed.
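The evaluation above turns on false positives and false negatives for a binary failure forecast. A minimal illustration of counting them from paired label sequences (the labels below are invented, not the paper's data or models):

```python
# Illustrative sketch (not the paper's models): counting false positives and
# false negatives for a binary failure forecast.
# Labels: 1 = failure, 0 = normal operation.

def fp_fn(y_true, y_pred):
    """Return (false positives, false negatives) for paired 0/1 labels."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return fp, fn

y_true = [0, 0, 1, 1, 0, 1]   # hypothetical ground truth from maintenance logs
y_pred = [0, 1, 1, 0, 0, 1]   # hypothetical model forecasts
print(fp_fn(y_true, y_pred))  # (1, 1)
```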
Citations: 0
Efficient tool path computing for Industry 5.0: Application to turning lathe machining
IF 4.5 Q2 COMPUTER SCIENCE, THEORY & METHODS Pub Date: 2026-03-01 Epub Date: 2026-01-31 DOI: 10.1016/j.array.2026.100701
Héctor Migallón , Antonio Jimeno-Morenilla , Eduard Duta-Costache , José-Luis Sánchez-Romero
This paper presents an efficient approach to toolpath generation tailored to the needs of Industry 5.0, with a focus on turning lathe machining. The study addresses the challenge of rapidly and accurately generating helical toolpaths in personalized manufacturing, where traditional sequential methods often become computational bottlenecks. To overcome this limitation, we propose efficient parallel implementations of the Virtual Digitizing (VD) algorithm, specifically designed to accelerate the computation of machining trajectories on both multicore and manycore architectures. The multicore implementation achieves notable speedups, especially when execution is properly tuned. The manycore strategy explores both asynchronous (coarse-grained) and synchronous (fine-grained) execution models. In the asynchronous method, independent trajectory computations are assigned to separate CUDA threads, whereas the synchronous method further parallelizes the internal processing of each trajectory point, providing finer computational granularity. Experimental evaluations conducted on authentic industrial shoe last models reveal notable gains in computational efficiency. The manycore implementation achieves up to 70x acceleration on low-end GPUs, over 80x on high-range devices and over 270x on state-of-the-art GPU devices when compared to their respective CPU-based computations. Although the synchronous method introduces additional complexity, it delivers the best performance on powerful GPU platforms, whereas the asynchronous method is better suited for resource-constrained systems. Therefore, the study concludes that the optimal parallelization strategy depends on the available hardware.
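The asynchronous (coarse-grained) model described above assigns each independent trajectory to its own worker. A Python stand-in for that structure is sketched below; the trajectory function is a placeholder helix, not the Virtual Digitizing algorithm, and a thread pool stands in for CUDA threads.

```python
# Minimal Python stand-in for the asynchronous (coarse-grained) scheme:
# each trajectory is computed independently by one worker, much as the paper
# assigns one trajectory per CUDA thread. toy_trajectory is a placeholder
# helical path, NOT the Virtual Digitizing algorithm.
from concurrent.futures import ThreadPoolExecutor
import math

def toy_trajectory(k, n_points=100):
    """Placeholder helix: (x, y, z) samples for trajectory index k."""
    return [(math.cos(t + k), math.sin(t + k), 0.01 * t)
            for t in (2 * math.pi * i / n_points for i in range(n_points))]

# Coarse-grained parallelism: trajectories are independent, so they can be
# dispatched to workers with no synchronization between them.
with ThreadPoolExecutor(max_workers=4) as pool:
    paths = list(pool.map(toy_trajectory, range(8)))  # 8 independent trajectories

print(len(paths), len(paths[0]))  # 8 100
```

The synchronous (fine-grained) variant would instead parallelize the inner loop over points of a single trajectory, which only pays off when per-point work is large enough, matching the paper's observation that the best strategy depends on the hardware.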
Citations: 0
Model-based evaluation of synthetic financial time series data: A comparative study with multi-metric validation
IF 4.5 Q2 COMPUTER SCIENCE, THEORY & METHODS Pub Date: 2026-03-01 Epub Date: 2026-01-30 DOI: 10.1016/j.array.2026.100684
Patrick Naivasha, George Musumba, Patrick Gikunda, John Wandeto
This research presents a behaviorally informed framework for synthesizing financial time-series data, specifically designed to emulate the complex dynamics of foreign exchange markets. Deviating from conventional generative adversarial networks (GANs) or purely statistical distribution-matching, the proposed methodology adopts a game-theoretic architecture. This framework integrates trader-interaction dynamics, stochastic strategies, and information asymmetry, treating the market as a strategic participant to reproduce authentic volatility patterns and structural dependencies. To ensure numerical stability across extensive simulations, the study introduces a uniform upward scaling procedure and controlled initialization, preventing pathological price behaviors without compromising the underlying statistical properties. The framework's analytical fidelity was rigorously evaluated against a suite of econometric and machine learning models, including ARIMA, XGBoost, LSTM, N-BEATS, and DLinear. Experimental results involving 12,960 hourly observations demonstrate that the synthetic data maintains strong alignment with empirical benchmarks. DLinear emerged as the superior model, exhibiting exceptional stability with an R² frequently exceeding 0.98 and a Mean Absolute Scaled Error (MASE) near unity. While XGBoost and N-BEATS yielded competitive results, ARIMA and LSTM showed anticipated performance degradation due to temporal noise. Comprehensive residual diagnostics, including Ljung-Box tests and stationarity assessments, confirm that the generated series are behaviorally consistent and analytically reliable. This framework thus provides a robust foundation for comparative modeling and experimental financial research.
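The two headline metrics, R² and MASE, are standard and can be sketched directly: MASE scales the forecast's mean absolute error by the in-sample MAE of a naive one-step-ahead forecast, so values near 1 mean parity with the naive baseline. The series below are illustrative, not the paper's data.

```python
# Hedged sketch of the two reported metrics. Not the authors' code;
# the input series are invented placeholders.

def r_squared(y, yhat):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_y = sum(y) / len(y)
    ss_res = sum((a - b) ** 2 for a, b in zip(y, yhat))
    ss_tot = sum((a - mean_y) ** 2 for a in y)
    return 1 - ss_res / ss_tot

def mase(y, yhat, y_train):
    """Forecast MAE scaled by in-sample MAE of the naive one-step forecast."""
    mae = sum(abs(a - b) for a, b in zip(y, yhat)) / len(y)
    naive = sum(abs(y_train[i] - y_train[i - 1])
                for i in range(1, len(y_train))) / (len(y_train) - 1)
    return mae / naive

print(mase([4.0, 5.0], [4.0, 6.0], y_train=[1.0, 2.0, 3.0]))  # 0.5, beats naive
```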
Citations: 0
Label-efficient sleep staging from multi-channel EEG with self-supervised contrastive learning and iterative self-distillation
IF 4.5 Q2 COMPUTER SCIENCE, THEORY & METHODS Pub Date: 2026-03-01 Epub Date: 2026-02-17 DOI: 10.1016/j.array.2026.100718
Jie Ouyang , Peng Xiao , Jingxue Chen , Fried-Michael Dahlweid , Yiming Chen , Yuanwang Wei , Zou Lai
Manual sleep stage classification from polysomnography (PSG) is labor-intensive and subject to expert variability, motivating automated and deployment-oriented solutions for clinical use. We present a multi-channel self-supervised learning (SSL) contrastive framework combined with iterative self-distillation for accurate and label-efficient sleep staging. The approach employs a dual-branch convolutional network that processes electroencephalogram (EEG) channels independently and integrates complementary information via a cross-attention fusion module. During pre-training, a contrastive objective leverages temporal adjacency to form positive pairs and maintains hard negatives dynamically to learn robust representations from unlabeled data. Subsequent fine-tuning with minimal labels is enhanced by iterative self-distillation through pseudo-label refinement. On the Sleep-EDF Expanded (SleepEDF-v2) dataset, the method achieves strong performance with only 1% labeled data (accuracy 76.31%, macro-F1 66.53%), competitive against existing SSL baselines. The resulting compact model and single-site training setup align with practical constraints in hospitals and wearable scenarios, reducing annotation burden and supporting secure, scalable clinical deployment.
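The core pre-training idea, forming positive pairs from temporally adjacent epochs, can be illustrated with a toy InfoNCE-style objective: epoch i's embedding is pulled toward epoch i+1's and pushed from all others. This is an assumption-laden sketch of the general technique, not the authors' loss or code.

```python
# Toy contrastive loss with temporal-adjacency positives (InfoNCE style).
# Illustrative only; embeddings are random, not learned EEG features.
import numpy as np

def info_nce_adjacent(z, temperature=0.1):
    """z: (n, d) L2-normalized epoch embeddings in temporal order."""
    sim = z @ z.T / temperature              # pairwise scaled similarities
    np.fill_diagonal(sim, -np.inf)           # exclude self-similarity
    logits = sim[:-1]                        # anchors: epochs 0..n-2
    targets = np.arange(1, len(z))           # positive = next epoch in time
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(len(targets)), targets].mean()

rng = np.random.default_rng(0)
z = rng.normal(size=(6, 8))
z /= np.linalg.norm(z, axis=1, keepdims=True)
print(float(info_nce_adjacent(z)) > 0)  # True: loss is positive
```

Minimizing such a loss encourages representations that vary smoothly across neighboring epochs, which is the intuition behind using temporal adjacency as a label-free supervisory signal.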
Citations: 0
Ladderpath: An efficient algorithm for revealing nested hierarchy in sequences
IF 4.5 Q2 COMPUTER SCIENCE, THEORY & METHODS Pub Date: 2026-03-01 Epub Date: 2026-01-09 DOI: 10.1016/j.array.2025.100663
Jingwen Zhang , Xiao Xie , Xiaodong Deng , Jing Wang , Xiaojun Hu , Yiping Wang , Hu Zhu , Fengyao Zhai , Yu Liu
Ladderpath, rooted in Algorithmic Information Theory (AIT), uncovers nested and hierarchical structures in symbolic sequences through minimal compositional reconstruction. It approximates Kolmogorov complexity by identifying reusable subsequences that enable efficient reconstruction of complex sequences. The proposed algorithm improves upon earlier implementations by introducing key optimizations in substring enumeration and reuse filtering, allowing it to scale to sequence systems with tens or even hundreds of millions of characters. Ladderpath produces a standardized JSON format that encodes compositional dependencies and hierarchies, and supports a variety of downstream tasks, including compression, shared motif extraction, cross-sequence similarity analysis, and structural visualization. Its domain-agnostic design enables broad applicability across areas such as genomics, natural language, symbolic computation, and program analysis. Beyond providing a practical approximation of complexity, Ladderpath also offers structural insight into the modular grammar of sequences, pointing to a deeper connection between algorithmic complexity and compositional hierarchies observed in real-world data.
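The underlying idea, factoring out reusable subsequences so a string can be rebuilt from shorter building blocks, can be shown with a naive greedy sketch in the spirit of byte-pair encoding. This is a toy illustration of reuse-based hierarchical reconstruction, not the paper's optimized Ladderpath algorithm.

```python
# Toy greedy illustration of reuse-based factoring (BPE-like), sketching the
# idea behind Ladderpath. NOT the paper's algorithm or its JSON output format.

def most_reused_pair(seq):
    """Return the most frequent adjacent symbol pair, or None if none repeats."""
    counts = {}
    for a, b in zip(seq, seq[1:]):
        counts[(a, b)] = counts.get((a, b), 0) + 1
    if not counts:
        return None
    pair, n = max(counts.items(), key=lambda kv: kv[1])
    return pair if n >= 2 else None

def factor(seq):
    """Repeatedly replace the commonest pair with a fresh symbol."""
    seq, rules, next_id = list(seq), [], 0
    while (pair := most_reused_pair(seq)) is not None:
        sym = f"<{next_id}>"
        rules.append((sym, pair))        # record how to rebuild sym
        out, i = [], 0
        while i < len(seq):
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                out.append(sym); i += 2
            else:
                out.append(seq[i]); i += 1
        seq, next_id = out, next_id + 1
    return seq, rules

print(factor("abab"))
```

Each recorded rule is a reusable building block; nesting arises when later rules reference earlier symbols, giving the hierarchy of "ladder" steps the abstract describes.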
Citations: 0
Smart grid privacy data encryption and sharing algorithm based on multi-key homomorphic encryption
IF 4.5 Q2 COMPUTER SCIENCE, THEORY & METHODS Pub Date: 2026-03-01 Epub Date: 2025-12-20 DOI: 10.1016/j.array.2025.100655
Xuehai Chen , Yantong Lin , Zhimin Liang , Zhenmin He
To realize effective planning and regulation of smart grids, the security of private smart grid data sharing must be ensured. A smart grid privacy data encryption and sharing algorithm based on multi-key homomorphic encryption is proposed. The algorithm starts from the smart meters' private data collected at the device layer and encrypts it using the multi-key homomorphic encryption method of the key generation center. The computing layer interacts with smart meters within its coverage area through fog nodes. After the data collected from smart meters are authenticated and aggregated, they are transmitted to the cloud storage layer for storage. Data stored in the cloud storage layer are encrypted with multi-key homomorphic encryption and transmitted to the server. After decryption, the server can obtain the details of the private data of each subarea, realizing the encryption and sharing of smart grid privacy data. Test results show that the algorithm has good encryption performance, with encryption times all within 700 ms. The data decryption probability is above 99.22 %, and the communication overhead required for shared transmission is above 2000 bits in all cases. The intrusion rate is within 0.3 %, ensuring the safe sharing of private data.
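Why homomorphic encryption suits meter aggregation can be shown with a toy single-key Paillier example: ciphertexts can be multiplied so that decryption yields the sum of the plaintext readings, letting an aggregator total consumption without seeing individual values. The paper's scheme is multi-key; this single-key demo with tiny, insecure primes only illustrates the additive-homomorphic property.

```python
# Toy single-key Paillier: Dec(Enc(m1) * Enc(m2) mod n^2) = m1 + m2.
# Demo primes are far too small for real use; illustrative only, and NOT
# the paper's multi-key scheme.
import math, random

p, q = 293, 433                      # insecure demo primes
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = math.lcm(p - 1, q - 1)
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)   # L(g^lam mod n^2)^-1 mod n

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:       # r must be coprime to n
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return ((pow(c, lam, n2) - 1) // n * mu) % n

readings = [120, 75, 310]            # hypothetical meter readings
aggregate = 1
for m in readings:
    aggregate = (aggregate * encrypt(m)) % n2   # homomorphic addition
print(decrypt(aggregate))  # 505
```

A multi-key scheme extends this so ciphertexts under different meters' keys can still be combined, with decryption requiring cooperation of the key holders; that extension is the paper's contribution and is not reproduced here.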
Citations: 0
Enhancing security in IoT networks: A multifaceted approach to vulnerability analysis and protection
IF 4.5 Q2 COMPUTER SCIENCE, THEORY & METHODS Pub Date: 2026-03-01 Epub Date: 2025-12-13 DOI: 10.1016/j.array.2025.100626
Zohre Arabi , Ramin Rajabi Oskouei , Mehdi Hosseinzadeh
The rapid proliferation of the Internet of Things (IoT) has transformed modern technology by bridging the physical and digital realms. Yet, the explosive growth of connected devices—expected to surpass 50 billion by 2025—has introduced substantial security concerns. This study investigates critical vulnerabilities within IoT systems, particularly at the device and network levels, focusing on risks such as data breaches, unauthorized access, and distributed denial-of-service (DDoS) attacks. It explores the significance of implementing standardized security practices for interoperable internet-connected hardware within various environments. Despite the simplicity and feasibility of adopting such standards, many manufacturers neglect essential security protocols, leaving devices exposed. Much like pre-flight checklists in aviation, foundational security principles should be embedded into hardware design; however, innovation in this area has been largely overlooked.
We present an innovative two-phase methodology aimed at strengthening IoT security. Manufacturers often prioritize rapid deployment over protection, resulting in devices that are ill-equipped to handle sophisticated cyber threats. Conventional security approaches, based on static and generic rules, are ill-suited to the diverse, resource-constrained, and protocol-heavy IoT landscape. Our second phase involves detecting device vulnerabilities using advanced tools, such as Nmap for network probing and Binwalk for firmware analysis. Key protective measures—including secure boot processes, firmware hashing, and secure integrated circuits (ICs)—are employed to safeguard sensitive data and ensure firmware integrity. Experimental results validate the approach's effectiveness in identifying and mitigating vulnerabilities. Visual data, including port distribution charts and CVSS-based risk assessments, highlight the necessity of prioritizing high-impact threats. Although there are limitations, such as difficulties in updating legacy devices and analyzing large networks, the proposed framework significantly reduces cybersecurity risks, builds trust in IoT systems, and establishes a solid foundation for future security developments.
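One of the protective measures named above, firmware hashing, can be made concrete with a short sketch: a device (or its secure-boot stage) recomputes a digest of the firmware image and refuses to boot on mismatch. The file contents and digest below are illustrative stand-ins, not artifacts from the paper:

```python
# Firmware integrity check: compare a freshly computed SHA-256 digest of
# the firmware image against a known-good value provisioned at manufacture.
import hashlib
import hmac

def firmware_digest(image: bytes) -> str:
    return hashlib.sha256(image).hexdigest()

def verify_firmware(image: bytes, expected_hex: str) -> bool:
    # compare_digest avoids leaking digest bytes through comparison timing
    return hmac.compare_digest(firmware_digest(image), expected_hex)

image = b"\x7fELF...firmware-blob..."       # stand-in for a real image
good = firmware_digest(image)               # provisioned known-good digest
assert verify_firmware(image, good)                 # intact image accepted
assert not verify_firmware(image + b"\x00", good)   # tampered image rejected
```

In a real secure-boot chain the known-good digest itself would be signed or stored in tamper-resistant hardware, which is where the secure ICs mentioned in the abstract come in.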
{"title":"Enhancing security in IoT networks: A multifaceted approach to vulnerability analysis and protection","authors":"Zohre Arabi ,&nbsp;Ramin Rajabi Oskouei ,&nbsp;Mehdi Hosseinzadeh","doi":"10.1016/j.array.2025.100626","DOIUrl":"10.1016/j.array.2025.100626","url":null,"abstract":"<div><div>The rapid proliferation of the Internet of Things (IoT) has transformed modern technology by bridging the physical and digital realms. Yet, the explosive growth of connected devices—expected to surpass 50 billion by 2025—has introduced substantial security concerns. This study investigates critical vulnerabilities within IoT systems, particularly at the device and network levels, focusing on risks such as data breaches, unauthorized access, and distributed denial-of-service (DDoS) attacks. It explores the significance of implementing standardized security practices for interoperable internet-connected hardware within various environments. Despite the simplicity and feasibility of adopting such standards, many manufacturers neglect essential security protocols, leaving devices exposed. Much like pre-flight checklists in aviation, foundational security principles should be embedded into hardware design; however, innovation in this area has been largely overlooked.</div><div>We present an innovative two-phase methodology aimed at strengthening IoT security. Manufacturers often prioritize rapid deployment over protection, resulting in devices that are ill-equipped to handle sophisticated cyber threats. Conventional security approaches, based on static and generic rules, are ill-suited to the diverse, resource-constrained, and protocol-heavy IoT landscape. Our second phase involves detecting device vulnerabilities using advanced tools, such as Nmap for network probing and Binwalk for firmware analysis. 
Key protective measures—including secure boot processes, firmware hashing, and secure integrated circuits (ICs)—are employed to safeguard sensitive data and ensure firmware integrity. Experimental results validate the approach's effectiveness in identifying and mitigating vulnerabilities. Visual data, including port distribution charts and CVSS-based risk assessments, highlight the necessity of prioritizing high-impact threats. Although there are limitations, such as difficulties in updating legacy devices and analyzing large networks, the proposed framework significantly reduces cybersecurity risks, builds trust in IoT systems, and establishes a solid foundation for future security developments.</div></div>","PeriodicalId":8417,"journal":{"name":"Array","volume":"29 ","pages":"Article 100626"},"PeriodicalIF":4.5,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145921253","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Gaze-adaptive neural pre-correction for mitigating spatially varying optical aberrations in near-eye displays 用于减轻近眼显示中空间变化光学像差的注视自适应神经预校正
IF 4.5 Q2 COMPUTER SCIENCE, THEORY & METHODS Pub Date : 2026-03-01 Epub Date: 2025-12-27 DOI: 10.1016/j.array.2025.100654
Yi Jiang, Ye Bi, Yinng Li, Pengfei Li, Shengnan Qin, Zichao Shu, Chengrui Le
Near-eye display (NED) technology constitutes a fundamental component of head-mounted display (HMD) systems. The compact form factor required by HMDs imposes stringent constraints on optical design, often resulting in pronounced wavefront aberrations that significantly degrade visual fidelity. In addition, natural eye movements dynamically induce varying blur that further compromises image quality. To mitigate these challenges, a gaze-contingent neural network framework has been developed to compensate for aberrations within the foveal region. The network is trained in an end-to-end manner to minimize the discrepancy between the optically degraded system output and the corresponding ground truth image. A forward imaging model is employed, in which the network output is convolved with a spatially varying point spread function (PSF) to accurately simulate the degradation introduced by the optical system. To accommodate dynamic changes in gaze direction, a foveated attention-guided module is incorporated to adaptively modulate the pre-correction process, enabling localized compensation centered on the fovea. Additionally, an end-to-end trainable architecture has been designed to integrate gaze-informed blur priors. Both simulation and experimental validations confirm that the proposed method substantially reduces gaze-dependent aberrations and enhances retinal image clarity within the foveal region, while maintaining high computational efficiency. The presented framework offers a practical and scalable solution for improving visual performance in aberration-sensitive NED systems.
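The forward model described above — convolving the network output with a spatially varying PSF whose blur grows away from the fovea — can be sketched in miniature. This 1-D, pure-Python toy uses a box kernel whose half-width increases with eccentricity from the gaze point; the kernel shape and growth rate are illustrative assumptions, not the paper's measured PSFs:

```python
# Toy spatially varying blur: each output sample is a local average whose
# window widens with distance from the gaze index, mimicking optical
# aberrations that worsen away from the fovea.
def spatially_varying_blur(signal, gaze_idx, max_half_width=3):
    n = len(signal)
    out = []
    for i in range(n):
        # PSF half-width grows linearly with eccentricity from the gaze point
        hw = min(max_half_width,
                 abs(i - gaze_idx) * max_half_width // max(n - 1, 1))
        lo, hi = max(0, i - hw), min(n, i + hw + 1)
        window = signal[lo:hi]
        out.append(sum(window) / len(window))  # box PSF of width 2*hw + 1
    return out

img = [0, 0, 10, 0, 0, 0, 0, 10, 0, 0]
blurred = spatially_varying_blur(img, gaze_idx=2)
print(blurred[2])   # -> 10.0: at the fovea hw == 0, the sample is unchanged
print(blurred[7])   # peripheral impulse is smeared across its neighbors
```

Pre-correction then amounts to choosing the network output so that, after this degradation model, the result matches the target image — the end-to-end objective the abstract describes.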
{"title":"Gaze-adaptive neural pre-correction for mitigating spatially varying optical aberrations in near-eye displays","authors":"Yi Jiang,&nbsp;Ye Bi,&nbsp;Yinng Li,&nbsp;Pengfei Li,&nbsp;Shengnan Qin,&nbsp;Zichao Shu,&nbsp;Chengrui Le","doi":"10.1016/j.array.2025.100654","DOIUrl":"10.1016/j.array.2025.100654","url":null,"abstract":"<div><div>Near-eye display (NED) technology constitutes a fundamental component of head-mounted display (HMD) systems. The compact form factor required by HMDs imposes stringent constraints on optical design, often resulting in pronounced wavefront aberrations that significantly degrade visual fidelity. In addition, natural eye movements dynamically induce varying blur that further compromises image quality. To mitigate these challenges, a gaze-contingent neural network framework has been developed to compensate for aberrations within the foveal region. The network is trained in an end-to-end manner to minimize the discrepancy between the optically degraded system output and the corresponding ground truth image. A forward imaging model is employed, in which the network output is convolved with a spatially varying point spread function (PSF) to accurately simulate the degradation introduced by the optical system. To accommodate dynamic changes in gaze direction, a foveated attention-guided module is incorporated to adaptively modulate the pre-correction process, enabling localized compensation centered on the fovea. Additionally, an end-to-end trainable architecture has been designed to integrate gaze-informed blur priors. Both simulation and experimental validations confirm that the proposed method substantially reduces gaze-dependent aberrations and enhances retinal image clarity within the foveal region, while maintaining high computational efficiency. 
The presented framework offers a practical and scalable solution for improving visual performance in aberration-sensitive NED systems.</div></div>","PeriodicalId":8417,"journal":{"name":"Array","volume":"29 ","pages":"Article 100654"},"PeriodicalIF":4.5,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145921260","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A data-driven comparative analysis of Agile and Waterfall methodologies: Predicting cost and schedule variances using statistical and machine learning approaches 敏捷和瀑布方法的数据驱动比较分析:使用统计和机器学习方法预测成本和进度差异
IF 4.5 Q2 COMPUTER SCIENCE, THEORY & METHODS Pub Date : 2026-03-01 Epub Date: 2026-01-19 DOI: 10.1016/j.array.2025.100665
Utkarsh Mishra, Narayanan Ganesh
Project management methodologies such as Agile and Waterfall have a strong bearing on key project performance indicators such as cost variance and schedule variance. In this work, we examine these variances with data-driven techniques and develop machine learning models for cost estimation. To demonstrate the efficacy of our approach, we processed a dataset of Agile and Waterfall project attributes collected through an online survey of about 100 developers from various companies. We applied categorical encoding, statistical analysis, hypothesis testing, and predictive modeling to predict and compare project success. Initial Exploratory Data Analysis (EDA) shows that the distribution of cost and schedule variance is not uniform across the two approaches: Agile projects have a mean cost and schedule variance of 2.14 (SD 1.32), while Waterfall projects are higher at 3.87 (SD 1.89). A t-test comparing the methodologies yields a test statistic of −4.72 and a p-value of 0.00002, indicating a statistically significant difference in cost and schedule variances between Agile and Waterfall projects. Additionally, a linear regression model trained on project attributes to predict cost variance and schedule variance for both approaches achieves an average MAE of 0.98 and an average MSE of 1.54, indicating moderate predictive accuracy. These results show that, on average, Agile projects have lower cost and schedule variance than Waterfall projects, underscoring the impact of methodology choice on effort deviations. The study highlights the role of predictive analytics in project management and advocates the adoption of machine learning for more accurate cost estimation.
The next step is to investigate more advanced modeling techniques and the use of additional project parameters to improve predictive performance and project planning.
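The hypothesis test behind the reported t statistic can be sketched as a Welch's two-sample t computation. The samples below are made up to roughly mirror the reported group means and spreads — they are not the study's data, which yielded t = −4.72 and p = 0.00002:

```python
# Welch's two-sample t statistic (unequal variances) on hypothetical
# cost-variance samples for Agile vs. Waterfall projects.
import statistics as st

def welch_t(a, b):
    ma, mb = st.mean(a), st.mean(b)
    va, vb = st.variance(a), st.variance(b)   # sample variances
    return (ma - mb) / ((va / len(a) + vb / len(b)) ** 0.5)

agile     = [1.2, 2.5, 1.8, 3.0, 2.2, 1.9]   # hypothetical cost variances
waterfall = [3.1, 4.6, 3.8, 4.9, 3.5, 4.0]
t = welch_t(agile, waterfall)
print(round(t, 2))  # negative: Agile's mean variance sits below Waterfall's
```

A p-value then follows from the t distribution with Welch-Satterthwaite degrees of freedom (e.g. via `scipy.stats.ttest_ind(..., equal_var=False)` when SciPy is available).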
{"title":"A data-driven comparative analysis of Agile and Waterfall methodologies: Predicting cost and schedule variances using statistical and machine learning approaches","authors":"Utkarsh Mishra,&nbsp;Narayanan Ganesh","doi":"10.1016/j.array.2025.100665","DOIUrl":"10.1016/j.array.2025.100665","url":null,"abstract":"<div><div>Project management methodologies like Agile, Waterfall, etc., play an impactful role in key performance indicators of the project, such as cost variance, schedule variance, etc. In this work, we deep dive into these variances with data-driven techniques and discover machine learning models for cost estimation. To demonstrate the efficacy of our approach, we processed a dataset with Agile and Waterfall project attributes which was collected by means of survey conducted online about 100 developers from various companies. We had through categorical encoding, statistical analysis, hypothesis testing, and predictive modeling to predict and compare the projects which can be successful. In the initial stages of Exploratory Data Analysis (EDA), it can be observed that the distribution of cost and schedule variance is not uniform across the waterfall and agile approaches, whereby the mean cost and schedule variances is 2.14 and SD is 1.32 for Agile projects and the mean cost and schedule variances for waterfall projects is higher at 3.87 with SD of 1.89. A T-test conducted to compare the methodologies results in a test statistic of −4.72 and a p-value of 0.00002, indicating a statistically significant difference in cost and schedule variances between Agile and Waterfall projects. Additionally, the use of project attributes to train a linear regression model for predicting cost variance and schedule variance for both waterfall and agile approaches achieves an average MAE of 0.98 and an average MSE of 1.54, indicating moderate predictive accuracy in the models. 
They emphasize that, on average, Agile projects have a lower cost and schedule variance than Waterfall projects and strengthen the impact of the project methodology on effort deviations. The study highlights the role of predictive analytics in project management and advocates the adoption of machine learning for more accurate cost estimation. The next step is to investigate more advanced modeling techniques and the use of additional project parameters to improve predictive performance and project planning.</div></div>","PeriodicalId":8417,"journal":{"name":"Array","volume":"29 ","pages":"Article 100665"},"PeriodicalIF":4.5,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146034469","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0