Waypoint-guided trajectory planning for mobile robots using GPT-4.1 mini and ensemble learning-based action prediction
Pub Date: 2025-12-08 | DOI: 10.1016/j.array.2025.100636 | Array 29, Article 100636
Abderrahim Waga, Said Benhlima, Ali Bekri, Fatima Zahrae Saber, Jawad Abdouni, Toufik Mzili, Ahmed Regragui
Trajectory planning is critical to autonomous navigation systems, working in conjunction with perception, localization, and obstacle avoidance. Traditional path planning algorithms often struggle in large or complex environments due to extensive memory usage and long computation times. In this paper, we propose a hierarchical planning framework, a multi-level approach in which a high-level planner sets general goals for a low-level planner to execute, that combines the reasoning capabilities of a large language model (LLM) with the efficiency of a machine learning-based local planner. The LLM acts as the high-level planner, suggesting intermediate waypoints that guide the robot toward its goal. A machine learning-based trajectory planner then uses these waypoints to compute feasible and efficient paths at the local level. This approach significantly reduces the number of states explored during planning and accelerates decision-making. To validate our method, we tested it in 100 simulated environments of varying difficulty (easy and hard). The results show that our approach reduces the explored space by 73.2%, 96.9%, 91.6%, and 77.4%, and the trajectory length required to reach the goal by 5.9%, 5.7%, 2.69%, and 21.1%, compared respectively to A*, Dijkstra's algorithm, an LLM-assisted A*, and an improved A* algorithm.
{"title":"Waypoint-guided trajectory planning for mobile robots using GPT-4.1 mini and ensemble learning-based action prediction","authors":"Abderrahim Waga , Said Benhlima , Ali Bekri , Fatima Zahrae Saber , Jawad Abdouni , Toufik Mzili , Ahmed Regragui","doi":"10.1016/j.array.2025.100636","DOIUrl":"10.1016/j.array.2025.100636","url":null,"abstract":"<div><div>Trajectory planning is critical to autonomous navigation systems, working in conjunction with perception, localization, and obstacle avoidance. Traditional path planning algorithms often struggle in large or complex environments due to extensive memory usage and long computation times. In this paper, we propose a hierarchical planning, a multi-level approach where a high-level planner sets general goals for a low-level planner to execute, framework that combines the reasoning capabilities of a large language model (LLM) with the efficiency of a machine learning-based local planner. The LLM acts as a high-level planner by suggesting intermediate waypoints that guide the robot toward its goal. A machine learning-based trajectory planner then uses these waypoints to compute feasible and efficient paths at the local level. This approach significantly reduces the number of states explored during planning and accelerates decision-making. To validate our method, we tested it in 100 simulated environments of varying difficulty levels (easy and hard). The results show that our approach reduces the explored space by 73.2%, 96.9%, 91.6%, and 77.4%, and the length of trajectory required to reach the goal by 5.9%, 5.7%, 2.69%, and 21.1%, respectively, when compared to methods such as A*, Dijkstra, as well as other advanced methods such as an LLM-assisted A* and an improved A* algorithm.</div></div>","PeriodicalId":8417,"journal":{"name":"Array","volume":"29 ","pages":"Article 100636"},"PeriodicalIF":4.5,"publicationDate":"2025-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145735329","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A parallel particle swarm optimization for improving wireless sensor networks longevity-based dynamic clustering method
Pub Date: 2025-12-06 | DOI: 10.1016/j.array.2025.100633 | Array 29, Article 100633
Ahmed Abdelaziz, Alia Nabil Mahmoud, Vitor Santos
Determining the optimal configuration for wireless sensor networks (WSNs) can be challenging due to the multitude of possible setups. To address this issue, we developed the Parallel Particle Swarm Optimization-based Self-Organizing Network Clustering (PPSOPM) method. By taking into account variables such as remaining node energy, predicted energy usage, proximity to the base station, and the number of nearby nodes, PPSOPM dynamically improves wireless sensor node clusters. Balancing these factors is crucial for organizing nodes into clusters effectively and selecting a surrogate node as each cluster's head. Compared to alternative methods, PPSOPM improves network structure by 44.39% and extends network lifespan. However, node density may affect network longevity by increasing the distance between nodes, and when the base station is far from the sensor area, creating additional clusters can help conserve energy. On average, PPSOPM requires 0.57 s to complete, with a standard deviation of 0.04 s.
{"title":"A parallel particle swarm optimization for improving wireless sensor networks longevity-based dynamic clustering method","authors":"Ahmed Abdelaziz , Alia Nabil Mahmoud , Vitor Santos","doi":"10.1016/j.array.2025.100633","DOIUrl":"10.1016/j.array.2025.100633","url":null,"abstract":"<div><div>Determining the optimal configuration for wireless sensor networks (WSNs) can be challenging due to the multitude of possible setups. To address this issue, our team has developed the Parallel Particle Swarm Optimization-based Self-Organizing Network Clustering (PPSOPM) method. By taking into account variables like remaining node energy, predictable energy usage, proximity to the base station, and number of nearby nodes, PPSOPM dynamically enhances wireless sensor node clusters. Achieving a balance between these factors is crucial to effectively organize nodes into clusters and select a surrogate node as the cluster's head. In comparison to alternative methods, PPSOPM significantly improves network structure by 44.39 % and extends network lifespan. However, node density may impact network longevity by increasing the distance between nodes. Also, when the base station is far from the sensor area, creating additional clusters can help conserve energy. On average, PPSOPM requires 0.57 s to complete, with a standard deviation of 0.04.</div></div>","PeriodicalId":8417,"journal":{"name":"Array","volume":"29 ","pages":"Article 100633"},"PeriodicalIF":4.5,"publicationDate":"2025-12-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145735332","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
From ideation to execution: Unleashing the power of generative AI in modern digital marketing and customer engagement - A systematic literature review and case study
Pub Date: 2025-12-05 | DOI: 10.1016/j.array.2025.100630 | Array 29, Article 100630
Sayeed Salih, Omayma Husain, Refan Mohamed Almohamedh, Hayfaa tajelsier, Aisha Hassan Abdalla Hashim, Hashim Elshafie, Abdelwahed Motwakel
Generative Artificial Intelligence (GAI) is revolutionizing digital marketing through automated content creation, personalized customer experiences, and data-driven decision-making. This study conducts a systematic literature review and case study analysis to explore GAI applications, benefits, and challenges in modern digital marketing. Drawing on an extensive analysis of academic journals and industry publications, the research examines leading GAI software such as ChatGPT, DALL-E, MidJourney, Jasper.ai, and Synthesia, focusing on how they aid content creation, visual design, and video production. It also provides real-world case studies from multiple industries, including retail and fashion, food and beverages, and travel and tourism. The case findings illustrate how GAI augments marketing automation, facilitates customer engagement, and amplifies brand engagement, resulting in greater customer satisfaction, higher conversion rates, and better campaign performance. Despite these benefits, GAI adoption is hampered by several critical barriers, such as data privacy, ethical risks, worker resistance, quality control issues, and infrastructure constraints. This research pinpoints these challenges and offers practical solutions, providing actionable insights for businesses seeking to leverage GAI for competitive advantage in the evolving digital landscape and bridging the gap between theory and practice. The findings contribute to the growing discourse on AI-driven marketing strategies and lay the foundation for future research on GAI's long-term impact on consumer engagement and brand loyalty.
{"title":"From ideation to execution: Unleashing the power of generative AI in modern digital marketing and customer engagement- A systematic literature review and case study","authors":"Sayeed Salih , Omayma Husain , Refan Mohamed Almohamedh , Hayfaa tajelsier , Aisha Hassan Abdalla Hashim , Hashim Elshafie , Abdelwahed Motwakel","doi":"10.1016/j.array.2025.100630","DOIUrl":"10.1016/j.array.2025.100630","url":null,"abstract":"<div><div>Generative Artificial Intelligence (GAI) is revolutionizing digital marketing by auto-content creation, personalized customer experience, and data-driven decisions. This study conducts a systematic literature review and case study analysis to explore GAI applications, benefits, and challenges in modern digital marketing. Drawing on an extensive analysis of academic journals and industry publications, the current research examines leading GAI software such as ChatGPT, DALL-E, MidJourney, Jasper.ai, and Synthesia based on how they aid in content creation, visual design, and video production. The research also provides real-world case studies in multiple industries, such as retail and fashion, food and beverages, and travel and tourism. The case findings illustrated how GAI augments marketing automation, facilitates customer engagement, and amplifies brand engagement, resulting in greater customer satisfaction, higher conversion rates, and better campaign performance. Although it has several benefits, the adoption of GAI is hampered by several critical barriers, such as data privacy, ethical risks, worker resistance, quality control issues, and infrastructure constraints. This research pinpoints these essential challenges and offers practical solutions. It provides actionable insights for businesses seeking to leverage GAI for competitive advantage in the evolving digital landscape by bridging the gap between theory and practice. The findings contribute to the growing discourse on AI-driven marketing strategies and lay the foundation for future research on GAI's long-term impact on consumer engagement and brand loyalty.</div></div>","PeriodicalId":8417,"journal":{"name":"Array","volume":"29 ","pages":"Article 100630"},"PeriodicalIF":4.5,"publicationDate":"2025-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145921252","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
ZKNiS-PoW: A privacy-preserving proof of ownership scheme for secure cloud storage
Pub Date: 2025-12-03 | DOI: 10.1016/j.array.2025.100627 | Array 29, Article 100627
Tang Zhou, Le Wang, Minxian Liang
There is a large amount of redundant data among users of cloud storage services. Client-side deduplication reduces costs for service providers by avoiding repeated uploads and storage. However, this technique introduces new security risks: malicious users may exploit illegally obtained deduplication tags, such as file fingerprints, to fake ownership of other users' files. Proof of Ownership (PoW) schemes can require users to prove they hold the full file, but existing methods are inefficient, often needing multiple rounds of interaction or complex computation over the whole file, so verification time grows with file size. To solve this problem, we propose a non-interactive PoW scheme based on zk-STARK. The system selects a number of challenge blocks that meet cryptographic security requirements and uses arithmetic circuits to encode block selection, hash computation, and the correctness of accumulators. Users only need to generate a zero-knowledge proof over these blocks, allowing them to prove ownership of the full file without revealing its content. The verification time does not depend on file size and is near-constant in practice. In tests on files from 64 MB to 1 GB, our scheme is 1.2 to 46 times faster than existing methods. Security analysis shows that only a small number of blocks need to be verified: even if an attacker knows 90% of the file, the probability of forgery remains below 2⁻⁸⁰. This scheme provides an efficient and practical solution for deduplication in cloud storage with strong privacy protection.
{"title":"ZKNiS-PoW: A privacy-preserving proof of ownership scheme for secure cloud storage","authors":"Tang Zhou , Le Wang , Minxian Liang","doi":"10.1016/j.array.2025.100627","DOIUrl":"10.1016/j.array.2025.100627","url":null,"abstract":"<div><div>There is a large amount of redundant data among users of cloud storage services. Client-side deduplication helps reduce the cost for service providers by avoiding repeated uploads and storage. However, this technique brings new security risks. Malicious users may use illegally obtained deduplication tags, such as file fingerprints, to fake ownership of other users’ files. Proof of Ownership (PoW) can require users to prove they have the full file, but existing methods are inefficient. They often need multiple rounds of interaction or complex computation over the whole file. As a result, the verification time increases with file size. To solve this problem, we propose a non-interactive PoW scheme based on zk-STARK. The system selects a number of challenge blocks that meet cryptographic security. It uses arithmetic circuits to encode block selection, hash computation, and the correctness of accumulators. Users only need to generate a zero-knowledge proof on these blocks. This allows them to prove they own the full file without revealing its content. The verification time does not depend on file size and appears near-constant in practice. In tests on files from 64 MB to 1 GB, our scheme is 1.2 to 46 times faster than existing methods. Security analysis shows that only a small number of blocks need to be verified. Even if an attacker knows 90% of the file, the chance of forgery is still lower than <span><math><msup><mrow><mn>2</mn></mrow><mrow><mo>−</mo><mn>80</mn></mrow></msup></math></span>. This scheme provides an efficient and practical solution for deduplication in cloud storage with strong privacy protection.</div></div>","PeriodicalId":8417,"journal":{"name":"Array","volume":"29 ","pages":"Article 100627"},"PeriodicalIF":4.5,"publicationDate":"2025-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145788380","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
IoT node for monitoring and traceability of live plants in maritime transport
Pub Date: 2025-12-03 | DOI: 10.1016/j.array.2025.100621 | Array 29, Article 100621
Blanca Méndez, Paula Lamo
This work presents an Internet of Things (IoT) monitoring node designed to convert shipping containers into smart greenhouses, thereby optimizing the transportation of live plants over long distances. The node uses specialized sensors to measure ambient temperature, soil temperature, relative humidity, soil moisture, and luminosity. Communication relies on LoRa for local transmission and a low-Earth-orbit (LEO) satellite infrastructure for global connectivity. The collected data is stored and verified on the IOTA Tangle, ensuring traceability and immutability. Field tests in controlled environments demonstrated measurement accuracy, operational stability under varying environmental conditions, and optimized energy consumption. Additionally, the system is accessible through the Blynk platform, which provides real-time monitoring and customizable alert configurations. The paper also analyzes system costs, scalability, and specific conditions for installation on ships and containers. Limitations, such as dependence on satellite connectivity and structural barriers in maritime environments, are discussed. Finally, future lines of research are identified, focused on integrating artificial intelligence, optimizing energy use, and connecting with global logistics chains.
{"title":"IoT node for monitoring and traceability of live plants in maritime transport","authors":"Blanca Méndez, Paula Lamo","doi":"10.1016/j.array.2025.100621","DOIUrl":"10.1016/j.array.2025.100621","url":null,"abstract":"<div><div>The work presents an Internet of Things (IoT) monitoring node designed to convert shipping containers into smart greenhouses, thereby optimizing the transportation of live plants over long distances. This node utilizes specialized sensors to measure ambient temperature, soil temperature, relative humidity, soil moisture, and luminosity. Communication is via LoRa for local transmission and a low-Earth orbit (LEO) satellite infrastructure to ensure global connectivity. The collected data is stored and verified on the IOTA Tangle, ensuring traceability and immutability. Field tests in controlled environments demonstrated measurement accuracy, operational stability under varying environmental conditions, and optimized energy consumption. Additionally, the system is accessible through the Blynk platform, which provides real-time monitoring and customizable alert configurations. The paper also analyzes system costs, scalability, and specific conditions for installation on ships and containers. Limitations, such as dependence on satellite connectivity and structural barriers in maritime environments, are discussed. Finally, future lines of research focused on integrating artificial intelligence, energy optimization and connection with global logistics chains are identified.</div></div>","PeriodicalId":8417,"journal":{"name":"Array","volume":"29 ","pages":"Article 100621"},"PeriodicalIF":4.5,"publicationDate":"2025-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145921256","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Design of a GenAI UX layer with small language models for edge computing in smart agriculture
Pub Date: 2025-12-03 | DOI: 10.1016/j.array.2025.100632 | Array 29, Article 100632
Juan M. Núñez V., Carlos Alberto Peláez, Andrés Solano, Juan M. Corchado, Fernando De la Prieta
This study proposes a framework for designing a GenAI-UX layer in edge computing systems, integrating user experience (UX) principles and small language models (SLMs) to optimize human–machine interaction in agricultural environments. Applied to indoor coriander cultivation, the enhanced UX design reduced the harvest cycle from 45 to 32 days by optimizing irrigation and environmental conditions through a voice-controlled interface with audio generation. The structured UX approach enabled more efficient interaction with the system, facilitating data-driven decision-making processed at the edge. The study highlights the importance of UX design guidelines aligned with system requirements to ensure accessibility, intuitiveness, and efficiency in embedded environments. The results demonstrate that using SLMs in edge systems improves personalization and response speed, enhancing agricultural productivity. This work underscores the need for a reference framework for UX design in embedded systems, ensuring effective interactions and a greater impact on the efficiency of controlled crop environments.
{"title":"Design of a GenAI UX layer with small language models for edge computing in smart agriculture","authors":"Juan M. Núñez V. , Carlos Alberto Peláez , Andrés Solano , Juan M. Corchado , Fernando De la Prieta","doi":"10.1016/j.array.2025.100632","DOIUrl":"10.1016/j.array.2025.100632","url":null,"abstract":"<div><div>This study proposes a framework for designing a GenAI-UX layer in edge computing systems, integrating user experience (UX) principles and small language models (SLM) to optimize human–machine interaction in agricultural environments. Applied to indoor coriander cultivation, the enhanced UX design reduced the harvest cycle from 45 to 32 days by optimizing irrigation and environmental conditions through a voice-controlled interface with audio generation. The implementation of a structured UX approach enabled more efficient interaction with the system, facilitating data-driven decision-making processed at the edge. The study highlights the importance of UX design guidelines aligned with system requirements to ensure accessibility, intuitiveness, and efficiency in embedded environments. The results demonstrate that using SLM in edge systems improves personalization and response speed, enhancing agricultural productivity. This work underscores the need for a reference framework for UX design in embedded systems, ensuring effective interactions and a greater impact on the efficiency of controlled crop environments.</div></div>","PeriodicalId":8417,"journal":{"name":"Array","volume":"29 ","pages":"Article 100632"},"PeriodicalIF":4.5,"publicationDate":"2025-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145684010","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Efficient crowd anomaly detection using C3D-LSTM networks with enhanced attention mechanisms
Pub Date: 2025-12-03 | DOI: 10.1016/j.array.2025.100625 | Array 29, Article 100625
Sarah Altowairqi, Suhuai Luo, Peter Greer, Shan Chen
The rising deployment of surveillance systems in urban environments necessitates efficient automated anomaly detection methods. While promising, current deep learning approaches struggle with computational complexity and real-time performance when processing spatiotemporal information. This paper presents a hybrid framework integrating Convolutional 3D Networks (C3D), Long Short-Term Memory (LSTM) networks, and attention mechanisms for anomaly detection. Through a systematic evaluation of four attention mechanisms (self-attention, multi-head attention, Bahdanau attention, and Luong attention), we demonstrate their operational differences and their differential impact on feature extraction and classification performance across three diverse benchmark datasets. Our multi-head attention variant achieves state-of-the-art results with 99.40% accuracy and 99.96% Area Under the Curve (AUC) on Violent Flows, while maintaining robust performance across varying dataset complexities, achieving 91.87% accuracy on ShanghaiTech Campus and 79.7% accuracy on UCF-Crime. Comprehensive cross-dataset evaluation demonstrates consistent improvements of 2.4%–3.5% over baseline approaches, with all attention mechanisms outperforming traditional spatiotemporal models. The proposed architecture effectively balances computational requirements with detection performance, maintaining real-time processing capabilities suitable for operational deployment. This framework advances the technical capabilities of anomaly detection systems while providing a validated foundation for practical deployment in diverse surveillance environments, from controlled scenarios to challenging real-world conditions.
{"title":"Efficient crowd anomaly detection using C3D-LSTM networks with enhanced attention mechanisms","authors":"Sarah Altowairqi , Suhuai Luo , Peter Greer , Shan Chen","doi":"10.1016/j.array.2025.100625","DOIUrl":"10.1016/j.array.2025.100625","url":null,"abstract":"<div><div>The rising deployment of surveillance systems in urban environments necessitates efficient automated anomaly detection methods. While showing promise, current deep learning approaches struggle with computational complexity and real-time performance in processing spatiotemporal information. This paper presents a hybrid framework integrating Convolutional 3D Networks (C3D), Long Short-Term Memory (LSTM) networks, and attention mechanisms for anomaly detection. Through a systematic evaluation of four attention mechanisms—self-attention, multi-head attention, Bahdanau attention, and Luong attention—we demonstrate their operational differences and their differential impact on feature extraction and classification performance across three diverse benchmark datasets. Our multi-head attention variant achieves state-of-the-art results with 99.40 % accuracy and 99.96 % Area Under the Curve (AUC) on Violent Flows, while maintaining robust performance across varying dataset complexities, achieving 91.87 % accuracy on the ShanghaiTech Campus and 79.7 % accuracy on the UCF-Crime dataset. Comprehensive cross-dataset evaluation demonstrates consistent improvements of 2.4 %–3.5 % over baseline approaches, with all attention mechanisms outperforming traditional spatiotemporal models. The proposed architecture effectively balances computational requirements with detection performance, maintaining real-time processing capabilities suitable for operational deployment. This framework advances the technical capabilities of anomaly detection systems while providing a validated foundation for practical deployment in diverse surveillance environments, from controlled scenarios to challenging real-world conditions.</div></div>","PeriodicalId":8417,"journal":{"name":"Array","volume":"29 ","pages":"Article 100625"},"PeriodicalIF":4.5,"publicationDate":"2025-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145684012","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A survey of lightweight methods for object detection networks
Pub Date: 2025-12-02 | DOI: 10.1016/j.array.2025.100589 | Array 29, Article 100589
Jing He, Jianfei Jiang, Changfan Zhang
As social production technologies develop, object detection is becoming vital in sectors such as agriculture, industry, and healthcare: it decreases dependence on manual labour and enhances accuracy and efficiency. However, edge devices face limitations in computational power, storage, and energy, creating a trade-off between accuracy and model size. To tackle this, academia and industry have proposed solutions including hardware-coordinated acceleration, adaptive task lightweighting, and hybrid compression. This paper reviews research from 2020 to 2025 on lightweight object detection, providing a systematic overview of efficient architectures and model compression techniques and explaining their mechanisms, challenges, and future directions to support ongoing progress.
{"title":"A survey of lightweight methods for object detection networks","authors":"Jing He, Jianfei Jiang, Changfan Zhang","doi":"10.1016/j.array.2025.100589","DOIUrl":"10.1016/j.array.2025.100589","url":null,"abstract":"<div><div>As social production technologies develop, object detection becomes vital in sectors such as agriculture, industry, and healthcare. It decreases dependence on manual labour and enhances accuracy and efficiency. However, edge devices confront limitations in computational power, storage, and energy, creating a trade-off between accuracy and model size. To tackle this, academia and industry have proposed solutions including hardware-coordinated acceleration, adaptive task lightweighting, and hybrid compression. This paper reviews research from 2020 to 2025 on lightweight object detection, providing a systematic overview of efficient architecture and model compression techniques, explaining their mechanisms, challenges, and future directions to support ongoing progress.</div></div>","PeriodicalId":8417,"journal":{"name":"Array","volume":"29 ","pages":"Article 100589"},"PeriodicalIF":4.5,"publicationDate":"2025-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145684011","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A systematic literature review on deep learning approaches for small object detection
Pub Date: 2025-12-02 | DOI: 10.1016/j.array.2025.100615 | Array 29, Article 100615
Javed Sayyad, Khush Attarde
Object detection is essential in several industries, including defense, autonomous vehicles, and surveillance. These applications rely on various camera-equipped devices, such as vehicles, drones, and satellites, which primarily operate in the visible spectral domain rather than infrared or other spectral ranges. Deep Learning (DL) techniques have significantly advanced object detection, enabling the identification of a wide variety of objects. However, detecting tiny objects remains challenging. Despite its difficulty, identifying small objects in images captured by these devices in the visible spectrum is crucial, and simple architectures often fall short, so hybrid techniques and modifications to feature architectures must be explored. This paper systematically reviews the DL-based approaches researchers have employed to tackle Small Object Detection (SOD), following the "Preferred Reporting Items for Systematic Reviews and Meta-Analyses" (PRISMA) methodology. It discusses various DL-based theoretical frameworks, including Reinforcement Learning and Generative Adversarial Networks, specifically for SOD in visible-spectrum images. The review begins by defining a small object and identifying the datasets available for applications such as remote sensing and autonomous vehicles. It then examines model implementations on these datasets and analyzes the findings of other researchers. The analysis reveals that, for most datasets, the average precision (AP) for SOD ranges from 20% to 40%, highlighting the need for further advances and focused attention.
{"title":"A systematic literature review on deep learning approaches for small object detection","authors":"Javed Sayyad, Khush Attarde","doi":"10.1016/j.array.2025.100615","DOIUrl":"10.1016/j.array.2025.100615","url":null,"abstract":"<div><div>Object detection is essential in several industries, including defense, autonomous vehicles, and surveillance. These applications rely on various devices equipped with cameras, such as vehicles, drones, and satellites; primarily operating in the visible spectral domain rather than infrared or other spectral ranges. Deep Learning (DL) techniques have significantly advanced the field of object detection, enabling the identification of various objects. However, detecting tiny objects remains a challenging task. Despite its difficulty, identifying small objects in images captured by these devices in the visible spectrum is crucial. It is essential to explore hybrid techniques and modifications in feature architectures to address the challenge of detecting tiny objects. Simple architectures often fall short in this regard, necessitating more sophisticated approaches. This paper systematically reviews different DL-based approaches researchers have previously employed to tackle this issue. A systematic literature review on SOD and DL techniques uses the ”Preferred Reporting Items for Systematic Reviews and Meta-Analysis” (PRISMA) methodology. It discusses various DL-based theoretical frameworks, including Reinforcement Learning and Generative Adversarial Networks, specifically for Small Object Detection (SOD) in visible spectral images. The review begins by defining a small object and identifying the datasets available for various applications, such as remote sensing and autonomous vehicles. It then examines the implementation of models according to these datasets and analyzes the findings from other researchers. The analysis reveals that, for most datasets, the average precision (AP) for SOD ranges from 20% to 40% and showcases the need for the advancement and focus.</div></div>","PeriodicalId":8417,"journal":{"name":"Array","volume":"29 ","pages":"Article 100615"},"PeriodicalIF":4.5,"publicationDate":"2025-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145921199","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A quantitative approach to modeling transcription factor binding specificity from DNA sequences
Pub Date: 2025-12-01 | DOI: 10.1016/j.array.2025.100607 | Array 28, Article 100607
Yonglin Zhang, Shiqi Wu, Runyu Jing, Jiesi Luo
Understanding the mechanisms governing transcription factor (TF) binding specificity remains a fundamental challenge in computational biology. This study aims to improve the prediction of TF binding specificity, addressing key limitations of current computational models, which often struggle to predict TF binding affinities accurately due to issues such as batch effects and non-uniform data distributions. Here, we present a novel approach that trains regression models on high-throughput SELEX (HT-SELEX) data, employing a deep forest algorithm to model the binding specificity of 215 mammalian TFs by integrating DNA sequence, composition, physicochemical properties, and structural features. The predicted binding affinities show strong concordance with experimental HT-SELEX measurements, confirming the model's accuracy and predictive reliability. Additionally, we apply this method to analyze the effects of genetic variations on TF binding and to identify TF binding sites across the genome. Both applications rely on the same underlying methodology, with cumulative distribution function (CDF)-based regression calibration improving the precision of predictions and ensuring reliable predictions of TF binding changes, both for sequence variations and for genome-wide binding patterns. This study enhances the accuracy of TF binding specificity predictions and provides deeper insights into gene regulation and the impact of genetic variations on TF binding, with important implications for understanding disease mechanisms.
{"title":"A quantitative approach to modeling transcription factor binding specificity from DNA sequences","authors":"Yonglin Zhang , Shiqi Wu , Runyu Jing , Jiesi Luo","doi":"10.1016/j.array.2025.100607","DOIUrl":"10.1016/j.array.2025.100607","url":null,"abstract":"<div><div>Understanding the mechanisms governing transcription factor (TF) binding specificity remains a fundamental challenge in computational biology. This study aims to improve the prediction of TF binding specificity, addressing key limitations in current computational models that often struggle to accurately predict TF binding affinities due to issues such as batch effects and non-uniform data distributions. Here, we present a novel approach that utilizes regression models trained on high-throughput SELEX (HT-SELEX) data, employing a deep forest algorithm to model the binding specificity of 215 mammalian TFs, integrating DNA sequences, composition, physicochemical properties, and structural features. The predicted binding affinities show strong concordance with experimental HT-SELEX measurements, confirming the model's accuracy and predictive reliability. Additionally, we apply this method to analyze the effects of genetic variations on TF binding, as well as to identify TF binding sites across the genome. Both applications rely on the same underlying methodology, with cumulative density function-based regression calibration improving the precision of predictions. This ensures reliable predictions of TF binding changes, both for sequence variations and genome-wide binding patterns. This study presents a novel approach that enhances the accuracy of TF binding specificity predictions and provides deeper insights into gene regulation and the impact of genetic variations on TF binding, with important implications for understanding gene regulation and disease mechanisms.</div></div>","PeriodicalId":8417,"journal":{"name":"Array","volume":"28 ","pages":"Article 100607"},"PeriodicalIF":4.5,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145614679","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}