
Journal of Imaging: Latest Publications

Machine Learning for Human Activity Recognition: State-of-the-Art Techniques and Emerging Trends.
IF 2.7 Q3 IMAGING SCIENCE & PHOTOGRAPHIC TECHNOLOGY Pub Date : 2025-03-20 DOI: 10.3390/jimaging11030091
Md Amran Hossen, Pg Emeroylariffion Abas

Human activity recognition (HAR) has emerged as a transformative field with widespread applications, leveraging diverse sensor modalities to accurately identify and classify human activities. This paper provides a comprehensive review of HAR techniques, focusing on the integration of sensor-based, vision-based, and hybrid methodologies. It explores the strengths and limitations of commonly used modalities, such as RGB images/videos, depth sensors, motion capture systems, wearable devices, and emerging technologies like radar and Wi-Fi channel state information. The review also discusses traditional machine learning approaches, including supervised and unsupervised learning, alongside cutting-edge advancements in deep learning, such as convolutional and recurrent neural networks, attention mechanisms, and reinforcement learning frameworks. Despite significant progress, HAR still faces critical challenges, including handling environmental variability, ensuring model interpretability, and achieving high recognition accuracy in complex, real-world scenarios. Future research directions emphasise the need for improved multimodal sensor fusion, adaptive and personalised models, and the integration of edge computing for real-time analysis. Additionally, addressing ethical considerations, such as privacy and algorithmic fairness, remains a priority as HAR systems become more pervasive. This study highlights the evolving landscape of HAR and outlines strategies for future advancements that can enhance the reliability and applicability of HAR technologies in diverse domains.
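As a concrete illustration of the sensor-based pipeline this review surveys, the sketch below segments a wearable-accelerometer stream into overlapping windows and extracts classic hand-crafted features (mean, standard deviation, energy). The window length, step, and toy trace are illustrative choices, not values from the paper:

```python
import math

def sliding_windows(signal, size, step):
    """Segment a 1-D sensor stream into overlapping fixed-length windows."""
    return [signal[i:i + size] for i in range(0, len(signal) - size + 1, step)]

def window_features(window):
    """Classic hand-crafted HAR features: mean, standard deviation, energy."""
    n = len(window)
    mean = sum(window) / n
    var = sum((x - mean) ** 2 for x in window) / n
    energy = sum(x * x for x in window) / n
    return (mean, math.sqrt(var), energy)

# Toy accelerometer-magnitude trace: a still segment followed by a periodic
# 'walking-like' segment (synthetic data for illustration only)
trace = [1.0] * 50 + [1.0 + 0.5 * math.sin(i / 2.0) for i in range(50)]
feats = [window_features(w) for w in sliding_windows(trace, size=25, step=25)]
```

Feature vectors like these feed the supervised classifiers discussed in the review; deep models such as CNNs instead learn the features directly from the raw windows.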

Journal of Imaging, 11(3). Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11943402/pdf/
Cited by: 0
Recovering Image Quality in Low-Dose Pediatric Renal Scintigraphy Using Deep Learning.
IF 2.7 Q3 IMAGING SCIENCE & PHOTOGRAPHIC TECHNOLOGY Pub Date : 2025-03-19 DOI: 10.3390/jimaging11030088
Marta Arsénio, Ricardo Vigário, Ana M Mota

The objective of this study is to propose an advanced image enhancement strategy to address the challenge of reducing radiation doses in pediatric renal scintigraphy. Data from a public dynamic renal scintigraphy database were used. Based on noisier images, four denoising neural networks (DnCNN, UDnCNN, DUDnCNN, and AttnGAN) were evaluated. To evaluate the quality of the noise reduction, with minimal detail loss, the kidney signal-to-noise ratio (SNR) and multiscale structural similarity (MS-SSIM) were used. Although all the networks reduced noise, UDnCNN achieved the best balance between SNR and MS-SSIM, leading to the most notable improvements in image quality. In clinical practice, 100% of the acquired data are summed to produce the final image. To simulate the dose reduction, we summed only 50%, simulating a proportional decrease in radiation. The proposed deep-learning approach for image enhancement ensured that half of all the frames acquired may yield results that are comparable to those of the complete dataset, suggesting that it is feasible to reduce patients' exposure to radiation. This study demonstrates that the neural networks evaluated can markedly improve the renal scintigraphic image quality, facilitating high-quality imaging with lower radiation doses, which will benefit the pediatric population considerably.
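The half-dose simulation described above (summing only 50% of the acquired frames instead of 100%) can be sketched as follows. The Gaussian-noise frames and the mean-over-std SNR definition are illustrative stand-ins, not the paper's data or exact metric:

```python
import random

random.seed(0)

def make_frame(signal=100.0, noise=10.0, n=64):
    """One noisy acquisition frame: constant 'kidney' signal plus noise."""
    return [signal + random.gauss(0.0, noise) for _ in range(n)]

def snr(image):
    """Signal-to-noise ratio: mean intensity over standard deviation."""
    n = len(image)
    mean = sum(image) / n
    std = (sum((x - mean) ** 2 for x in image) / n) ** 0.5
    return mean / std

def summed(frames):
    """Sum frames pixel-wise, as done to produce the final clinical image."""
    return [sum(col) for col in zip(*frames)]

frames = [make_frame() for _ in range(60)]
full_dose = summed(frames)       # 100% of frames: clinical practice
half_dose = summed(frames[::2])  # every other frame: simulated 50% dose
```

Because uncorrelated noise grows as the square root of the number of summed frames while the signal grows linearly, the half-dose image has a lower SNR; the paper's denoising networks aim to close that gap.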

Journal of Imaging, 11(3). Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11942829/pdf/
Cited by: 0
The Effect of Simulated Dose Reduction on the Performance of Artificial Intelligence in Chest Radiography.
IF 2.7 Q3 IMAGING SCIENCE & PHOTOGRAPHIC TECHNOLOGY Pub Date : 2025-03-19 DOI: 10.3390/jimaging11030090
Hendrik Erenstein, Wim P Krijnen, Annemieke van der Heij-Meijer, Peter van Ooijen

Chest imaging plays a pivotal role in screening and monitoring patients, and various predictive artificial intelligence (AI) models have been developed in support of this. However, little is known about the effect of decreasing the radiation dose, and thus image quality, on AI performance. This study aims to design a low-dose simulation and evaluate its effect on the performance of CNNs in plain chest radiography. Seven pathology labels and corresponding images from the Medical Information Mart for Intensive Care datasets were used to train AI models at two spatial resolutions. These 14 models were tested using the original images and the 50% and 75% low-dose simulations. We compared the area under the receiver operating characteristic curve (AUROC) of the original images and both simulations using DeLong testing. The average absolute change in AUROC related to simulated dose reduction for both resolutions was <0.005, and none exceeded a change of 0.014. Of the 28 test sets, 6 were significantly different. An assessment of predictions, performed by splitting the data by gender and patient positioning, showed a similar trend. The effect of simulated dose reductions on CNN performance, although significant in 6 of 28 cases, has minimal clinical impact. The effect of patient positioning exceeds that of dose reduction.
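AUROC, the metric compared across dose levels above, equals the probability that a randomly chosen positive case scores higher than a randomly chosen negative one. A minimal sketch via the rank-sum (Mann-Whitney U) formulation, on toy scores rather than the study's models:

```python
def auroc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U formulation:
    the fraction of positive/negative pairs where the positive scores
    higher, counting ties as one half."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.5, 0.3, 0.1]  # one positive ranked below a negative
area = auroc(labels, scores)             # 8 of 9 pairs correctly ordered
```

The DeLong test mentioned in the abstract then compares two such AUROC estimates on the same test set while accounting for their correlation.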

Journal of Imaging, 11(3). Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11943096/pdf/
Cited by: 0
Synergy of Art, Science, and Technology: A Case Study of Augmented Reality and Artificial Intelligence in Enhancing Cultural Heritage Engagement.
IF 2.7 Q3 IMAGING SCIENCE & PHOTOGRAPHIC TECHNOLOGY Pub Date : 2025-03-19 DOI: 10.3390/jimaging11030089
Ailin Chen, Rui Jesus, Márcia Vilarigues

In recent years, there has been growing interest in taking advantage of the technological progress in information technology and computer science to enhance the synergy between multidisciplinary organisations with a mutual objective of improving scientific knowledge and engaging society in cultural activities. Such an example of collaboration networks includes those where governmental, scientific and cultural institutions work in unison to provide services that support research through the use of technology while disseminating information and promoting cultural heritage. Here, we present a case study implementing the results of the work between multidisciplinary departments of the NOVA University Lisbon and third-party cultural heritage organisations. In particular, a mobile and desktop PC application uses augmented reality to showcase results obtained from analysis of artwork by Amadeo de Souza-Cardoso using artificial intelligence. The mobile application is intended to be used to enhance museum visitors' experience and strengthen the link between scientific, governmental, and heritage organisations.

Journal of Imaging, 11(3). Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11942812/pdf/
Cited by: 0
Automatic Segmentation of Plants and Weeds in Wide-Band Multispectral Imaging (WMI).
IF 2.7 Q3 IMAGING SCIENCE & PHOTOGRAPHIC TECHNOLOGY Pub Date : 2025-03-18 DOI: 10.3390/jimaging11030085
Sovi Guillaume Sodjinou, Amadou Tidjani Sanda Mahama, Pierre Gouton

Semantic segmentation in deep learning is a crucial area of research within computer vision, aimed at assigning specific labels to each pixel in an image. The segmentation of crops, plants, and weeds has significantly advanced the application of deep learning in precision agriculture, leading to the development of sophisticated architectures based on convolutional neural networks (CNNs). This study proposes a segmentation algorithm for identifying plants and weeds using broadband multispectral images. In the first part of this algorithm, we utilize the PIF-Net model for feature extraction and fusion. The resulting feature map is then employed to enhance an optimized U-Net model for semantic segmentation within a broadband system. Our investigation focuses specifically on scenes from the CAVIAR dataset of multispectral images. The proposed algorithm has enabled us to effectively capture complex details while regulating the learning process, achieving an impressive overall accuracy of 98.2%. The results demonstrate that our approach to semantic segmentation and the differentiation between plants and weeds yields accurate and compelling outcomes.
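The 98.2% overall accuracy reported above is a pixel-level agreement score between the predicted segmentation and a reference mask. A minimal sketch of how such a figure is computed, on toy 4x4 masks rather than the CAVIAR data (the class labels are illustrative):

```python
def overall_accuracy(pred, truth):
    """Fraction of pixels whose predicted class matches the reference mask."""
    flat_p = [c for row in pred for c in row]
    flat_t = [c for row in truth for c in row]
    assert len(flat_p) == len(flat_t), "masks must have the same shape"
    hits = sum(p == t for p, t in zip(flat_p, flat_t))
    return hits / len(flat_t)

# 0 = soil, 1 = plant, 2 = weed (hypothetical labels for illustration)
truth = [[0, 0, 1, 1],
         [0, 1, 1, 1],
         [2, 2, 0, 0],
         [2, 0, 0, 0]]
pred  = [[0, 0, 1, 1],
         [0, 1, 1, 0],   # one plant pixel missed
         [2, 2, 0, 0],
         [2, 0, 0, 0]]
acc = overall_accuracy(pred, truth)  # 15 of 16 pixels correct
```

Per-class metrics such as IoU are usually reported alongside overall accuracy, since a dominant background class can inflate the latter.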

Journal of Imaging, 11(3). Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11943369/pdf/
Cited by: 0
Deep Learning-Based Semantic Segmentation for Objective Colonoscopy Quality Assessment.
IF 2.7 Q3 IMAGING SCIENCE & PHOTOGRAPHIC TECHNOLOGY Pub Date : 2025-03-18 DOI: 10.3390/jimaging11030084
Radu Alexandru Vulpoi, Adrian Ciobanu, Vasile Liviu Drug, Catalina Mihai, Oana Bogdana Barboi, Diana Elena Floria, Alexandru Ionut Coseru, Andrei Olteanu, Vadim Rosca, Mihaela Luca

Background: This study aims to objectively evaluate the overall quality of colonoscopies using a specially trained deep learning-based semantic segmentation neural network. This represents a modern and valuable approach for the analysis of colonoscopy frames. Methods: We collected thousands of colonoscopy frames extracted from a set of video colonoscopy files. A color-based image processing method was used to extract color features from specific regions of each colonoscopy frame, namely, the intestinal mucosa, residues, artifacts, and lumen. With these features, we automatically annotated all the colonoscopy frames and then selected the best of them to train a semantic segmentation network. This trained network was used to classify the four region types in a different set of test colonoscopy frames and extract pixel statistics that are relevant to quality evaluation. The test colonoscopies were also evaluated by colonoscopy experts using the Boston scale. Results: The deep learning semantic segmentation method obtained good results in classifying the four key regions in colonoscopy frames and produced pixel statistics that are effective for objective quality assessment. The Spearman correlation results were as follows: BBPS vs. pixel scores: 0.69; BBPS vs. mucosa pixel percentage: 0.63; BBPS vs. residue pixel percentage: -0.47; BBPS vs. artifact pixel percentage: -0.65. The agreement analysis using Cohen's Kappa yielded a value of 0.28. The colonoscopy evaluation based on the extracted pixel statistics showed a fair level of compatibility with the experts' evaluations. Conclusions: Our proposed deep learning semantic segmentation approach is shown to be a promising tool for evaluating the overall quality of colonoscopies and goes beyond the Boston Bowel Preparation Scale in terms of assessing colonoscopy quality. In particular, while the Boston scale focuses solely on the amount of residual content, our method can identify and quantify the percentage of colonic mucosa, residues, and artifacts, providing a more comprehensive and objective evaluation.
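The Spearman correlations quoted in the Results can be reproduced with the rank-difference formula. A minimal sketch on toy BBPS-versus-pixel-score data (the values are illustrative, and this simple form assumes no tied ranks):

```python
def spearman_rho(x, y):
    """Spearman rank correlation for tie-free data:
    rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1)), d = rank difference."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

bbps = [3, 6, 9, 2, 7]               # toy Boston scale totals
pixel_score = [40, 90, 55, 35, 60]   # toy segmentation-derived pixel scores
rho = spearman_rho(bbps, pixel_score)
```

Rank correlation is a natural choice here because the BBPS is ordinal; a monotone but nonlinear relationship with the pixel statistics would still score highly.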

Journal of Imaging, 11(3). Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11943454/pdf/
Cited by: 0
Analysis of Dynamic Changes in Sedimentation in the Coastal Area of Amir-Abad Port Using High-Resolution Satellite Images.
IF 2.7 Q3 IMAGING SCIENCE & PHOTOGRAPHIC TECHNOLOGY Pub Date : 2025-03-18 DOI: 10.3390/jimaging11030086
Ali Sam-Khaniani, Giacomo Viccione, Meisam Qorbani Fouladi, Rahman Hesabi-Fard

Sediment transport and the resulting morphodynamic evolution of the shoreline are key indicators of a coastal structure's operational continuity. To reduce the computational costs associated with sediment transport modelling tools, a novel procedure is presented here that combines a support vector machine for image classification with a trained neural network that extrapolates the shore evolution. The study focuses on the coastal area around the Amir-Abad port, using high-resolution satellite images. The real conditions of the study domain between 2004 and 2023 are analysed, with the aim of investigating changes in the shore area, shoreline position, and sediment appearance in the harbour basin. The measurements show that the sediment accumulation area increases by approximately 49,000 m²/y. A portion of the longshore sediment load is also trapped and deposited in the harbour basin, disrupting the normal operation of the port. Satellite images were then used to quantitatively analyse shoreline changes, and a neural network was trained to predict the remaining time (less than a decade) until the basin behind the west arm of the rubble-mound breakwaters is filled. Harbour utility services will no longer be offered if actions are not taken to prevent sediment accumulation.
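The "less than a decade" horizon follows from linearly extrapolating the measured accumulation rate over the remaining open area. A sketch of that arithmetic: only the 49,000 m²/y rate comes from the abstract, while the remaining-area figure below is a hypothetical value for illustration:

```python
ACCUMULATION_RATE_M2_PER_YEAR = 49_000  # measured trend reported in the study

def years_until_filled(free_area_m2, rate=ACCUMULATION_RATE_M2_PER_YEAR):
    """Linear extrapolation: remaining open area / annual sediment growth."""
    return free_area_m2 / rate

# Hypothetical remaining open-water area behind the west breakwater arm
remaining_m2 = 400_000  # an assumed figure, not from the paper
estimate = years_until_filled(remaining_m2)  # ~8 years at the measured rate
```

The paper's neural network refines this kind of first-order estimate by learning the shore evolution from the 2004-2023 satellite record rather than assuming a constant rate.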

Journal of Imaging, 11(3). Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11942754/pdf/
Cited by: 0
Advances in Optical Contrast Agents for Medical Imaging: Fluorescent Probes and Molecular Imaging.
IF 2.7 Q3 IMAGING SCIENCE & PHOTOGRAPHIC TECHNOLOGY Pub Date : 2025-03-18 DOI: 10.3390/jimaging11030087
Divya Tripathi, Mayurakshi Hardaniya, Suchita Pande, Dipak Maity

Optical imaging is an excellent non-invasive method for viewing visceral organs. Most importantly, it is safer as compared to ionizing radiation-based methods like X-rays. By making use of the properties of photons, this technique generates high-resolution images of cells, molecules, organs, and tissues using visible, ultraviolet, and infrared light. Moreover, optical imaging enables real-time evaluation of soft tissue properties, metabolic alterations, and early disease markers by utilizing a variety of techniques, including fluorescence and bioluminescence. Innovative biocompatible fluorescent probes that may provide disease-specific optical signals are being used to improve diagnostic capabilities in a variety of clinical applications. However, despite these promising advancements, several challenges remain unresolved. The primary obstacles include the difficulty of developing efficient fluorescent probes and tissue autofluorescence, which complicates signal detection. Furthermore, the depth penetration restrictions of several imaging modalities limit their use in imaging of deeper tissues. Additionally, enhancing biocompatibility, boosting fluorescent probe signal-to-noise ratios, and utilizing cutting-edge imaging technologies like machine learning for better image processing should be the main goals of future research. Overcoming these challenges and establishing optical imaging as a fundamental component of modern medical diagnoses and therapeutic treatments would require cooperation between scientists, physicians, and regulatory bodies.

{"title":"Advances in Optical Contrast Agents for Medical Imaging: Fluorescent Probes and Molecular Imaging.","authors":"Divya Tripathi, Mayurakshi Hardaniya, Suchita Pande, Dipak Maity","doi":"10.3390/jimaging11030087","DOIUrl":"10.3390/jimaging11030087","url":null,"abstract":"<p><p>Optical imaging is an excellent non-invasive method for viewing visceral organs. Most importantly, it is safer as compared to ionizing radiation-based methods like X-rays. By making use of the properties of photons, this technique generates high-resolution images of cells, molecules, organs, and tissues using visible, ultraviolet, and infrared light. Moreover, optical imaging enables real-time evaluation of soft tissue properties, metabolic alterations, and early disease markers in real time by utilizing a variety of techniques, including fluorescence and bioluminescence. Innovative biocompatible fluorescent probes that may provide disease-specific optical signals are being used to improve diagnostic capabilities in a variety of clinical applications. However, despite these promising advancements, several challenges remain unresolved. The primary obstacle includes the difficulty of developing efficient fluorescent probes, and the tissue autofluorescence, which complicates signal detection. Furthermore, the depth penetration restrictions of several imaging modalities limit their use in imaging of deeper tissues. Additionally, enhancing biocompatibility, boosting fluorescent probe signal-to-noise ratios, and utilizing cutting-edge imaging technologies like machine learning for better image processing should be the main goals of future research. 
Overcoming these challenges and establishing optical imaging as a fundamental component of modern medical diagnoses and therapeutic treatments would require cooperation between scientists, physicians, and regulatory bodies.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"11 3","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11942650/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143711577","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Battle Royale Optimization for Optimal Band Selection in Predicting Soil Nutrients Using Visible and Near-Infrared Reflectance Spectroscopy and PLSR Algorithm.
IF 2.7 Q3 IMAGING SCIENCE & PHOTOGRAPHIC TECHNOLOGY Pub Date : 2025-03-17 DOI: 10.3390/jimaging11030083
Jagadeeswaran Ramasamy, Anand Raju, Kavitha Krishnasamy Ranganathan, Muthumanickam Dhanaraju, Backiyathu Saliha, Kumaraperumal Ramalingam, Sathishkumar Samiappan

An attempt was made to quantify soil properties using hyperspectral remote-sensing techniques and machine-learning algorithms. In total, 100 soil samples representing various locations and soil-nutrient statuses were collected, and the samples were analyzed for soil pH, EC, soil organic carbon, available nitrogen (AN), available phosphorus (AP), and available potassium (AK) following standard methods. The soil had a wide range of properties: pH varied from 5.62 to 8.49, EC from 0.08 to 1.78 dS/m, soil organic carbon from 0.23 to 0.94%, available nitrogen from 154 to 344 kg/ha, available phosphorus from 9.5 to 25.5 kg/ha, and available potassium from 131 to 747 kg/ha. The same set of soil samples was subjected to spectral reflectance measurement using an SVC GER 1500 spectroradiometer (spectral range: 350 to 1050 nm). The measured spectral signatures of the various soils were organized into a spectral library and used to derive spectral indices that correlate with soil properties for nutrient quantification. The soil samples were partitioned in a 60:40 ratio for training and validation, respectively. To select optimum bands (wavelengths) from the soil spectra, we employed metaheuristic algorithms, i.e., Particle Swarm Optimization (PSO), Moth-Flame Optimization (MFO), Flower Pollination Optimization (FPO), and the Battle Royale Optimization (BRO) algorithm. Partial least squares regression (PLSR) was then used to find the latent variables and to evaluate the algorithms' performance in predicting soil properties. The results indicated that nutrients could be quantified from spectral reflectance measurements with fair to good accuracy through the Battle Royale Optimization technique, with R² values of 0.45, 0.32, 0.48, 0.21, 0.71, and 0.35 for pH, EC, soil organic carbon, available N, available P, and available K, respectively.
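The pipeline the abstract describes — a metaheuristic proposing band subsets, scored by a regression model on a held-out split — can be sketched as follows. This is a minimal, numpy-only illustration on synthetic spectra: a simplified elimination-and-respawn loop stands in for the full BRO algorithm, an ordinary-least-squares fit stands in for PLSR, and all data and parameter values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in spectra: 100 samples x 256 bands, one soil property.
# (The study used SVC GER 1500 reflectance spectra, 350-1050 nm.)
n_samples, n_bands, n_select = 100, 256, 8
X = rng.normal(size=(n_samples, n_bands))
true_bands = rng.choice(n_bands, n_select, replace=False)
y = X[:, true_bands].sum(axis=1) + 0.1 * rng.normal(size=n_samples)

# 60:40 train/validation split, as in the study.
split = int(0.6 * n_samples)
Xtr, Xva, ytr, yva = X[:split], X[split:], y[:split], y[split:]

def fitness(bands):
    """Validation R^2 of a linear model fit on the selected bands
    (ordinary least squares stands in for PLSR to stay numpy-only)."""
    A = np.c_[Xtr[:, bands], np.ones(len(Xtr))]
    coef, *_ = np.linalg.lstsq(A, ytr, rcond=None)
    pred = np.c_[Xva[:, bands], np.ones(len(Xva))] @ coef
    return 1.0 - np.sum((yva - pred) ** 2) / np.sum((yva - yva.mean()) ** 2)

# Battle-Royale-flavoured search (simplified): keep a population of candidate
# band subsets; each round the worst performer is eliminated and respawns as a
# mutated copy of the current best, so the best subset found is never lost.
pop = [rng.choice(n_bands, n_select, replace=False) for _ in range(12)]
for _ in range(60):
    scores = [fitness(p) for p in pop]
    best = pop[int(np.argmax(scores))]
    child = best.copy()
    child[rng.integers(n_select)] = rng.integers(n_bands)  # perturb one band
    pop[int(np.argmin(scores))] = child

best_subset = max(pop, key=fitness)
print("selected bands:", sorted(int(b) for b in best_subset))
print("validation R^2: %.3f" % fitness(best_subset))
```

In practice `fitness` would wrap an actual PLSR fit (e.g. scikit-learn's `PLSRegression`) scored on the validation set, and the mutation step would follow the full BRO damage/respawn rules rather than a single random band swap.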

Morphodynamic Features of Contrast-Enhanced Mammography and Their Correlation with Breast Cancer Histopathology.
IF 2.7 Q3 IMAGING SCIENCE & PHOTOGRAPHIC TECHNOLOGY Pub Date : 2025-03-13 DOI: 10.3390/jimaging11030080
Claudio Ventura, Marco Fogante, Elisabetta Marconi, Barbara Franca Simonetti, Silvia Borgoforte Gradassi, Nicola Carboni, Enrico Lenti, Giulio Argalia

Contrast-enhanced mammography (CEM) combines morphological and functional imaging, enhancing breast cancer (BC) diagnosis. This study investigates the relationship between CEM morphodynamic features and histopathological characteristics of BC. In this prospective study, 50 female patients (mean age: 57.2 ± 13.7 years) with BI-RADS 4-5 lesions underwent CEM followed by surgical excision between December 2022 and May 2024. Low-energy and recombined CEM images were analyzed for breast composition, lesion characteristics, and enhancement patterns, while histopathological evaluation included tumor size, histotype, grade, lymphovascular invasion, and immunophenotype. Spearman rank correlation and multivariable regression analysis were used to evaluate the relationship between CEM findings and histopathological characteristics. Tumor size on CEM strongly correlated with histopathological tumor size (ρ = 0.788, p < 0.001) and was associated with high-grade lesions (p = 0.017). Non-circumscribed margins were linked to a Luminal-B subtype (p = 0.001), while high lesion conspicuity was associated with Luminal-B and triple-negative BC (p = 0.001) and correlated with larger tumors (ρ = 0.517, p < 0.001). Background parenchymal enhancement was negatively correlated with age (ρ = -0.286, p = 0.049). CEM provides critical insights into BC, demonstrating a significant relationship between imaging features and histopathological characteristics. These findings highlight CEM's potential as a reliable tool for tumor size estimation, subtype characterization, and prognostic assessment, suggesting its role as an alternative to MRI, particularly for patients with contraindications.
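Tumor-size agreement of the kind reported here (ρ = 0.788) is measured with the Spearman rank correlation, i.e., the Pearson correlation of the ranks. A small numpy sketch — the paired measurements below are invented for illustration, not data from the study:

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks,
    with average ranks assigned to ties (as scipy.stats.spearmanr does)."""
    def rank(v):
        order = np.argsort(v)
        ranks = np.empty(len(v))
        ranks[order] = np.arange(1, len(v) + 1)
        for val in np.unique(v):          # average ranks over tied values
            tied = v == val
            ranks[tied] = ranks[tied].mean()
        return ranks
    rx, ry = rank(np.asarray(x, float)), rank(np.asarray(y, float))
    rx -= rx.mean()
    ry -= ry.mean()
    return float(rx @ ry / np.sqrt((rx @ rx) * (ry @ ry)))

# Hypothetical paired tumor sizes (mm): CEM measurement vs. histopathology.
cem  = [12, 18, 25, 9, 30, 22, 15, 40]
hist = [11, 20, 24, 10, 33, 17, 21, 38]
print(round(spearman_rho(cem, hist), 3))  # → 0.905
```

Because the correlation is computed on ranks rather than raw values, it captures any monotone agreement between the two measurements and is robust to the skewed size distributions typical of lesion data.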

{"title":"Morphodynamic Features of Contrast-Enhanced Mammography and Their Correlation with Breast Cancer Histopathology.","authors":"Claudio Ventura, Marco Fogante, Elisabetta Marconi, Barbara Franca Simonetti, Silvia Borgoforte Gradassi, Nicola Carboni, Enrico Lenti, Giulio Argalia","doi":"10.3390/jimaging11030080","DOIUrl":"10.3390/jimaging11030080","url":null,"abstract":"<p><p>Contrast-enhanced mammography (CEM) combines morphological and functional imaging, enhancing breast cancer (BC) diagnosis. This study investigates the relationship between CEM morphodynamic features and histopathological characteristics of BC. In this prospective study, 50 female patients (mean age: 57.2 ± 13.7 years) with BI-RADS 4-5 lesions underwent CEM followed by surgical excision between December 2022 and May 2024. Low-energy and recombined CEM images were analyzed for breast composition, lesion characteristics, and enhancement patterns, while histopathological evaluation included tumor size, histotype, grade, lymphovascular invasion, and immunophenotype. Spearman rank correlation and multivariable regression analysis were used to evaluate the relationship between CEM findings and histopathological characteristics. Tumor size on CEM strongly correlated with histopathological tumor size (ρ = 0.788, <i>p</i> < 0.001) and was associated with high-grade lesions (<i>p</i> = 0.017). Non-circumscribed margins were linked to a Luminal-B subtype (<i>p</i> = 0.001), while high lesion conspicuity was associated with Luminal-B and triple-negative BC (<i>p</i> = 0.001) and correlated with larger tumors (ρ = 0.517, <i>p</i> < 0.001). Background parenchymal enhancement was negatively correlated with age (ρ = -0.286, <i>p</i> = 0.049). CEM provides critical insights into BC, demonstrating significant relationship between imaging features and histopathological characteristics. 
These findings highlight CEM's potential as a reliable tool for tumor size estimation, subtype characterization, and prognostic assessment, suggesting its role as an alternative to MRI, particularly for patients with contraindications.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"11 3","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11942963/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143711392","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0