Pub Date : 2026-01-05 DOI: 10.1088/2057-1976/ae300a
PGMNet: a polyp segmentation network based on bit-plane slicing and multi-scale adaptive fusion.
Dong Wang, Shan Lin Liu, Shuai Li, Hai Sha Liu, Yu Ling Heng Wang
Accurate detection and segmentation of polyps during colonoscopy are critical for the early prevention and treatment of colorectal cancer. However, because polyps vary considerably in size and shape and their boundaries with surrounding tissue are often blurred, they are difficult to detect and precise segmentation remains challenging. Although numerous deep learning (DL) based segmentation methods have been proposed in recent years and have made progress, their results remain unstable and often unsatisfactory. To address these challenges, we propose PGMNet, an accurate and efficient polyp segmentation network consisting of a PVTv2 encoder, a Global-Local Interactive Relation Module (GLIRM), and a Multi-stage Feature Aggregation Module (MFAM). The PVTv2 encoder captures both fine-grained details and global semantic representations, making it well-suited to complex medical image segmentation tasks. GLIRM performs multi-scale information fusion during upsampling to restore fine-grained detail and global semantic context, while a bit-slice mechanism suppresses noise. MFAM uses a gating mechanism to aggregate GLIRM outputs from different stages, improving the quality of the final predictions. Extensive experiments on five publicly available polyp datasets show that PGMNet achieves strong segmentation accuracy and generalization ability. In particular, on the challenging ETIS dataset, PGMNet achieves an mDice of 82.33% and an mIoU of 74.29%.
{"title":"PGMNet: a polyp segmentation network based on bit-plane slicing and multi-scale adaptive fusion.","authors":"Dong Wang, Shan Lin Liu, Shuai Li, Hai Sha Liu, Yu Ling Heng Wang","doi":"10.1088/2057-1976/ae300a","DOIUrl":"10.1088/2057-1976/ae300a","url":null,"abstract":"<p><p>Accurate detection and segmentation of polyps during colonoscopy are of great significance for the early prevention and treatment of colorectal cancer. However, due to the considerable variations in polyp size and shape, as well as their blurred boundaries with surrounding tissues, polyps are often difficult to detect, making precise segmentation a challenging task. Although numerous deep learning (DL) based segmentation methods have been proposed in recent years and achieved certain progress, their results remain unstable and often unsatisfactory. To address these challenges, we propose PGMNet, an accurate and efficient network for polyp segmentation, which consists of a PVTv2 encoder, a Global-Local Interactive Relation Module (GLIRM), and a Multi-stage Feature Aggregation Module (MFAM). The PVTv2 encoder is capable of capturing both fine-grained details and global semantic representations, making it well-suited for complex medical image segmentation tasks. GLIRM performs multi-scale information fusion during upsampling to restore fine-grained details and global semantic context, while simultaneously introducing a bit-slice mechanism to effectively suppress noise. MFAM leverages a gating mechanism to efficiently aggregate GLIRM information from different stages, thereby improving the quality of the final predictions.Extensive experiments were conducted on five publicly available polyp datasets, and the results demonstrate that PGMNet achieved very promising performance in terms of segmentation accuracy and generalization ability. In particular, on the challenging ETIS dataset, PGMNet achieved an mDice of 82.33% and an mIoU of 74.29%, highlighting its superior performance.</p>","PeriodicalId":8896,"journal":{"name":"Biomedical Physics & Engineering Express","volume":" ","pages":""},"PeriodicalIF":1.6,"publicationDate":"2026-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145809201","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-12-29 DOI: 10.1088/2057-1976/ae2c8f
Hybrid BCI-based instruction set for dual robotic arm control using EEG and eye movement signals.
Lingyue Zhang, Baojiang Li, Xingbin Shi, Cheng Peng
A brain-computer interface (BCI) establishes a pathway for information transmission between a human (or animal) and an external device. It can be used to control devices such as prosthetic limbs and robotic arms, which in turn assist, rehabilitate, and enhance human limb function. Although most studies focus on brain-signal acquisition, feature extraction and recognition, and on using brain signals to control external devices, noninvasive approaches yield fewer and less robust features, which makes it difficult to directly control devices with many degrees of freedom, such as robotic arms. To address these issues, we propose an extended instruction set based on motor imagery that fuses eye-movement signals and electroencephalogram (EEG) signals for motion control of a dual collaborative robotic arm. The method incorporates spatio-temporal convolution and attention mechanisms for brain-signal classification. Starting from a small base of control commands, the hybrid BCI combining eye-movement signals and EEG expands the command set, enabling motion control of the dual collaborative manipulator. On the Webots simulation platform, we carried out kinematic control and three-dimensional motion simulation of a dual 6-degree-of-freedom collaborative robotic arm (UR3e). The experimental results demonstrate the feasibility of the proposed method. Our algorithm achieves an average accuracy of 83.8% with only 8.8k parameters, and the simulation results are within the expected range. The results demonstrate that the proposed extended instruction set based on motor imagery is effective not only for controlling dual collaborative robotic arms to perform grasping tasks in complex scenarios, but also for operating other multi-degree-of-freedom peripheral devices.
Pub Date : 2025-12-29 DOI: 10.1088/2057-1976/ae291d
SDMFFN: a novel specular detection median filtering fusion network for specular reflection removal in endoscopic images.
Jian Zhang, Ze Ji, Changdong Zhao, Meng Huang, Ming Li, Heng Zhang
Objective. Endoscopic imaging is vital in Minimally Invasive Surgery (MIS), but its utility is often compromised by specular reflections that obscure important details and hinder diagnostic accuracy. Existing methods for removing these reflections face limitations, particularly those relying on color-based thresholding, and deep learning remains underutilized for highlight detection. Approach. To tackle these challenges, we propose the Specular Detection Median Filtering Fusion Network (SDMFFN), a novel framework designed to detect and remove specular reflections in endoscopic images. SDMFFN employs a two-stage process: detection and removal. In the detection phase, an enhanced Specular Transformer Unet (S-TransUnet) integrating Atrous Spatial Pyramid Pooling (ASPP), an Information Bottleneck (IB), and the Convolutional Block Attention Module (CBAM) optimizes multi-scale feature extraction for accurate highlight detection. In the removal phase, an improved median filtering scheme smooths reflective areas and integrates color information for natural restoration. Main results. Experimental results show that the proposed SDMFFN outperforms other methods. By delivering high-quality, reflection-free endoscopic images, it improves visual clarity and diagnostic precision, ultimately enhancing surgical outcomes and reducing the risk of misdiagnosis. Significance. The robust performance of SDMFFN suggests its adaptability to other medical imaging modalities, paving the way for broader clinical and research applications in robotic surgery, diagnostic endoscopy and telemedicine. To promote further progress, the code will be made publicly available at https://github.com/jize123457/SDMFFN.
{"title":"SDMFFN: a novel specular detection median filtering fusion network for specular reflection removal in endoscopic images.","authors":"Jian Zhang, Ze Ji, Changdong Zhao, Meng Huang, Ming Li, Heng Zhang","doi":"10.1088/2057-1976/ae291d","DOIUrl":"10.1088/2057-1976/ae291d","url":null,"abstract":"<p><p><i>Objective</i>. Endoscopic imaging is vital in Minimally Invasive Surgery (MIS), but its utility is often compromised by specular reflections that obscure important details and hinder diagnostic accuracy. Existing methods to address these reflections face limitations, particularly those relying on color-based thresholding and the underutilization of deep learning for highlight detection.<i>Approach</i>. To tackle these challenges, we propose the Specular Detection Median Filtering Fusion Network (SDMFFN), a novel framework designed to detect and remove specular reflections in endoscopic images. The SDMFFN employs a two-stage process: detection and removal. In the detection phase, we utilize the enhanced Specular Transformer Unet (S-TransUnet) model integrating Atrous Spatial Pyramid Pooling (ASPP), Information Bottleneck (IB) and Convolutional Block Attention Module (CBAM) to optimize multi-scale feature extraction, which helps to achieve accurate highlight detection. In the removal phase, we improve the advanced median filtering to smooth reflective areas and integrate color information for a natural restoration.<i>Main results</i>. Experimental results show that our proposed SDMFFN has outperformed other methods. Our method improves visual clarity and diagnostic precision, ultimately enhancing surgical outcomes and reducing the risk of misdiagnosis by delivering high-quality, reflection-free endoscopic images.<i>Significance</i>. The robust performance of SDMFFN suggests its adaptability to other medical imaging modalities, paving the way for broader clinical and research applications in robotic surgery, diagnostic endoscopy and telemedicine. To promote further progress in the research, we will make the code publicly available at:https://github.com/jize123457/SDMFFN.</p>","PeriodicalId":8896,"journal":{"name":"Biomedical Physics & Engineering Express","volume":" ","pages":""},"PeriodicalIF":1.6,"publicationDate":"2025-12-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145707262","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-12-24 DOI: 10.1088/2057-1976/ae2ae2
Design of a analog front-end for high-precision acquiring excitatory postsynaptic field potentials in the hippocampal Schaffer-CA1 neuronal pathway.
Yu Zheng, Jiayi Pang, Rujuan Song, Qiwen Liu, Jiayi Wang, Lei Dong
Field excitatory postsynaptic potentials (fEPSPs) play a crucial role in neural signal transmission and synaptic plasticity, and achieving high-precision acquisition and long-term reliable recording of neuronal fEPSPs is a key challenge. This paper presents the design of an FPGA-based analog front-end (AFE) system for the Schaffer-CA1 pyramidal neuron pathway in the hippocampus. The system employs a capacitance-free chopper front-end amplifier with a current-balanced architecture and a digitally controlled two-stage amplifier to achieve dynamic gain adjustment. A digital FIR filter combined with the filtfilt algorithm implements zero-phase filtering. Experimental evaluations of long-term stability, frequency response, and dynamic response demonstrate that the AFE can accurately acquire weak signals in the range of 160-360 μV. It achieves a high gain of 72-74 dB within the 1-300 Hz frequency band, with a theoretical gain error of less than 2.5%. Based on this system, fEPSP acquisition experiments were conducted on Schaffer-CA1 synapses in ex vivo hippocampal slices. The results show that the AFE accurately captures fEPSPs and long-term potentiation (LTP) before and after induction. Compared with commercial MEA systems, the normalized amplitude difference was less than 5%, the correlation coefficient was greater than 0.82, and the normalized mean square error was less than 0.01. These results confirm that the designed AFE meets the requirements for precise acquisition and stable long-term recording of neuronal fEPSP signals.
Pub Date : 2025-12-24 DOI: 10.1088/2057-1976/ae2b76
Early detection of paroxysmal atrial fibrillation from non-episodic ECG data using cardiac dynamics features and different classification models.
Kengren Chen, Muqing Deng, Dehua Huang, Dandan Liang, Yanjiao Wang, Xiaoyu Huang
Objective. Intelligent computer-aided diagnosis techniques enable inspection of invisible electrocardiogram (ECG) pathological changes for early detection of latent heart disease. This study concentrates on latent pathological changes within non-episodic ECG data and describes a cardiac dynamics based methodology for the detection of paroxysmal atrial fibrillation (PAF). Approach. Three-dimensional dominant components of routine 12-lead ECG signals are extracted without complex signal segmentation. Cardiac dynamics features are captured using a deterministic learning algorithm and represented as a three-dimensional graphic. This nonlinear dynamics representation is shown to have high discriminative power for PAF detection even before pathological changes can be observed visibly in the ECG. Nonlinear dynamics measures are then extracted and fed into different machine learning methods for the PAF detection task. Suspected PAF patients undergoing Holter monitoring are studied; cardiac dynamics measures are calculated simultaneously with the routine resting ECG examination, with Holter monitoring results collected as the gold standard. Main results. The proposed method yielded a sensitivity of 97%, a specificity of 91%, and an overall accuracy of 92%. Significance. Abnormal cardiac dynamics induced by PAF can be detected using cardiac dynamics features and different classification models before obvious pathological changes are present. The proposed method is expected to provide a complementary tool to the commonly used ECG examination for PAF detection, which is crucial for identifying patients at risk of latent PAF.
Pub Date : 2025-12-24 DOI: 10.1088/2057-1976/ae2c8d
Electrospun gelatin/PCL nanofibers incorporating curcumin loaded hydroxyapatite: a dual function antibacterial wound dressing for controlled drug release and accelerated skin repair.
Diba Dadkhah, Homeira Zare Chavoshy, Negar Nasri, Razieh Ghasemi
In the present study, electrospinning was used to create a new wound dressing in which curcumin was encapsulated in hydroxyapatite (HA) nanoparticles and prepared as a nanocomposite in a gelatin and polycaprolactone (PCL) solution. The physicochemical and biological properties of the prepared wound dressing were evaluated under laboratory conditions. The findings demonstrated that curcumin-HA increases tensile strength and elongation at break while decreasing the elastic modulus, and that adding the curcumin-HA structure to PCL significantly improves swelling capacity and degradation rate. A disk diffusion test on Staphylococcus aureus and Escherichia coli confirmed the antibacterial properties of the wound dressing. In addition, sustained release of curcumin for up to 15 days was achieved in the gelatin (curcumin-HA)/PCL nanofibers, a favorable property for wound-dressing performance. According to in vitro cell viability tests on the L929 fibroblast cell line, the gelatin (curcumin-HA)/PCL nanofibers showed no cytotoxicity and improved the cell repair process within three days, confirming their potential for use as wound dressings.
Pub Date : 2025-12-22 DOI: 10.1088/2057-1976/ae2b77
SSMCE: A semi-supervised learning framework for myocardial segmentation in myocardial contrast echocardiography.
Yuxiang Duan, Jili Long, Shunyi Zhao, Hao Wang, Jun Qian
Accurate myocardial segmentation in myocardial contrast echocardiography (MCE) images remains challenging due to the scarcity of publicly available labeled datasets and the pervasive presence of speckle noise. Currently, echocardiographers must manually delineate myocardial contours, a clinical workflow step that is both labor-intensive and prone to variability. To address these limitations, we propose SSMCE, a novel semi-supervised learning framework specifically designed for myocardial segmentation in MCE images. The proposed framework adopts a tri-model architecture comprising two structurally distinct student models and an adaptively assembled teacher model. This design inherently introduces model-level perturbations to promote output diversity, thereby reducing overfitting and improving generalization. In addition, a specialized loss function guides the model's self-correction behavior by increasing uncertainty in misclassified regions and reinforcing confidence in accurate ones, facilitating convergence. Experimental results on our self-constructed dataset demonstrate that the proposed loss function improves the primary evaluation metric by 1.75%. Furthermore, the proposed method achieves state-of-the-art performance compared with existing approaches. The results demonstrate that SSMCE provides a robust and efficient approach for rapid myocardial detection and precise segmentation, offering significant potential to streamline clinical workflows in MCE imaging.
{"title":"SSMCE: A semi-supervised learning framework for myocardial segmentation in myocardial contrast echocardiography.","authors":"Yuxiang Duan, Jili Long, Shunyi Zhao, Hao Wang, Jun Qian","doi":"10.1088/2057-1976/ae2b77","DOIUrl":"10.1088/2057-1976/ae2b77","url":null,"abstract":"<p><p>Accurate myocardial segmentation in myocardial contrast echocardiography (MCE) images remains challenging due to the scarcity of publicly available labeled datasets and the pervasive presence of speckle noise.Currently, echocardiographers must manually delineate myocardial contours, a clinical workflow step that is both labor-intensive and prone to variability. To address these limitations, we propose SSMCE, a novel semi-supervised learning framework specifically designed for myocardial segmentation in MCE images. The proposed framework adopts a tri-model architecture comprising two structurally distinct student models and an adaptively assembled teacher model. This design inherently introduces model-level perturbations to promote output diversity, thereby reducing overfitting and improving generalization performance. In addition, a specialized loss function is designed to guide the model's self-correction behavior by increasing uncertainty in misclassified bias regions and reinforcing confidence in accurate ones, facilitating convergence. Experimental results on our self-constructed dataset demonstrate that the proposed loss function improves the primary evaluation metric by 1.75%. Furthermore, the proposed method achieves state-of-the-art performance when compared with existing approaches. The results demonstrate that SSMCE provides a robust and efficient approach for rapid myocardial detection and precise segmentation, offering significant potential to streamline clinical workflows in MCE imaging.</p>","PeriodicalId":8896,"journal":{"name":"Biomedical Physics & Engineering Express","volume":" ","pages":""},"PeriodicalIF":1.6,"publicationDate":"2025-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145740815","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-12-22 DOI: 10.1088/2057-1976/ae2b75
Biomaterials to biofabrication: advanced scaffold technologies for regenerative endodontics.
Arun Mayya, Akshatha Chatra, Vinita Dsouza, Raviraja N Seetharam, Shashi Rashmi Acharya, Kirthanashri S Vasanthan
Scaffold systems are fundamental to regenerative endodontics, functioning as structural frameworks and delivery vehicles for bioactive cues essential to tissue regeneration. This review comprehensively examines scaffold types, functions, and translational challenges in endodontic regeneration. Scaffolds are classified into natural, synthetic, and hybrid matrices, each with distinct mechanical and biological profiles. Advances in nanotechnology, 3D and 4D bioprinting, and smart biomaterials have significantly improved scaffold functionality. Smart scaffolds enable the controlled release of growth factors, antimicrobial agents, and gene-functionalized molecules, facilitating angiogenesis, stem cell differentiation, and infection control. Hybrid scaffolds, such as those combining collagen and gelatin methacryloyl (GelMA), provide customized degradation, biocompatibility, and mechanical strength. Innovative systems such as magnetic nanoparticle-triggered release and responsive hydrogels address limitations in vascularization and immune modulation. Clinically, platelet-rich fibrin (PRF), concentrated growth factor (CGF), and decellularized extracellular matrix (dECM) have shown success in promoting root development, pulp vitality, and periapical healing. Despite these advances, obstacles remain, including regulatory hurdles, standardization of protocols, and long-term clinical validation. Integrating AI-driven scaffold design, digital twin simulations, and organ-on-chip models holds promise for personalized therapies. Establishing scaffold-based regeneration as a standard clinical approach will require harmonized practices, scalable biomaterial production, and robust clinical outcome assessments.
{"title":"Biomaterials to biofabrication: advanced scaffold technologies for regenerative endodontics.","authors":"Arun Mayya, Akshatha Chatra, Vinita Dsouza, Raviraja N Seetharam, Shashi Rashmi Acharya, Kirthanashri S Vasanthan","doi":"10.1088/2057-1976/ae2b75","DOIUrl":"10.1088/2057-1976/ae2b75","url":null,"abstract":"<p><p>Scaffold systems are fundamental to regenerative endodontics, functioning as structural frameworks and delivery vehicles for bioactive cues essential to tissue regeneration. This review comprehensively examines scaffold types, functions, and translational challenges in endodontic regeneration. Scaffolds are classified into natural, synthetic, and hybrid matrices with unique mechanical and biological profiles. Advances in nanotechnology, 3D and 4D bioprinting, and smart biomaterials have significantly improved scaffold functionality. Smart scaffolds enable the controlled release of growth factors, antimicrobial agents, and gene-functionalized molecules, facilitating angiogenesis, stem cell differentiation, and infection control. Hybrid scaffolds, such as those combining collagen and gelatin methacryloyl (GelMA), provide customized degradation, biocompatibility, and mechanical strength. Innovative systems such as magnetic nanoparticle-triggered release and responsive hydrogels address vascularization and immune modulation limitations. Clinically, platelet-rich fibrin (PRF), concentrated growth factor (CGF), and decellularized extracellular matrix (dECM) have shown success in promoting root development, pulp vitality, and periapical healing. Despite these advances, obstacles remain, including regulatory hurdles, standardization of protocols, and long-term clinical validation. Integrating AI-driven scaffold design, digital twin simulations, and organ-on-chip models holds promise for personalized therapies. Establishing scaffold-based regeneration as a standard clinical approach will require harmonized practices, scalable biomaterial production, and robust clinical outcome assessments.</p>","PeriodicalId":8896,"journal":{"name":"Biomedical Physics & Engineering Express","volume":" ","pages":""},"PeriodicalIF":1.6,"publicationDate":"2025-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145740808","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-12-19 DOI: 10.1088/2057-1976/ae268a
A teacherless lightweight classification framework for benign and malignant pulmonary nodules based on GAS.
Qian Zhang, Zeya Sun, Longxin Yan, Haibin Sun
Deep learning methods have been widely adopted for classifying benign and malignant pulmonary nodules. However, existing models often suffer from high memory usage, computational cost, and large parameter counts, so the development of lightweight classification methods for pulmonary nodules has become a major research focus. This paper proposes a lightweight classification framework specifically designed to distinguish between benign and malignant pulmonary nodules. The model contains only 119,245 parameters and occupies just 0.45 MB, offering significant advantages in computational efficiency. The proposed approach integrates an attention mechanism, residual learning, and an improved DWSGhost module to construct the GAS (Ghost-Attention Separation) network, and a teacher-free knowledge distillation strategy is employed to build a lightweight classification model based on GAS. Extensive experiments on three datasets (LIDC-IDRI, the LungX Challenge, and Zhengzhou Ninth People's Hospital) demonstrate the model's effectiveness in classifying pulmonary nodules. The proposed method is strongly competitive among lightweight models and achieves promising classification performance. By incorporating depthwise separable convolutions and teacher-free knowledge distillation, along with attention mechanisms and residual learning, the model achieves enhanced performance in terms of lightweight design, discriminative power, adaptability, and generalization ability. The full code is available at https://github.com/s1371897388-ctrl/GAS-Pulmonary-Nodule-Classification.
{"title":"A teacherless lightweight classification framework for benign and malignant pulmonary nodules based on GAS.","authors":"Qian Zhang, Zeya Sun, Longxin Yan, Haibin Sun","doi":"10.1088/2057-1976/ae268a","DOIUrl":"10.1088/2057-1976/ae268a","url":null,"abstract":"<p><p>Deep learning methods have been widely adopted for classifying benign and malignant pulmonary nodules. However, existing models often suffer from high memory usage, computational cost, and large parameter counts. As a result, the development of lightweight classification methods for pulmonary nodules has become a major research focus. This paper proposes a lightweight classification framework specifically designed to distinguish between benign and malignant pulmonary nodules. The model contains only 119,245 parameters and occupies just 0.45 MB, offering significant advantages in terms of computational efficiency. The proposed approach integrates an attention mechanism, residual learning, and an improved DWSGhost module to construct the GAS (Ghost-Attention Separation) network. A teacher-free knowledge distillation strategy is employed to build a lightweight classification model based on GAS. Extensive experiments were conducted on three datasets-LIDC-IDRI, LungX Challenge, and Zhengzhou Ninth People's Hospital-which demonstrated the model's effectiveness in classifying pulmonary nodules. The proposed method exhibits strong competitiveness among lightweight models and achieves promising classification performance. By incorporating depthwise separable convolutions and teacher-free knowledge distillation, along with attention mechanisms and residual learning, the model achieves enhanced performance in terms of lightweight design, discriminative power, adaptability, and generalization ability.The full code is available inhttps://github.com/s1371897388-ctrl/GAS-Pulmonary-Nodule-Classification.</p>","PeriodicalId":8896,"journal":{"name":"Biomedical Physics & Engineering Express","volume":" ","pages":""},"PeriodicalIF":1.6,"publicationDate":"2025-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145660101","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-12-18 DOI: 10.1088/2057-1976/ae2a37
Flexible state space modelling for accurate and efficient 3D lung nodule detection.
Wenjia Song, Fangfang Tang, Henry Marshall, Kwun M Fong, Feng Liu
Early and accurate detection of pulmonary nodules in computed tomography (CT) scans is critical for reducing lung cancer mortality. While convolutional neural networks (CNNs) and Transformer-based architectures have been widely used for this task, they often suffer from insufficient global context awareness, quadratic complexity, and dependence on post-processing steps such as non-maximum suppression (NMS). This study aims to develop a novel 3D lung nodule detection framework that balances local and global contextual awareness with low computational complexity, while minimizing reliance on manual threshold tuning and redundant post-processing. We propose FCMamba, a flexible connected visual state-space model adapted from the recently introduced Mamba architecture. To enhance spatial modelling, we introduce a flexible path encoding strategy that reorders 3D feature sequences adaptively based on input relevance. In addition, a Top Query Matcher, guided by the Hungarian matching algorithm, is integrated into the training process to replace traditional NMS and enable end-to-end one-to-one nodule matching. The model is trained and evaluated using 10-fold cross-validation on the LIDC-IDRI dataset, which contains 888 CT scans. FCMamba outperforms several state-of-the-art methods, including CNN, Transformer, and hybrid models, across seven predefined false positives per scan (FPs/scan) levels. It achieves a sensitivity improvement of 2.6% to 20.3% at low FPs/scan (0.125) and delivers the highest CPM and FROC-AUC scores. The proposed method demonstrates balanced performance across nodule sizes, reduced false positives, and improved robustness, particularly in high-confidence predictions. FCMamba provides an efficient, scalable and accurate solution for 3D lung nodule detection. Its flexible spatial modeling and elimination of post-processing make it well-suited for clinical usage and adaptable to other medical imaging tasks.
{"title":"Flexible state space modelling for accurate and efficient 3D lung nodule detection.","authors":"Wenjia Song, Fangfang Tang, Henry Marshall, Kwun M Fong, Feng Liu","doi":"10.1088/2057-1976/ae2a37","DOIUrl":"10.1088/2057-1976/ae2a37","url":null,"abstract":"<p><p>Early and accurate detection of pulmonary nodules in computed tomography (CT) scans is critical for reducing lung cancer mortality. While convolutional neural networks (CNNs) and Transformer-based architectures have been widely used for this task, they often suffer from insufficient global context awareness, quadratic complexity, and dependence on post-processing steps such as non-maximum suppression (NMS). This study aims to develop a novel 3D lung nodule detection framework that balances local and global contextual awareness with low computational complexity, while minimizing reliance on manual threshold tuning and redundant post-processing. We propose FCMamba, a flexible connected visual state-space model adapted from the recently introduced Mamba architecture. To enhance spatial modelling, we introduce a flexible path encoding strategy that reorders 3D feature sequences adaptively based on input relevance. In addition, a Top Query Matcher, guided by the Hungarian matching algorithm, is integrated into the training process to replace traditional NMS and enable end-to-end one-to-one nodule matching. The model is trained and evaluated using 10-fold cross-validation on the LIDC-IDRI dataset, which contains 888 CT scans. FCMamba outperforms several state-of-the-art methods, including CNN, Transformer, and hybrid models, across seven predefined false positives per scan (FPs/scan) levels. It achieves a sensitivity improvement of 2.6% to 20.3% at low FPs/scan (0.125) and delivers the highest CPM and FROC-AUC scores. The proposed method demonstrates balanced performance across nodule sizes, reduced false positives, and improved robustness, particularly in high-confidence predictions. FCMamba provides an efficient, scalable and accurate solution for 3D lung nodule detection. Its flexible spatial modeling and elimination of post-processing make it well-suited for clinical usage and adaptable to other medical imaging tasks.</p>","PeriodicalId":8896,"journal":{"name":"Biomedical Physics & Engineering Express","volume":" ","pages":""},"PeriodicalIF":1.6,"publicationDate":"2025-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145713172","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}