Accurate assessment of burn severity is crucial for the management of burn injuries. Currently, clinicians rely mainly on visual inspection to assess burns, a practice marked by notable inter-observer discrepancies. In this study, we introduce an innovative analysis platform that uses color burn wound images for automatic burn severity assessment. To this end, we propose a novel joint-task deep learning model that simultaneously segments burn regions and body parts, the two crucial components in calculating the percentage of total body surface area (%TBSA). An asymmetric attention mechanism is introduced, allowing attention guidance from the body part segmentation task to the burn region segmentation task. A user-friendly mobile application is developed to facilitate fast assessment of burn severity in clinical settings. The proposed framework was evaluated on a dataset comprising 1340 color burn wound images captured on-site in clinical settings. The average Dice coefficients for burn depth segmentation and body part segmentation are 85.12 % and 85.36 %, respectively. The R² for %TBSA assessment is 0.9136. The source code for the joint-task framework and the application is released on GitHub (https://github.com/xjtu-mia/BurnAnalysis). The proposed platform holds the potential for wide use in clinical settings to facilitate fast and precise burn assessment.
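The two metrics reported in this abstract, the Dice coefficient and %TBSA, can be illustrated on binary segmentation masks. The sketch below is illustrative only (the function names and the pixel-ratio approximation of %TBSA are assumptions, not the authors' released implementation): Dice measures overlap between predicted and reference masks, and %TBSA can be approximated as the fraction of body-surface pixels labeled as burned.

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2*|A ∩ B| / (|A| + |B|) for two binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def percent_tbsa(burn_mask, body_mask):
    """%TBSA approximated as burned pixels over body-surface pixels."""
    body = body_mask.astype(bool)
    burn = np.logical_and(burn_mask.astype(bool), body)
    return 100.0 * burn.sum() / max(body.sum(), 1)

# Toy 4x4 example: 2 burned pixels out of 16 body pixels -> 12.5 %TBSA
burn = np.array([[1, 1, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
body = np.ones((4, 4), dtype=int)
print(round(percent_tbsa(burn, body), 2))  # 12.5
```

In practice, a pixel-ratio %TBSA is only meaningful once both masks come from the same calibrated image, which is why the joint segmentation of burn regions and body parts matters.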
Breast cancer remains a leading cause of cancer mortality worldwide, and early detection is crucial for improving outcomes. This systematic review evaluates recent advances in portable non-invasive technologies for early breast cancer detection, assessing their methods, performance, and potential for clinical implementation. A comprehensive literature search was conducted across major databases for relevant studies published between 2015 and 2024. Data on technology types, detection methods, and diagnostic performance were extracted and synthesized from 41 included studies. The review examined microwave imaging, electrical impedance tomography (EIT), thermography, bioimpedance spectroscopy (BIS), and pressure sensing technologies. Microwave imaging and EIT showed the most promise, with some studies reporting sensitivities and specificities over 90 %. However, most technologies are still in early stages of development with limited large-scale clinical validation. These innovations could complement existing gold standards, potentially improving screening rates and outcomes, especially in underserved populations, while decreasing screening waiting times in developed countries. Further research is therefore needed to validate their clinical efficacy, address implementation challenges, and assess their impact on patient outcomes before widespread adoption can be recommended.
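The sensitivity and specificity figures cited above follow directly from a confusion matrix. A minimal sketch (the function name and the toy counts are illustrative, not data from any reviewed study):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Toy counts: 90 true positives, 10 false negatives,
# 80 true negatives, 20 false positives
sens, spec = sensitivity_specificity(90, 10, 80, 20)
print(sens, spec)  # 0.9 0.8
```

Reporting both together matters: a screening technology can trivially reach high sensitivity by over-calling disease, which specificity exposes.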
Cellular senescence (CS) is characterized by irreversible cell cycle arrest and plays a key role in aging and diseases such as cancer. Recent years have witnessed burgeoning exploration of the intricate relationship between CS and cancer, with CS recognized as either a suppressing or a promoting factor and formally acknowledged as one of the 14 cancer hallmarks. However, a comprehensive characterization of how this relationship diverges across cancer types, and of its involvement in the many facets of tumor development, is still lacking. Here we systematically assessed the cellular senescence of over 10,000 tumor samples from 33 cancer types, starting by defining a set of cancer-associated CS signatures and deriving a quantitative metric representing the CS status, called the CS score. We then investigated CS heterogeneity and its intricate relationship with prognosis, immune infiltration, and therapeutic responses across different cancers. As a result, cellular senescence stratified the cancers into two distinct prognostic groups: a protective group of eleven cancers, such as LIHC, and a risky group of four cancers, including STAD. Subsequent in-depth investigations between these two groups unveiled potential molecular and cellular mechanisms underlying the distinct effects of cellular senescence, involving the divergent activation of specific pathways and variances in immune cell infiltration. These results were further supported by the disparate associations of CS status with responses to immuno- and chemo-therapies observed between the two groups. Overall, our study offers a deeper understanding of the inter-tumor heterogeneity of cellular senescence associated with the tumor microenvironment and cancer prognosis.
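The abstract does not specify how the CS score is derived from the cancer-associated CS signatures, but a common signature-scoring scheme is a mean per-gene z-score across the signature genes. The sketch below is a hedged illustration of that generic approach (the function name, the z-score formulation, and the toy matrix are assumptions, not the authors' method):

```python
import numpy as np

def cs_score(expr, signature_idx):
    """Illustrative signature score: z-score each gene across samples,
    then average the z-scores of the signature genes per sample.

    expr: (genes x samples) expression matrix
    signature_idx: row indices of the signature genes
    """
    mu = expr.mean(axis=1, keepdims=True)
    sd = expr.std(axis=1, keepdims=True) + 1e-9  # guard against zero variance
    z = (expr - mu) / sd
    return z[signature_idx].mean(axis=0)  # one score per sample

# Toy matrix: 2 genes x 3 samples; the two genes vary in opposite
# directions, so their mean z-score cancels to ~0 in every sample.
expr = np.array([[1.0, 2.0, 3.0],
                 [3.0, 2.0, 1.0]])
print(cs_score(expr, [0, 1]))
```

A per-sample scalar like this is what allows downstream stratification into high- and low-senescence groups for survival and immune-infiltration analyses.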
Efficient extraction and analysis of histopathological images are crucial for accurate medical diagnoses, particularly for prostate cancer. This research enhances histopathological image reclamation by integrating Visual-Based Image Reclamation (VBIR) techniques with Contrast-Limited Adaptive Histogram Equalization (CLAHE) and the Gray-Level Co-occurrence Matrix (GLCM) algorithm. The proposed method leverages CLAHE to improve image contrast and visibility, crucial for regions with varying illumination, and employs a non-linear Support Vector Machine (SVM) that incorporates GLCM features. Our approach achieved a notable success rate of 89.6%, demonstrating significant improvement in image analysis. The average execution time for matched tissues was 41.23 s (standard deviation 36.87 s), and for unmatched tissues, 21.22 s (standard deviation 29.18 s). These results underscore the method's efficiency and reliability in processing histopathological images. The findings highlight the potential of our method to enhance image reclamation processes, paving the way for further research and advancements in medical image analysis, and its superior performance signifies its capability to contribute to more accurate and efficient diagnostic practices.
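The GLCM features fed to the SVM can be illustrated with a small pure-NumPy sketch (the function names, the single-offset GLCM, and the three texture properties chosen are illustrative assumptions; a production pipeline would typically use a library implementation such as scikit-image's `graycomatrix`):

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Normalized gray-level co-occurrence matrix for one pixel offset.

    img: 2D integer image with values in [0, levels).
    Counts how often gray level i occurs at offset (dx, dy) from level j.
    """
    g = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            g[img[y, x], img[y + dy, x + dx]] += 1
    return g / g.sum()

def glcm_features(g):
    """Classic Haralick-style texture features from a normalized GLCM."""
    i, j = np.indices(g.shape)
    contrast = ((i - j) ** 2 * g).sum()          # local intensity variation
    homogeneity = (g / (1.0 + np.abs(i - j))).sum()  # closeness to diagonal
    energy = (g ** 2).sum()                      # textural uniformity
    return contrast, homogeneity, energy

# A perfectly flat image has zero contrast and maximal homogeneity/energy.
flat = np.zeros((8, 8), dtype=int)
print(glcm_features(glcm(flat)))  # (0.0, 1.0, 1.0)
```

Feature vectors of this kind, computed over CLAHE-enhanced tiles, are what a non-linear SVM would consume for matched/unmatched tissue decisions.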
In the field of computer-aided medical diagnosis, it is crucial to adapt medical image segmentation to limited computing resources. There is tremendous value in developing accurate, real-time vision processing models that require minimal computational resources. When building lightweight models, there is always a trade-off between computational cost and segmentation performance: performance often suffers when models are deployed in resource-constrained scenarios with limits on computation, memory, or storage. This remains an ongoing challenge. This paper proposes a lightweight network for medical image segmentation. It introduces a lightweight transformer, proposes a simplified core feature extraction network to capture more semantic information, and builds a multi-scale feature interaction guidance framework. The fusion module embedded in this framework is designed to address spatial and channel complexities. Through the multi-scale feature interaction guidance framework and fusion module, the proposed network achieves robust semantic information extraction from low-resolution feature maps and rich spatial information retrieval from high-resolution feature maps while ensuring segmentation performance. This significantly reduces the parameter requirements for maintaining deep features within the network, resulting in faster inference and reduced floating-point operations (FLOPs) and parameter counts. Experimental results on the ISIC2017 and ISIC2018 datasets confirm the effectiveness of the proposed network in medical image segmentation tasks. For instance, on the ISIC2017 dataset, the proposed network achieved a segmentation accuracy of 82.33 % mIoU and a speed of 71.26 FPS on 256 × 256 images using a GeForce RTX 3090 GPU. Furthermore, the proposed network is extremely lightweight, containing only 0.524M parameters. The corresponding source code is available at https://github.com/CurbUni/LMIS-lightweight-network.
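The mIoU figure reported above is the mean, over classes, of intersection-over-union between predicted and reference label maps. A minimal sketch (the function name and the convention of skipping classes absent from both masks are illustrative assumptions):

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean intersection-over-union across classes.

    pred, target: 2D integer label maps of the same shape.
    Classes absent from both maps are skipped rather than counted as 0.
    """
    ious = []
    for c in range(num_classes):
        p, t = pred == c, target == c
        union = np.logical_or(p, t).sum()
        if union == 0:
            continue  # class absent from both maps
        inter = np.logical_and(p, t).sum()
        ious.append(inter / union)
    return float(np.mean(ious))

# Toy 2x2 maps with 3 classes; a perfect prediction scores 1.0
labels = np.array([[0, 1],
                   [1, 2]])
print(mean_iou(labels, labels, num_classes=3))  # 1.0
```

For binary skin-lesion segmentation (as on ISIC2017/2018), mIoU averages the IoU of the lesion and background classes.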