Title: A New Deep Learning-Based Restoration Method for Colour Images
Authors: Songshan Zu
DOI: 10.18280/ts.400536

Abstract: As colour images are widely used in many fields, their restoration has received broad attention from researchers. This study proposes two solutions to the denoising and low-illuminance enhancement problems of existing colour image restoration methods. First, a colour image denoising model based on the weighted Schatten p-norm and deep learning is built; it fully accounts for the differing noise levels of the individual channels of a colour image and yields a better denoising effect. Second, a low-illuminance colour image enhancement algorithm combining the Gamma transform with Contrast Limited Adaptive Histogram Equalization (CLAHE) is proposed, which better balances image contrast enhancement and noise suppression. Both parts achieve good theoretical and experimental results, advancing colour image restoration technology and providing useful references for related fields.
Title: A Robust Algorithm for Digital Image Copyright Protection and Tampering Detection: Employing DWT, DCT, and Blowfish Techniques
Authors: Ahmed S. Salama, Rasha Shoitan, Mohamed S. Abdallah, Young Im Cho, Ahmad M. Nagm
DOI: 10.18280/ts.400520

Title: Enhanced Efficiency and Security in LSB2 Steganography: Burst Embedding and Private Key Integration
Authors: Rashad J. Rasras, Mutaz Rasmi Abu Sara, Ziad Alqadi
DOI: 10.18280/ts.400502

Title: High-Dimension EEG Biometric Authentication Leveraging Sub-Band Cube-Code Representation
Authors: İdil Işıklı Esener, Onur Kılınç, Burak Urazel, Betül N. Yaman, Demet İ. Algın, Semih Ergin
DOI: 10.18280/ts.400517

Abstract: Advancements in EEG biometric technologies have been hindered by two persistent challenges: the management of large data sizes and the unreliability of data resulting from varied measurement environments. Addressing these challenges, this study introduces a novel methodology, termed 'Cube-Code', for cognitive biometric authentication. As a preliminary step, Automatic Artifact Removal (AAR) leveraging wavelet Independent Component Analysis (wICA) is applied to the EEG signals, transforming them into independent sub-components and effectively eliminating the effects of muscle movements and eye blinking. Subsequently, unique 3-Dimensional (3-D) Cube-Codes are generated, each representing an individual subject in the database. Each Cube-Code is constructed by stacking the alpha, beta, and theta sub-band partitions, obtained from each channel during each task, back-to-back, forming a third-order tensor. Stacking the three sub-bands within a Cube-Code not only avoids the dimension increase that concatenation would cause but also permits the direct use of non-stationary data, bypassing the need for fiducial component detection. Higher-Order Singular Value Decomposition (HOSVD) is then applied to perform a subspace analysis on each Cube-Code, an approach whose effectiveness on 3-D tensors is supported by previous literature. Upon completion of the decomposition, a flattening operation extracts lower-dimensional, task-independent feature matrices for each subject. These feature matrices are then employed in five distinct deep learning architectures. The Cube-Code methodology was tested on EEG signals from different tasks in the PhysioNet EEG Motor Movement/Imagery (EEGMMI) dataset. The results demonstrate an authentication accuracy of approximately 98%. In conclusion, the novel Cube-Code methodology provides highly accurate subject recognition, delivering a new level of reliability in EEG-based biometric authentication.
Title: Joint Solution for Temporal-Spatial Synchronization of Multi-View Videos and Pedestrian Matching in Crowd Scenes
Authors: Haidong Yang, Renyong Guo
DOI: 10.18280/ts.400503

Abstract: The study of crowd movement and behavioral patterns typically relies on spatio-temporal localization data of pedestrians. While monocular cameras serve the purpose, industrial binocular cameras based on multi-view geometry offer heightened spatial accuracy. These cameras synchronize time through circuits and are calibrated for external parameters after fixing their relative positions. Yet, the flexibility and real-time adaptability of using two different cameras or smartphones in close proximity, forming a short-baseline binocular camera, presents challenges in camera time synchronization, external parameter calibration, and pedestrian feature matching. A method is introduced herein for jointly addressing these challenges. Images are abstracted into spatial-temporal point sets based on human head coordinates and frame numbers. Through point set registration, time synchronization and pedestrian matching are achieved concurrently, followed by the calibration of the short-baseline camera's external parameters. Numerical results from synthetic and real-world scenarios indicate the proposed model's capability in addressing the aforementioned fundamental challenges. With the sole reliance on crowd image data, devoid of external hardware, software, or manual calibrations, time synchronization precision reaches the sub-millisecond level, pedestrian matching averages a 92% accuracy rate, and the camera's external parameters align with the calibration board's precision. Ultimately, this research facilitates the self-calibration, automatic time synchronization, and pedestrian matching tasks for short-baseline camera assemblies observing crowds.
{"title":"Unveiling Breast Tumor Characteristics: A ResNet152V2 and Mask R-CNN Based Approach for Type and Size Recognition in Mammograms","authors":"Chiman Haydar Salh, Abbas M. Ali","doi":"10.18280/ts.400504","DOIUrl":"https://doi.org/10.18280/ts.400504","url":null,"abstract":"ABSTRACT","PeriodicalId":49430,"journal":{"name":"Traitement Du Signal","volume":"141 ","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136067859","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Enhanced Identification of Internal Casting Defects in Vehicle Wheels Using YOLO Object Detection and X-Ray Inspection
Authors: Jian Da Wu, Yu Hung Huang
DOI: 10.18280/ts.400511

Abstract: In the rapidly evolving automobile industry, the safety and quality of individual vehicle components have gained paramount importance. Among these, aluminum wheels are particularly critical, given their susceptibility to internal casting defects. This study presents a novel approach to identify these defects non-destructively, employing X-ray inspection and harnessing the power of YOLO (You Only Look Once) object detection. Images of vehicle aluminum wheels were obtained via X-ray inspection, revealing the presence of internal defects. Subsequently, a periodic noise elimination algorithm, underpinned by morphological filtering and adaptive image processing weights, was utilized to enhance image clarity. The application of a composite cascade filter further improved the image resolution. The enhanced images were then processed using YOLO object detection, a technology renowned for its precision in object detection tasks. This study explores the efficacy of different YOLO model architectures in detecting and identifying internal casting defects in aluminum wheels. Our research contributes to the development of a highly accurate system for the detection of internal casting defects in vehicle wheels, offering potential improvements in vehicle safety. This methodology, pairing X-ray inspection with advanced object detection algorithms, provides a robust approach for defect identification in the production process, laying the groundwork for future advancements in vehicle component quality control.
Title: Attributed Graph Convolutional Network for Enhanced Social Recommendation Through Hybrid Feedback Integration
Authors: Xiaoyi Deng
DOI: 10.18280/ts.400509

Abstract: Social recommendation, a technique aimed at predicting user preferences by harnessing social ties, has frequently employed collaborative filtering (CF) due to its demonstrated efficiency and scalability. Nonetheless, a decline in performance of most extant CF techniques has been observed when confronted with extreme sparsity in explicit feedback. Past investigations predominantly merged both explicit and implicit feedback to mitigate the data scarcity issue, embedding based solely on explicit characteristics and formulating objective functions founded on user-item associations. Such a paradigm signifies a dependency on these interactions to compensate for deficient embeddings. Notably, a considerable discrepancy exists between implicit feedback and genuine user satisfaction in social recommendations, attributed to pervasive false positive interactions devoid of detailed user/item attributes. Furthermore, the establishment of connectivity between users/items has been partially dependent on users' inclinations, suggesting that the aggregation procedure might overlook certain neighbourhood preferences. In response to these challenges, a hybrid neural graph model endowed with attributive features has been introduced. This model amalgamates explicit/implicit feedback, attribute data, and a user-item interaction graph. To counteract data sparsity, a variational graph framework has been devised to extract latent representations from both feedback and attribute data. For the effective and explicit discernment of collaborative signals, the embedding incorporates a user-item interaction graph, which offers a potent modelling of elevated-order connectivities and the detection of latent user-item associations. The user and item embeddings are derived via an attentive propagation method, with the ultimate item embeddings being sourced through a linear weighted sum, eschewing non-linear activation functions. Comparative analyses on four real-world datasets have demonstrated the superior efficacy of the proposed methodology in relation to leading contemporary recommendation systems.
{"title":"ESP-UNet: Encoder-Decoder Convolutional Neural Network with Edge-Enhanced Features for Liver Segmentation","authors":"Kiran Napte, Anurag Mahajan, Shabana Urooj","doi":"10.18280/ts.400545","DOIUrl":"https://doi.org/10.18280/ts.400545","url":null,"abstract":".","PeriodicalId":49430,"journal":{"name":"Traitement Du Signal","volume":"89 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136103458","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}