The morphological approach is a way of thinking that generates solutions and alternatives to a design problem. Designers sometimes use it as part of the product design process, especially during the early formation of ideas and sketching. Today the computer has become the designer's partner in what is called Computer-Aided Product Design, so the morphological approach needs to be developed to integrate smoothly with the use of the computer in the design process. This paper therefore aims to combine the advantages of the computer as a design assistant with those of the morphological approach, helping designers produce a large number of ideas in a relatively short time and saving industrial enterprises effort and money. Moreover, the 3D morphological approach makes the design process easier and more enjoyable.
T. Mohamed, "Applying the 3D Morphological Approach Using the Computer-Aided Product Design," Proceedings of the 2019 3rd International Conference on Digital Signal Processing, 2019-02-24. doi:10.1145/3316551.3316579
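The core of the morphological approach is a chart of design functions, each with candidate sub-solutions; every row-wise combination is one candidate concept, which is exactly the kind of enumeration a computer does quickly. A minimal sketch (the chart contents are illustrative, not from the paper):

```python
from itertools import product

# Hypothetical morphological chart for a desk-lamp design: each function
# (row) lists candidate sub-solutions; every combination that picks one
# sub-solution per function is a candidate design concept.
chart = {
    "base":  ["weighted disc", "clamp", "tripod"],
    "arm":   ["rigid", "flexible gooseneck"],
    "light": ["LED panel", "halogen bulb"],
}

concepts = [dict(zip(chart, combo)) for combo in product(*chart.values())]
print(len(concepts))  # 3 * 2 * 2 = 12 candidate concepts
```

This is why the approach pairs well with the computer: the number of concepts grows multiplicatively with each added function, far beyond what hand sketching can cover.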
Convolutional Neural Networks (CNNs) learn both basic and high-level features hierarchically, with the advantage of end-to-end learning. However, their limited ability to exploit prior information and domain knowledge can make them hard to train. This paper proposes a method for injecting prior information by appending prior feature maps through a bypass input structure. As an implementation, we evaluate a CNN integrated with the Self-Quotient Image (SQI) algorithm: the feature maps produced by SQI are imported through the bypass and concatenated with the output of the first convolution layer. With the help of this traditional image processing method, the CNN gains accuracy and training stability. A further benefit of the bypass pattern is that it avoids modifying the original images directly: because CNNs can extract far richer features than basic image processing methods, it is advisable to expose the network to the original data. This is the central design idea, namely that the output of the auxiliary processing algorithm enters from the side through the bypass.
Xingrun Xing, Minrui Dong, Cheng Bi, Lin Yang, "Self-Quotient Image based CNN: A Basic Image Processing assisting Convolutional Neural Network," Proceedings of the 2019 3rd International Conference on Digital Signal Processing, 2019-02-24. doi:10.1145/3316551.3316567
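The two pieces of the idea, the SQI map (an image divided by its own smoothed version, suppressing low-frequency illumination) and the channel-wise concatenation with early feature maps, can be sketched in plain NumPy. The kernel size, channel counts, and shapes below are illustrative, not taken from the paper:

```python
import numpy as np

def box_blur(img, k=3):
    # naive k x k box filter with edge padding (stand-in for the
    # smoothing kernel used by SQI)
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def self_quotient_image(img, k=3, eps=1e-6):
    # SQI: pixel-wise ratio of the image to its smoothed version
    return img / (box_blur(img, k) + eps)

# bypass idea: stack the SQI map with ordinary feature maps along the
# channel axis before feeding the next layer
img = np.random.rand(32, 32)
features = np.random.rand(8, 32, 32)  # stand-in for conv-1 output
stacked = np.concatenate([features, self_quotient_image(img)[None]], axis=0)
print(stacked.shape)  # (9, 32, 32)
```

In a real network the concatenation would use the framework's channel-concat operation on tensors, but the shape bookkeeping is the same: the SQI map rides alongside the learned features instead of replacing the input image.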
Xinliang Ma, Zhiwei He, Jiye Huang, Yanhui Dong, ChuFeng you
Since the beginning of the 21st century, the exploration of marine resources has grown increasingly frequent, and it is increasingly recognized that marine resources play a vital role in human development. However, problems remain in the analysis of seabed mineral resources, including real-time performance, accuracy, and validity, and many aspects deserve deeper exploration. The main purpose of this paper is to apply image processing and filtering techniques to analyze seabed image clarity and to compute accurate statistical coverage indicators for seabed mineral resources, so as to forecast the distribution of undersea resources in an area. The focus is on improving the coverage accuracy of the seabed black connected domain by adjusting the brightness equalization algorithm, setting a Region of Interest (ROI), and applying windowed Histogram Equalization (HE). To evaluate the resources of a sea area, a series of image preprocessing algorithms, such as color correction, bilateral filtering, windowed HE, and binarization, is applied to obtain accurate coverage statistics for seabed mineral resources. The video image processing in this paper is built in the Qt environment; it exports the video streams and index data and generates curves for the clarity evaluation and the coverage rate of black connected domains, achieving more accurate and stable indicators for seabed image detection. The accurate statistics of seabed ore image coverage achieved here lay a foundation for future exploration with deep learning.
Xinliang Ma, Zhiwei He, Jiye Huang, Yanhui Dong, ChuFeng You, "An Automatic Analysis Method for Seabed Mineral Resources Based on Image Brightness Equalization," Proceedings of the 2019 3rd International Conference on Digital Signal Processing, 2019-02-24. doi:10.1145/3316551.3318232
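Two steps of the pipeline above, global histogram equalization and the dark-pixel coverage statistic, can be sketched in NumPy. The threshold value and frame contents are illustrative assumptions, and the paper's windowed/ROI variant of HE would apply the same transform per sub-window rather than globally:

```python
import numpy as np

def hist_equalize(gray):
    # global histogram equalization of an 8-bit grayscale image:
    # map each gray level through the normalized cumulative histogram
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum().astype(float)
    cdf = (cdf - cdf.min()) * 255 / (cdf.max() - cdf.min())
    return cdf[gray].astype(np.uint8)

def coverage_ratio(gray, thresh=80):
    # fraction of "dark" pixels, a stand-in for the black
    # connected-domain coverage of seabed ore (threshold illustrative)
    return float((gray < thresh).mean())

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # fake frame
eq = hist_equalize(frame)
ratio = coverage_ratio(eq)
print(0.0 <= ratio <= 1.0)
```

A per-frame sequence of such ratios is what the paper plots as the coverage-rate curve over the video stream.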
Recently, the electroencephalogram (EEG) has been widely applied in physiological research and in the clinical diagnosis of brain diseases. How to eliminate noise and obtain a pure EEG signal has therefore become a common difficulty in this field. As a typical method for chaotic time series, the Volterra filter is widely used to study EEG signals; however, computing the Volterra coefficients is prone to the curse of dimensionality. In addition, it is not easy to extract prior information from EEG signals collected in real environments, which affects the quality of the reconstructed phase space. To overcome these two problems, we introduce a uniform searching particle swarm optimization (UPSO) algorithm to optimize the Volterra coefficients, yielding a noise elimination method based on a UPSO second-order Volterra filter (UPSO-SOVF). The proposed model improves the quality of phase-space reconstruction by embedding the reconstruction process within the model-solving process, obtaining the embedding dimension and delay time dynamically. Experiments on different EEG signals compare the model with a particle swarm optimization second-order Volterra filter (PSO-SOVF). The results show that the proposed model better avoids the curse of dimensionality and better captures the regularities of EEG signal series than PSO-SOVF, fully meeting the requirements for EEG noise elimination.
Xia Wu, Yumei Zhang, Xiaojun Wu, "An Improved Noise Elimination Model of EEG Based on Second Order Volterra Filter," Proceedings of the 2019 3rd International Conference on Digital Signal Processing, 2019-02-24. doi:10.1145/3316551.3316565
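The structure of a second-order Volterra predictor on a delay-embedded series can be sketched as follows. Here the coefficients are fit by ordinary least squares on a toy noisy sine, purely to show the feature expansion; the paper instead tunes them with UPSO, and the embedding dimension and delay below are fixed by hand rather than chosen dynamically:

```python
import numpy as np

def volterra_features(x, m, tau):
    # delay-embed x (embedding dimension m, delay tau), then append all
    # second-order cross terms of each delay vector: the second-order
    # Volterra expansion (constant + linear + quadratic kernels)
    n = len(x) - (m - 1) * tau
    emb = np.stack([x[i * tau: i * tau + n] for i in range(m)], axis=1)
    quad = np.stack([emb[:, i] * emb[:, j]
                     for i in range(m) for j in range(i, m)], axis=1)
    return np.hstack([np.ones((n, 1)), emb, quad])

t = np.linspace(0, 20 * np.pi, 2000)
x = np.sin(t) + 0.05 * np.random.default_rng(1).standard_normal(t.size)
m, tau = 3, 2
Phi = volterra_features(x[:-1], m, tau)       # features from past samples
y = x[(m - 1) * tau + 1:]                      # one-step-ahead targets
coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)
mse = float(np.mean((Phi @ coef - y) ** 2))
print(round(mse, 4))  # training MSE stays small on the noisy sine
```

Note the quadratic term count grows as m(m+1)/2, which is the dimensionality problem the paper's UPSO search is meant to tame.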
The accurate reconstruction of a signal within a reasonable period is the key process that enables the application of compressive sensing to large-scale image transmission. The sparsity adaptive matching pursuit (SAMP) algorithm requires no prior knowledge of signal sparsity and has high reconstruction accuracy, but its reconstruction efficiency is low. To overcome this, we propose the fast segmentation sparsity adaptive matching pursuit (FSSAMP) algorithm, in which the estimated sparsity K grows nonlinearly in each iteration instead of linearly. This reduces the number of iterations through accurate evaluation of the signal's sparsity degree. In addition, we use a signal segmentation strategy in the proposed algorithm to improve accuracy. Experimental results demonstrate that FSSAMP achieves more stable reconstruction performance and higher reconstruction accuracy than SAMP.
Linyu Wang, Mingqi He, Jianhong Xiang, "Sparse Signal Recovery via Improved Sparse Adaptive Matching Pursuit Algorithm," Proceedings of the 2019 3rd International Conference on Digital Signal Processing, 2019-02-24. doi:10.1145/3316551.3316553
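The efficiency gain from nonlinear growth of the sparsity estimate is easy to quantify in isolation. The sketch below counts how many stage updates are needed before the estimate reaches a true sparsity K, comparing a unit-step linear schedule (SAMP-style) with a doubling schedule (FSSAMP-style; the paper's exact growth rule may differ):

```python
def stages(true_k, grow):
    # count stage updates until the sparsity estimate reaches true_k
    k, n = 1, 0
    while k < true_k:
        k = grow(k)
        n += 1
    return n

linear = stages(64, lambda k: k + 1)     # linear growth, step size 1
nonlinear = stages(64, lambda k: 2 * k)  # doubling-style growth
print(linear, nonlinear)  # 63 vs 6 stages to reach K = 64
```

Each stage of a matching-pursuit algorithm costs a least-squares solve over the current support, so cutting the stage count from O(K) to O(log K) is where the reconstruction-time saving comes from.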
Low-rank and sparse matrix recovery methods based on the Robust Principal Component Analysis (RPCA) model are widely used in infrared small target detection. To address their high computational cost and the difficulty of parameter selection, this paper presents a novel method for detecting dim, small infrared targets against complex backgrounds based on Region of Interest (ROI) extraction and matrix recovery. First, the Variance Weighted Information Entropy (VWIE) of every sub-block is calculated and the ROI is extracted; then an Adaptive Parameter Inexact Augmented Lagrange Multiplier (APIALM) algorithm recovers the target image from the extracted ROI; finally, the target is segmented and calibrated with an adaptive threshold method. Experimental results demonstrate that the proposed method significantly reduces running time while retaining most of the properties of traditional detection methods based on low-rank and sparse matrix recovery.
Bincheng Xiong, Xinhan Huang, Min Wang, "A Novel Method for Single Infrared Dim Small Target Detection Based on ROI extraction and Matrix Recovery," Proceedings of the 2019 3rd International Conference on Digital Signal Processing, 2019-02-24. doi:10.1145/3316551.3318234
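The ROI selection step relies on VWIE scoring each sub-block. A common formulation weights each gray level's entropy term by its squared deviation from the block mean, so uniform background blocks score near zero while cluttered or target-bearing blocks score high; the sketch below uses that formulation, which may differ in detail from the paper's:

```python
import numpy as np

def vwie(block):
    # variance-weighted information entropy of an 8-bit block:
    # -sum over gray levels of (s - mean)^2 * p(s) * ln p(s)
    hist = np.bincount(block.ravel(), minlength=256)
    p = hist / hist.sum()
    levels = np.arange(256)
    mean = float((levels * p).sum())
    nz = p > 0  # skip empty bins (0 * log 0 treated as 0)
    return float(-np.sum((levels[nz] - mean) ** 2 * p[nz] * np.log(p[nz])))

rng = np.random.default_rng(2)
flat = np.full((16, 16), 128, dtype=np.uint8)          # uniform background
busy = rng.integers(0, 256, (16, 16), dtype=np.uint8)  # cluttered block
print(vwie(flat) < vwie(busy))  # busier block scores strictly higher
```

Ranking sub-blocks by this score and keeping only the top ones is what lets the expensive matrix recovery run on a small ROI instead of the whole frame.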
Te-Wei Ho, Huan Qi, F. Lai, Furen Xiao, Jin-Ming Wu
Segmentation of brain tumors in magnetic resonance imaging (MRI) plays a pivotal role in evaluating the disease and planning future treatment. This segmentation task usually demands extensive experience from medical practitioners and enormous amounts of time. To mitigate these issues, this study deploys a brain tumor segmentation model based on U-Net and a comprehensive data processing approach, including target magnification and image transformations such as data augmentation and edge contour enhancement. Compared with manual segmentation by radiologists, considered the gold standard, the proposed model performed well, yielding a median Dice similarity coefficient of 0.637 (interquartile range: 0.382-0.803) for brain tumor segmentation. Results with and without edge contour enhancement differed significantly under the Wilcoxon signed-rank test (P = 0.028). The proposed model enables effective segmentation of brain tumors on MRI and can assist medical practitioners tasked with analyzing complicated medical images.
Te-Wei Ho, Huan Qi, F. Lai, Furen Xiao, Jin-Ming Wu, "Brain Tumor Segmentation Using U-Net and Edge Contour Enhancement," Proceedings of the 2019 3rd International Conference on Digital Signal Processing, 2019-02-24. doi:10.1145/3316551.3316554
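The Dice similarity coefficient reported above is a standard overlap metric between a predicted mask and the ground truth: twice the intersection over the sum of the two mask sizes. A self-contained sketch with illustrative toy masks:

```python
import numpy as np

def dice(pred, truth, eps=1e-7):
    # Dice similarity coefficient of two binary masks:
    # 2|A ∩ B| / (|A| + |B|); eps guards the empty-mask case
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum() + eps)

a = np.zeros((8, 8), dtype=bool); a[2:6, 2:6] = True  # 16-px "tumor"
b = np.zeros((8, 8), dtype=bool); b[3:7, 3:7] = True  # shifted prediction
print(round(dice(a, b), 3))  # 9 overlapping px -> 2*9/32 = 0.562
```

A median Dice of 0.637 therefore means that for the typical case, the overlap region is roughly two-thirds the size of either mask, which is why the interquartile range matters as much as the median here.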
Proceedings of the 2019 3rd International Conference on Digital Signal Processing. doi:10.1145/3316551