Pub Date: 2018-04-01 | DOI: 10.1109/ISACV.2018.8354067
Mustapha El Alaoui, Fouad Farah, Karim El khadiri, H. Qjidaa, A. Aarab, R. El Alami, Ahmed Lakhassassi
This paper presents the analysis and design of a Dickson charge pump for an EEPROM in 180 nm CMOS technology. The Dickson charge pump belongs to a security sub-chip that encrypts/decrypts data; this requires an EEPROM holding a secret key, which must be programmed on-chip by the Dickson charge pump. The charge pump consists of several blocks: a pre-regulator, a 6-stage Dickson core, a clock generator, and a comparator. It generates an output voltage Vout = 11.25 V from a variable input voltage between 2.7 V and 4.4 V. The layout occupies a small active area of 32.80 µm × 46.90 µm in 180 nm CMOS.
{"title":"Analysis and design of dickson charge pump for EEPROM in 180nm CMOS technology","authors":"Mustapha El Alaoui, Fouad Farah, Karim El khadiri, H. Qjidaa, A. Aarab, R. El Alami, Ahmed Lakhassassi","doi":"10.1109/ISACV.2018.8354067","DOIUrl":"https://doi.org/10.1109/ISACV.2018.8354067","url":null,"abstract":"This paper presents an analysis and design of Dickson charge pump for EEPROM in 180 nm CMOS technology. The new Dickson Charge Pump is the security sub chip to encrypts/decrypts the data, for this reason we need an EEPROM to write a secret key which must be programmed on chip by the “Dickson Charge Pump”. This Dickson charge pump consists of several blocks, Pre-regulator, Dickson 6-stage, Clock generator and Comparator, it generates an output voltage Vout = 11,25V according to a variable input voltage between 2,7V and 4,4V. The layout occupies a small active area of 32.80um × 46.90um in CMOS 180nm.","PeriodicalId":184662,"journal":{"name":"2018 International Conference on Intelligent Systems and Computer Vision (ISCV)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117122067","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2018-04-01 | DOI: 10.1109/ISACV.2018.8354041
M. Souaidi, Said Charfi, Abdelkaher Ait Abdelouahad, M. El Ansari
In this paper, we present a new feature descriptor for the automatic detection of frames containing polyps in Wireless Capsule Endoscopy (WCE) images. The approach builds on the observation that polyps exhibit discriminating features when WCE images are decomposed into different resolution levels; hence, we make use of wavelet-based feature extraction approaches. The 2-D discrete wavelet transform, the dual-tree complex wavelet transform, the Gabor wavelet transform, and the curvelet transform are explored to find out which of them, combined with a probability distribution, is best suited for polyp detection. Experiments were conducted on an augmented dataset, and the results are satisfactory, achieving 96% in terms of detection performance.
{"title":"New features for wireless capsule endoscopy polyp detection","authors":"M. Souaidi, Said Charfi, Abdelkaher Ait Abdelouahad, M. El Ansari","doi":"10.1109/ISACV.2018.8354041","DOIUrl":"https://doi.org/10.1109/ISACV.2018.8354041","url":null,"abstract":"In this paper we present a new feature descriptor for automatic detection of frames with polyp in Wireless Capsule Endoscopy (WCE) images. The new approach is based on the fact that the polyp disease exhibits discriminating features when the WCE images are decomposed into different resolution levels. Hence we have made use of wavelet and emphasis feature extraction approaches. The 2-D discrete wavelet transform, dual tree complex wavelet transform, gabor wavelet transform and curvelet transform have been exploited to find out which one of them combined with probability distribution is suitable for polyp detection. Experiments were done on an augmented dataset and the results are satisfactory achieving 96% in term of performance.","PeriodicalId":184662,"journal":{"name":"2018 International Conference on Intelligent Systems and Computer Vision (ISCV)","volume":"75 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121155583","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2018-04-01 | DOI: 10.1109/ISACV.2018.8354085
Mohammad M. Alnabhan, A. Hammouri, M. Hammad, Mohammad Atoum, Omamah Al-Thnebat
Software visualization is one of the main techniques used to simplify the presentation of software systems and enhance their understandability. It presents the software system visually, using simple, clear and meaningful symbols. This study proposes a new 2D software visualization approach in which each class is represented by a rectangle: the name of the class is placed above the rectangle, and the size of the class (lines of code) is encoded by the rectangle's height. Methods and attributes are represented by circles and triangles, respectively, and relationships among classes are drawn as arrows. The proposed visualization approach was evaluated in terms of applicability and efficiency. The results confirm a successful implementation of the proposed approach and its ability to provide a simple and effective graphical presentation of the extracted software components and properties.
{"title":"2D visualization for object-oriented software systems","authors":"Mohammad M. Alnabhan, A. Hammouri, M. Hammad, Mohammad Atoum, Omamah Al-Thnebat","doi":"10.1109/ISACV.2018.8354085","DOIUrl":"https://doi.org/10.1109/ISACV.2018.8354085","url":null,"abstract":"Software visualization is one of the main techniques used to simplify the presentation of software systems and enhance their understandability. It is used to present the software system in a visual manner using simple, clear and meaningful symbols. This study proposes a new 2D software visualization approach. In this approach, each class is represented by rectangle, the name of the class placed above the rectangle, the size of class (Line of Code) represented by the height of the rectangle. The methods and the attributes are represented by circles and triangles respectively. The relationships among classes correspond to arrows. The proposed visualization approach was evaluated in terms of applicability and efficiency. Results have confirmed successful implementation of the proposed approach, and its ability to provide a simple and effective graphical presentation of extracted software components and properties.","PeriodicalId":184662,"journal":{"name":"2018 International Conference on Intelligent Systems and Computer Vision (ISCV)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124034911","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2018-04-01 | DOI: 10.1109/ISACV.2018.8354035
Issam Elafi, M. Jedra, N. Zahid
Moving-object tracking in video sequences has become an active research field due to its applications in various domains. This work proposes a PSO algorithm based on a new chromatic co-occurrence matrix descriptor in order to track objects in dynamic environments. The co-occurrence matrices make it possible to exploit information about the texture of the target objects. Qualitative and quantitative studies on a recent benchmark demonstrate that the obtained results are very competitive with many recent state-of-the-art methods.
{"title":"A novel particle swarm tracking system based on chromatic co-occurrence matrices","authors":"Issam Elafi, M. Jedra, N. Zahid","doi":"10.1109/ISACV.2018.8354035","DOIUrl":"https://doi.org/10.1109/ISACV.2018.8354035","url":null,"abstract":"Moving object tracking in video sequences becomes an active research field due to its application in various domains. This work proposes a PSO algorithm based on new chromatic co-occurrence matrices descriptor in order to track objects under a dynamic environment. The use of the co-occurrence matrices will give us the capability to exploit the information about the texture of the target objects. The qualitative and quantitative studies on newest benchmark demonstrate that the obtained results are very competitive in comparison with many recent state-of-the-art methods.","PeriodicalId":184662,"journal":{"name":"2018 International Conference on Intelligent Systems and Computer Vision (ISCV)","volume":"70 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131806462","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2018-04-01 | DOI: 10.1109/ISACV.2018.8354058
Hajar Filali, J. Riffi, A. M. Mahraz, H. Tairi
Face detection has recently attracted increasing interest due to the multitude of applications that build on it. In this context, we use machine-learning methods, which allow a machine to improve through a learning process and to perform tasks that are difficult or impossible to solve by more conventional algorithmic means. We establish a comparative study of four methods (Haar-AdaBoost, LBP-AdaBoost, GF-SVM, GF-NN), which differ in how they extract features and in the learning algorithms they adopt. The first two, Haar-AdaBoost and LBP-AdaBoost, are based on the boosting algorithm, used both for feature selection and for learning a strong classifier in a cascade. The last two, GF-SVM and GF-NN, use the Gabor filter to extract features. From this study, we find that detection time varies from one method to another: LBP-AdaBoost and Haar-AdaBoost are the fastest. In terms of detection rate and false-detection rate, however, the Haar-AdaBoost method remains the best of the four.
{"title":"Multiple face detection based on machine learning","authors":"Hajar Filali, J. Riffi, A. M. Mahraz, H. Tairi","doi":"10.1109/ISACV.2018.8354058","DOIUrl":"https://doi.org/10.1109/ISACV.2018.8354058","url":null,"abstract":"Facial detection has recently attracted increasing interest due to the multitude of applications that result from it. In this context, we have used methods based on machine learning that allows a machine to evolve through a learning process, and to perform tasks that are difficult or impossible to fill by more conventional algorithmic means. According to this context, we have established a comparative study between four methods (Haar-AdaBoost, LBP-AdaBoost, GF-SVM, GF-NN). These techniques vary according to the way in which they extract the data and the adopted learning algorithms. The first two methods “Haar-AdaBoost, LBP-AdaBoost” are based on the Boosting algorithm, which is used both for selection and for learning a strong classifier with a cascade classification. While the last two classification methods “GF-SVM, GF-NN” use the Gabor filter to extract the characteristics. From this study, we found that the detection time varies from one method to another. Indeed, the LBP-AdaBoost and Haar-AdaBoost methods are the fastest compared to others. But in terms of detection rate and false detection rate, the Haar-AdaBoost method remains the best of the four methods.","PeriodicalId":184662,"journal":{"name":"2018 International Conference on Intelligent Systems and Computer Vision (ISCV)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127504973","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2018-04-01 | DOI: 10.1109/ISACV.2018.8354038
Lahcen El Bouny, Mohammed Khalil, A. Adib
ECG signal denoising is one of the most critical steps in any ECG signal-processing task. This work provides a comparative study of two of the most widely used transform-domain method families for the ECG denoising problem. The first is the wavelet transform, in particular the Discrete Wavelet Transform (DWT) and the Stationary Wavelet Transform (SWT). The second is the Empirical Mode Decomposition (EMD) and its variants, namely Ensemble Empirical Mode Decomposition (EEMD) and Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (CEEMDAN). The study focuses on additive white Gaussian noise (AWGN), the most common noise source considered in ECG denoising algorithms. Simulation results on real ECG signals from the MIT-BIH Arrhythmia database show that the Stationary Wavelet Transform provides the best performance in terms of Signal-to-Noise Ratio (SNR), Root Mean Square Error (RMSE) and Percent Root-mean-square Difference (PRD).
{"title":"Performance analysis of ECG signal denoising methods in transform domain","authors":"Lahcen El Bouny, Mohammed Khalil, A. Adib","doi":"10.1109/ISACV.2018.8354038","DOIUrl":"https://doi.org/10.1109/ISACV.2018.8354038","url":null,"abstract":"ECG signal denoising is one of the most critical step in any ECG signal processing task. This work provides a comparative study between two of the most widely used transform methods in ECG signal denoising problem. The first class of methods is the wavelet transform, particularly the discrete Wavelet Transform (DWT) and the Stationary Wavelet Transform (SWT). The second class of methods is the Empirical Mode Decomposition (EMD) and its variants namely Ensemble Empirical Mode Decomposition (EEMD) and Complete Ensemble Empirical Mode Decomposition with adaptive noise (CEEMDAN). This study is focused on the additive white gaussian noise (AWGN) considered as the most common source of noise generally studied in different ECG signal denoising algorithms. Simulations results tested on real ECG signals from MIT-BIH Arrhythmia database showed that the Stationary Wavelet Transform provides the better performance in terms of Signal to Noise Ratio (SNR), Root Mean Square Error (RMSE) and Percent Root Mean Square Difference (PRD).","PeriodicalId":184662,"journal":{"name":"2018 International Conference on Intelligent Systems and Computer Vision (ISCV)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129241654","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2018-03-03 | DOI: 10.1109/ISACV.2018.8354064
Xiaoliang Wang, Peng Cheng, Xinchuan Liu, Benedict Uzochukwu
Deep learning has been widely recognized as a promising approach in many computer vision applications. In particular, one-stage and two-stage object detectors are regarded as the two most important groups of Convolutional-Neural-Network-based object detection methods. A one-stage object detector usually outperforms a two-stage object detector in speed, but it normally trails two-stage detectors in detection accuracy. In this study, the focal-loss-based RetinaNet, a one-stage object detector, is applied to vehicle detection so as to match the speed of regular one-stage detectors while surpassing two-stage detectors in accuracy. State-of-the-art performance is shown on the DETRAC vehicle dataset.
Title: Focal loss dense detector for vehicle surveillance