Pub Date: 2019-12-01 | DOI: 10.1109/CSCI49370.2019.00098
Tebogo Makaba, B. Gatsheni
Road traffic accidents and incidents are responsible for deaths, injuries, and physical disabilities in many developed countries. This burden mainly affects road users, short-term insurance companies, road accident funds, and the public health care infrastructure. This study reviewed the literature on road traffic accidents/incidents (RTA/RTI) from a global and an African perspective. The bibliometric review included publications from journals, conference proceedings, books, and book chapters. The main focus of the study was a comparison of the global and African perspectives using data extracted from Scopus for the decade from 2010 to August 2019. The Scopus data were used to map out links, gaps, and contributions published in this research space. Further analysis was conducted using MS Excel and the VOSviewer science-mapping visualization tool. The review found that, both globally and in Africa, there has been a gradual increase in research in the field of road traffic accidents/incidents.
Title: A Decade Bibliometric Review of Road Traffic Accidents and Incidents: A Computational Perspective
Venue: 2019 International Conference on Computational Science and Computational Intelligence (CSCI)
Pub Date: 2019-12-01 | DOI: 10.1109/CSCI49370.2019.00253
Q. Ma, M. Murata
This paper presents methods for simultaneously performing topic/keyword extraction and unsupervised classification of questions posted on community-based question answering (CQA) or Q&A websites, using topic models and hybrid models. Large-scale experiments on two kinds of data, called category data and subtyping data, show the effectiveness of our methods. Purity and correct-rate results show that the topic models outperform clustering methods, that the hybrid models outperform the topic models in question classification, and that term frequency-inverse document frequency (TF-IDF) weighting is effective for the subtyping data. Manual evaluation of the extracted keywords shows the effectiveness of the topic models for topic extraction.
Title: Topic Extraction and Classification for Questions Posted in Community-Based Question Answering Services
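The abstract credits TF-IDF weighting for gains on the subtyping data. As an editorial illustration only (not the authors' code; the toy questions and function name are assumptions), a minimal stdlib-only TF-IDF over tokenized questions:

```python
import math
from collections import Counter

def tfidf(docs):
    """Compute TF-IDF weights for a list of tokenized documents."""
    n = len(docs)
    # document frequency: number of documents containing each term
    df = Counter(t for doc in docs for t in set(doc))
    weights = []
    for doc in docs:
        tf = Counter(doc)
        weights.append({t: (c / len(doc)) * math.log(n / df[t])
                        for t, c in tf.items()})
    return weights

docs = [["install", "python", "error"],
        ["python", "import", "error"],
        ["best", "pizza", "recipe"]]
w = tfidf(docs)
# "pizza" occurs in only one question, so it gets a higher weight
# than "python", which occurs in two
```

Terms unique to one question dominate its weight vector, which is what makes TF-IDF useful for separating question subtypes.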
Pub Date: 2019-12-01 | DOI: 10.1109/CSCI49370.2019.00152
K. Heljakka, P. Ihamäki, Pauliina Tuomi, Petri Saarikoski
This paper explores coding with the smart toy robots Dash and Botley as part of playful learning in the Finnish early-education context. The findings of our study demonstrate how coding with the two toy robots was approached, conducted, and played by Finnish preschoolers aged 5-6 years. The main conclusion is that preschoolers used the robots' coding-related affordances mainly to develop gamified play around them: designing tracks for the toys, programming the toys to solve obstacle paths, and competing in player-generated contests of dexterity, speed, and physically mobile play.
Title: Gamified Coding: Toy Robots and Playful Learning in Early Education
Accurate segmentation and volumetric analysis of the colon wall is essential to advance computer-aided detection (CAD) of colonic polyps in computed tomography colonography (CTC). Because of their limited geometric information, flat polyps are very difficult to detect in both optical colonoscopy and CTC. In this paper, we present a new framework for segmentation and volumetric analysis of the colon wall that improves detection of flat polyps. First, partial volume (PV) effects around the inner mucous membrane of the colon were preserved through our PV-based electronic colon cleansing. PV information was further used to guide colon wall segmentation and to establish the starting point of iso-potential surfaces for colon wall thickness measures. Then, we employed a dual level-set competition model to simultaneously segment the inner and outer colon wall, taking into account the mutual interference between the two borders. We further conducted volumetric analysis of the dynamic colon wall information and built four layers of iso-potential surfaces that represent the intrinsic anatomical information of the colon wall, establishing a unique point-to-point path starting from the very beginning of the mucous membrane of the colon. Because flat polyps are plaque-like lesions raised less than 3 mm from the colonic mucosa layer, including PV effects retains fine information about flat polyps and thus improves detection performance. The proposed framework was validated on patient CTC scans with flat polyps. Experimental results demonstrate that the framework is very promising for detection of colonic flat polyps via CTC.
Title: Segmentation and Volumetric Analysis of Colon Wall for Detection of Flat Polyp Candidates via CT Colonography
Authors: Lihong C. Li, Xinzhou Wei, Kenneth Ng, Anushka Banerjee, Huafeng Wang, Wenfeng Song, Zhengrong Liang
Pub Date: 2019-12-01 | DOI: 10.1109/CSCI49370.2019.00193
Pub Date: 2019-12-01 | DOI: 10.1109/CSCI49370.2019.00160
M. Javed, M. Estep
The focus of this study is the dedicated lab period accompanying traditional lectures in CS 4204 Software Engineering. iPhone mobile app development took place in a Mac lab using the industry-standard Apple Xcode IDE and the Swift programming language. Tutorials that built upon each other were assigned to students, leading to a culminating project. The labs were designed to help students put software engineering theory into practice during mobile app development. Student perceptions and lab maintenance issues are discussed. Overall, faculty concluded that the added lab periods were beneficial for students.
Title: Teaching Undergraduate Software Engineering: Xcode Mobile App Development during Dedicated Lab Periods
Pub Date: 2019-12-01 | DOI: 10.1109/CSCI49370.2019.00271
F. Alqahtani, Frederick T. Sheldon
The primary concern of organizations consuming cloud services is the degree of control they retain over their data. Companies are legally required to monitor a subset of the data that is crucial to their business and customers. Cloud service providers, however, have not shown commitment to handling enterprise data securely, which may put sensitive data at risk. Hence, consumers are expected to maintain control over their sensitive data both on their local infrastructure and in the cloud. To achieve this, companies employ methods that block access, at the network level, to cloud services that store sensitive data. Such restrictions, however, may limit employee productivity while failing to counter the malicious activities of rogue employees. In this paper, we propose a model that gives consumers and providers the ability to transparently track data in the cloud environment. The model allows consumers to retain control over their data and to audit how third-party services treat it, while employees remain free to use the cloud service.
Title: CloudMonitor: Data Flow Filtering as a Service
Pub Date: 2019-12-01 | DOI: 10.1109/CSCI49370.2019.00242
A. Kurkovsky
Higher education institutions have already accumulated enormous volumes of big data associated with student success, enrolment, professors, educational programs, educational computer systems, economics, etc., that can potentially be used for sustainable-development analysis. In practice, however, such applications remain relatively rare because of the complexity of the subject domain and the lack of a methodological base for reusing previously created effective models. This paper introduces an approach that incorporates big data into higher education sustainability analysis and reduces the complexity of the subject-domain infrastructure by using a set of formalized systems. Within this approach, a simulation umbrella serves as a unified methodological base combining big data and sustainability analysis. To illustrate how the proposed approach works, the paper includes a simulation case study of a young, fast-growing US higher education institution.
Title: Big Data and Simulation to Analyze Higher Education Sustainable Development
Pub Date: 2019-12-01 | DOI: 10.1109/CSCI49370.2019.00111
Sung-won Park, Taesic Kim
Interleaved ADCs are used to increase the sampling rate achievable with existing ADCs: by utilizing M ADCs, the sampling rate can be increased by a factor of M. Interleaving, however, introduces timing mismatch between the converters, which causes nonuniform sampling. This paper considers timing-mismatch correction in the frequency domain. For a stationary signal, the entire nonuniformly sampled signal is transformed, the timing mismatch is corrected in the frequency domain, and the uniformly sampled signal is reconstructed with the inverse Fourier transform. The longer the signal, the better the correction performance, but a long signal also requires a long delay. In image compression, the DCT is applied to small blocks of an image; in this paper, the DCT is used to correct timing mismatch, and only an M-point DCT is needed, minimizing the delay. Experimental results show that timing-mismatch correction using the DCT outperforms the DFT method.
Title: Correcting Timing Mismatch of Interleaved ADCs Using DCT
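As background for the timing-mismatch problem (an editorial sketch, not the paper's correction algorithm; all names and values are assumptions): with M=2 interleaved ADCs, a skew on one converter shifts every other sample instant, producing nonuniform sampling of the input:

```python
import math

def interleaved_samples(freq, fs, n, skew):
    """Sample a sinusoid with two interleaved ADCs; the second
    ADC fires `skew` seconds late, giving nonuniform sampling."""
    out = []
    for k in range(n):
        t = k / fs + (skew if k % 2 else 0.0)  # odd-indexed samples are late
        out.append(math.sin(2 * math.pi * freq * t))
    return out

ideal = interleaved_samples(50.0, 1000.0, 8, 0.0)
skewed = interleaved_samples(50.0, 1000.0, 8, 5e-5)
err = max(abs(a - b) for a, b in zip(ideal, skewed))
# even-indexed samples match exactly; odd-indexed ones carry the skew error
```

The correction task is to recover the `ideal` uniform samples from the `skewed` nonuniform ones; the paper does this in the frequency domain with an M-point DCT.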
Pub Date: 2019-12-01 | DOI: 10.1109/CSCI49370.2019.00259
Matthew Spradling, Mark Allison, Tsenguun Tsogbadrakh, Jay Strong
The prevalence of social botnets has increased public distrust of social media networks. Current methods exist for detecting bot activity on Twitter, Reddit, Facebook, and other social media platforms. Most of these detection methods rely upon observing user behavior for a period of time. Unfortunately, the behavior observation period allows time for a botnet to successfully propagate one or many posts before removal. In this paper, we model the post propagation patterns of normal users and social botnets. We prove that a botnet may exploit deterministic propagation actions to elevate a post even with a small botnet population. We propose a probabilistic model which can limit the impact of social media botnets until they can be detected and removed. While our approach maintains expected results for non-coordinated activity, coordinated botnets will be detected before propagation with high probability.
Title: Toward Limiting Social Botnet Effectiveness while Detection Is Performed: A Probabilistic Approach
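The core idea can be illustrated with a hypothetical toy (the mechanism, threshold, and names are editorial assumptions, not the authors' model): if each propagation action is counted only with some probability, a small botnet's guaranteed promotion of a post becomes a low-probability event:

```python
import random

def promoted(threshold, votes, accept_prob, rng):
    """Count each vote only with probability accept_prob and
    report whether the promotion threshold is cleared."""
    counted = sum(1 for _ in range(votes) if rng.random() < accept_prob)
    return counted >= threshold

rng = random.Random(42)
# deterministic counting: 10 bot votes always clear a threshold of 10
always = promoted(10, 10, 1.0, rng)
# probabilistic counting: the same botnet clears it with probability 0.5**10
successes = sum(promoted(10, 10, 0.5, rng) for _ in range(1000))
```

Legitimate posts with broad support are barely affected, since their vote counts sit far above the threshold in expectation, while a botnet sized exactly to the threshold almost never succeeds.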
Pub Date: 2019-12-01 | DOI: 10.1109/CSCI49370.2019.00112
W. Khedr, Mohamed W. Abo Elsoud
Traditional watermarking techniques are widely used in information hiding. However, these techniques require some information from the original image to extract the watermark from the cover image in the frequency domain. This paper introduces a new blind watermarking technique for embedding three grayscale watermark images into a color cover image, based on the Discrete Wavelet Transform (DWT), the Discrete Cosine Transform (DCT), and Singular Value Decomposition (SVD). The proposed technique consists of three phases. First, the three gray images are embedded into the SVD components of the color cover image to produce a watermarked image. Second, the watermarked image is embedded into the low-frequency (DWT-DCT) domains of one component of the RGB cover image to produce the final watermarked image. Finally, the three watermark images are blindly extracted by the reverse operations, without requiring their SVD components. Experimental results show a perceptible improvement over recent watermarking techniques with respect to PSNR and Normalized Correlation (NC). The technique is also robust to noise and intentional attacks.
Title: A Novel Blind and Robust Watermarking Technique of Multiple Images
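The SVD phase can be illustrated with a common additive singular-value scheme (an editorial sketch with assumed parameters and a toy cover matrix, not the paper's full DWT-DCT-SVD pipeline):

```python
import numpy as np

def embed(cover, mark, alpha=0.05):
    """Additive SVD embedding: perturb the cover's singular
    values with the watermark and keep the originals as a key."""
    u, s, vt = np.linalg.svd(cover)
    marked = u @ np.diag(s + alpha * mark) @ vt
    return marked, s

def extract(marked, s_orig, alpha=0.05):
    """Recover the watermark from the marked image's singular
    values and the key (no original pixels needed)."""
    s_marked = np.linalg.svd(marked, compute_uv=False)
    return (s_marked - s_orig) / alpha

# toy cover with well-separated singular values, so the
# perturbation does not reorder them
cover = np.diag([3.0, 2.0, 1.0, 0.5])
mark = np.array([0.5, 0.4, 0.3, 0.2])
marked, key = embed(cover, mark)
recovered = extract(marked, key)
```

The small `alpha` trades robustness against imperceptibility; the paper additionally routes the marked data through DWT and DCT domains before producing the final watermarked image.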