Pub Date: 2021-06-19, DOI: 10.5121/csit.2021.110801
Ting Xiong, Shaojing Fu, Xiaochun Luo, Tao Xie
While the Domain Name System (DNS) is part of the core infrastructure of the current network, in both concept and practice it still faces problems of centralization and data authentication. Decentralized storage of domain names and local verification by users, based on a blockchain, may be effective solutions. However, since a blockchain is an append-only database, domain name changes leave out-of-date records that still verify as correct when the Simplified Payment Verification (SPV) mechanism is used locally. This paper introduces Local Enhanced Authentication DNS (LEA-DNS), which stores domain names in a public blockchain database to provide decentralization while remaining compatible with the existing DNS. Using Merkle Mountain Range and RSA accumulator techniques, it guarantees both the validity and the timeliness of local domain name resolution results, ensuring they are correct and up to date. Experiments show that less than 3.052 Kb is needed to validate each DNS request, validation time is negligible, and only 9.44 Kb of data need to be stored locally by the web client. Its compatibility with the existing DNS and the lightness of its validation protocols indicate that the system is suitable for wide deployment.
{"title":"LEA-DNS: DNS Resolution Validity and Timeliness Guarantee Local Authentication Extension with Public Blockchain","authors":"Ting Xiong, Shaojing Fu, Xiaochun Luo, Tao Xie","doi":"10.5121/csit.2021.110801","DOIUrl":"https://doi.org/10.5121/csit.2021.110801","url":null,"abstract":"While the Domain Name System (DNS) is an infrastructure of the current network, it still faces the problem of centralization and data authentication according to its concept and practice. Decentralized storage of domain names and user local verification using blockchain may be effective solutions. However, since the blockchain is an add-only type database, domain name changes will cause out of date records to still be correct when using the Simplified Payment Verification (SPV) mechanism locally. This paper mainly introduces Local Enhanced Authentication DNS (LEA-DNS), which allows domain names to be stored in public blockchain database to provide decentralization feature and is compatible with the existing DNS. It achieves the validity and timeliness of local domain name resolution results to ensure correct and up to date with the Merkle Mountain Range and RSA accumulator technologies. Experiments show that less than 3.052Kb is needed for each DNS request to be validated, while the validation time is negligible, and only 9.44Kb of data need to be stored locally by the web client. 
Its compatibility with the existing DNS system and the lightness of the validation protocols indicate that this is a system suitable for deployment widely.","PeriodicalId":72673,"journal":{"name":"Computer science & information technology","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45061229","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
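The abstract above leans on RSA accumulators for its timeliness guarantee. As a rough illustration of the membership-proof mechanics (not the paper's actual construction), here is a toy accumulator; the tiny modulus, base, and prime-encoded records are assumptions for demonstration, and a real deployment would use a large RSA modulus with a hash-to-prime mapping for records:

```python
# Toy RSA accumulator: LEA-DNS pairs this primitive with a Merkle Mountain
# Range so a client can check a DNS record is both present and current.
# Parameters here are illustrative only and far too small to be secure.

def accumulate(g, primes, N):
    """Fold a set of prime-encoded records into one accumulator value."""
    acc = g
    for p in primes:
        acc = pow(acc, p, N)
    return acc

def witness(g, primes, member, N):
    """Witness for `member` = accumulator over all *other* records."""
    return accumulate(g, [p for p in primes if p != member], N)

def verify(wit, member, acc, N):
    """Membership holds iff wit^member == acc (mod N)."""
    return pow(wit, member, N) == acc

N = 61 * 53          # toy modulus (a real one is ~2048 bits)
g = 2                # public base
records = [3, 5, 7]  # records already hashed to distinct primes

acc = accumulate(g, records, N)
w5 = witness(g, records, 5, N)
print(verify(w5, 5, acc, N))   # True  -- record 5 is in the set
print(verify(w5, 11, acc, N))  # False -- record 11 is not
```

Because exponentiation commutes, the witness for a record is simply the accumulator computed without it, so verification needs only one modular exponentiation on the client side.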
Pub Date: 2021-05-29, DOI: 10.5121/CSIT.2021.110718
Olufemi O. Odegbile, Chaoyi Ma, Shigang Chen, D. Melissourgos, Haibo Wang
This paper introduces a hierarchical traffic model for spread measurement of network traffic flows. The hierarchical model, which aggregates lower-level flows into higher-level flows in a hierarchical structure, allows us to measure network traffic at different granularities at once, supporting diverse traffic analysis from a grand view down to fine-grained details. The spread of a flow is the number of distinct elements (under measurement) in the flow, where the flow label (which identifies packets belonging to the flow) and the elements (which are defined based on application need) can be found in packet headers or payload. Traditional flow spread estimators are designed without hierarchical traffic modeling in mind and incur high overhead when applied to each level of the traffic hierarchy. In this paper, we propose a new Hierarchical Virtual bitmap Estimator (HVE) that performs simultaneous multi-level traffic measurement at the same cost as a traditional estimator, without degrading measurement accuracy. We implement the proposed solution and perform experiments based on real traffic traces. The experimental results demonstrate that HVE improves measurement throughput by 43% to 155%, thanks to the reduction of per-packet processing overhead. For small to medium flows, its measurement accuracy is largely similar to that of traditional estimators that work at one level at a time; for large aggregate and base flows, its accuracy is better, with up to 97% smaller error in our experiments.
{"title":"Hierarchical Virtual Bitmaps for Spread Estimation in Traffic Measurement","authors":"Olufemi O. Odegbile, Chaoyi Ma, Shigang Chen, D. Melissourgos, Haibo Wang","doi":"10.5121/CSIT.2021.110718","DOIUrl":"https://doi.org/10.5121/CSIT.2021.110718","url":null,"abstract":"This paper introduces a hierarchical traffic model for spread measurement of network traffic flows. The hierarchical model, which aggregates lower level flows into higher-level flows in a hierarchical structure, will allow us to measure network traffic at different granularities at once to support diverse traffic analysis from a grand view to fine-grained details. The spread of a flow is the number of distinct elements (under measurement) in the flow, where the flow label (that identifies packets belonging to the flow) and the elements (which are defined based on application need) can be found in packet headers or payload. Traditional flow spread estimators are designed without hierarchical traffic modeling in mind, and incur high overhead when they are applied to each level of the traffic hierarchy. In this paper, we propose a new Hierarchical Virtual bitmap Estimator (HVE) that performs simultaneous multi-level traffic measurement, at the same cost of a traditional estimator, without degrading measurement accuracy. We implement the proposed solution and perform experiments based on real traffic traces. The experimental results demonstrate that HVE improves measurement throughput by 43% to 155%, thanks to the reduction of perpacket processing overhead. For small to medium flows, its measurement accuracy is largely similar to traditional estimators that work at one level at a time. 
For large aggregate and base flows, its accuracy is better, with up to 97% smaller error in our experiments.","PeriodicalId":72673,"journal":{"name":"Computer science & information technology","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43828515","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
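The virtual-bitmap idea behind HVE can be sketched briefly: each flow hashes its elements into a small virtual bitmap whose bits live in one shared physical array, and a linear-counting estimate over the virtual bitmap is corrected by the zero ratio of the whole array to cancel noise from other flows. The sketch below is a minimal single-level illustration under assumed parameters (`PHYS`, `VIRT`, the hash construction), not the paper's multi-level HVE:

```python
import hashlib
import math

PHYS = 1 << 14          # one shared physical bit array for all flows
VIRT = 512              # virtual bitmap size allotted to each flow
bits = [0] * PHYS

def h(key, mod):
    """Deterministic hash into [0, mod)."""
    d = hashlib.blake2b(repr(key).encode(), digest_size=8).digest()
    return int.from_bytes(d, "big") % mod

def record(flow, element):
    # The element picks a position in the flow's virtual bitmap; that
    # virtual position is then mapped to a bit in the shared array.
    v = h(("elem", element), VIRT)
    bits[h(("map", flow, v), PHYS)] = 1

def estimate(flow):
    # Linear counting on the virtual bitmap, corrected by the zero ratio
    # of the whole physical array to cancel noise from other flows.
    zv = sum(1 - bits[h(("map", flow, v), PHYS)] for v in range(VIRT))
    zp = bits.count(0)
    return VIRT * (math.log(VIRT / max(zv, 1)) - math.log(PHYS / zp))

for i in range(200):
    record("A", f"dst-{i}")
for i in range(50):
    record("B", f"dst-{i}")

print(round(estimate("A")), round(estimate("B")))  # roughly 200 and 50
```

Per-packet work is one hash and one bit write regardless of how many hierarchy levels share the physical array, which is the source of the throughput gain the abstract reports.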
This paper shows how knowledge of complex systems can be put into practice to counter climate change. A contribution of the paper is to show how individual behaviour, institutional analysis, political science and management can be grounded in, and integrated into, the complexity of natural systems to introduce mutual sustainability. Bytes are used as the unit of analysis to explain how nature governs complexity on a more reliable and comprehensive basis than humans can achieve using markets and hierarchies. Tax incentives are described that increase revenues while encouraging organisations to adopt elements of ecological governance found in nature and in some social organisations identified by Ostrom and the author. Ecological corporations provide benefits for all stakeholders. This makes them a common good that promotes global common goods such as enriching democracy from the bottom up while countering climate change, pollution, and inequalities in power, wealth and income.
{"title":"Managing the Complexity of Climate Change","authors":"S. Turnbull","doi":"10.2139/ssrn.3636845","DOIUrl":"https://doi.org/10.2139/ssrn.3636845","url":null,"abstract":"This paper indicates how the knowledge of complex systems can be put into practice to counter climate change. A contribution of the paper is to show how individual behaviour, institutional analysis, political science and management can be grounded and integrated into the complexity of natural systems to introduce mutual sustainability. Bytes are used as the unit of analysis to explain how nature governs complexity on a more reliable and comprehensive basis than can be achieved by humans using markets and hierarchies. Tax incentives are described to increase revenues while encouraging organisations to adopt elements of ecological governance found in nature and in some social organisations identified by Ostrom and the author. Ecological corporations provide benefits for all stakeholders. This makes them a common good to promote global common goods like enriching democracy from the bottom up while countering: climate change, pollution, and inequalities in power, wealth and income.","PeriodicalId":72673,"journal":{"name":"Computer science & information technology","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42783012","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-03-27, DOI: 10.5121/CSIT.2021.110405
Xue Gong, Wan-Cyun Li
With the rapid development of network technology and multimedia, current color image digital watermarking algorithms suffer from small capacity and poor robustness. To improve the capacity and attack resistance of digital watermarking, a blind color image digital watermarking algorithm based on QR codes is proposed. The algorithm combines the Discrete Wavelet Transform (DWT) and the Discrete Cosine Transform (DCT). First, the color image is converted from RGB space to YCbCr space, the Y component is extracted, and a two-level discrete wavelet transform is applied; next, the LL2 subband is divided into blocks and a discrete cosine transform is applied to each block; finally, the Arnold-transformed watermark information is embedded into the blocks. Experimental results show that the PSNR of the color image with the embedded QR code is 56.7159 dB when not under attack. Under attack, its PSNR remains above 30 dB and its NC above 0.95, demonstrating that the algorithm is robust and achieves blind watermark extraction.
{"title":"A Color Image Blind Digital Watermarking Algorithm Based on QR Code","authors":"Xue Gong, Wan-Cyun Li","doi":"10.5121/CSIT.2021.110405","DOIUrl":"https://doi.org/10.5121/CSIT.2021.110405","url":null,"abstract":"With the rapid development of network technology and multimedia, the current color image digital watermarking algorithm has the problems of small capacity and poor robustness. In order to improve the capacity and anti-attack ability of digital watermarking. A color image blind digital watermarking algorithm based on QR code is proposed. The algorithm combines Discrete Wavelet Transform (DWT) and Discrete Cosine Transform (DCT). First, the color image was converted from RGB space to YCbCr space, and the Y component was extracted and the second-level discrete wavelet transform is performed; secondly, the LL2 subband was divided into blocks and carried out discrete cosine transform; finally, used the embedding method to embed the Arnold transform watermark information into the block. The experimental results show that the PSNR of the color image embedded with the QR code is 56.7159 without being attacked. After being attacked, its PSNR is more than 30dB and NC is more than 0.95. It is proved that the algorithm has good robustness and can achieve blind watermark extraction.","PeriodicalId":72673,"journal":{"name":"Computer science & information technology","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48687460","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-03-20, DOI: 10.5121/CSIT.2021.110308
M. Menacer, K. Smaïli
The Arabic language has many varieties, including its standard form, Modern Standard Arabic (MSA), and its spoken forms, namely the dialects. These dialects are representative examples of under-resourced languages for which automatic speech recognition remains an unresolved issue. To address this issue, we recorded several hours of spoken Algerian dialect and used them to train a baseline model. This model was then boosted by taking advantage of other languages that influence this dialect, integrating their data into one large corpus and investigating three approaches: multilingual training, multitask learning and transfer learning. The best performance was achieved using a limited and balanced amount of acoustic data from each additional language, relative to the data size of the studied dialect. This approach led to an improvement of 3.8% in word error rate over the baseline system trained only on the dialect data.
{"title":"Investigating Data Sharing in Speech Recognition for an Under-Resourced Language: The Case of Algerian Dialect","authors":"M. Menacer, K. Smaïli","doi":"10.5121/CSIT.2021.110308","DOIUrl":"https://doi.org/10.5121/CSIT.2021.110308","url":null,"abstract":"The Arabic language has many varieties, including its standard form, Modern Standard Arabic (MSA), and its spoken forms, namely the dialects. Those dialects are representative examples of under-resourced languages for which automatic speech recognition is considered as an unresolved issue. To address this issue, we recorded several hours of spoken Algerian dialect and used them to train a baseline model. This model was boosted afterwards by taking advantage of other languages that impact this dialect by integrating their data in one large corpus and by investigating three approaches: multilingual training, multitask learning and transfer learning. The best performance was achieved using a limited and balanced amount of acoustic data from each additional language, as compared to the data size of the studied dialect. This approach led to an improvement of 3.8% in terms of word error rate in comparison to the baseline system trained only on the dialect data.","PeriodicalId":72673,"journal":{"name":"Computer science & information technology","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41338237","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-03-02, DOI: 10.21203/RS.3.RS-231323/V1
C. Kong, Junzuo Wang, Xiaogang Ma, Yiping Tian, Zhiting Zhang, Kai Xu
The random forest (RF) model is improved through optimization of the unbalanced geological hazards dataset, differentiation of continuous geological hazards evaluation factors, sample similarity calculation, and an iterative method for finding optimal random features by calculating out-of-bag errors. A geological hazards susceptibility evaluation model based on the optimized RF (OPRF) was established and used to assess susceptibility for Lingyun County. ROC curves and field investigation were then used to verify the efficiency of the different geological hazards susceptibility assessment models. The AUC values for the five models were estimated as 0.766, 0.814, 0.842, 0.846 and 0.934, respectively, indicating that the prediction accuracy of the OPRF model can be as high as 93.4%. This result demonstrates that the geological hazards susceptibility assessment model based on OPRF has the highest prediction accuracy. Furthermore, the OPRF model could be extended to other regions with similar geological environment backgrounds for geological hazards susceptibility assessment and prediction.
{"title":"Optimization of Random Forest Model for Assessing and Predicting Geological Hazards Susceptibility in Lingyun County","authors":"C. Kong, Junzuo Wang, Xiaogang Ma, Yiping Tian, Zhiting Zhang, Kai Xu","doi":"10.21203/RS.3.RS-231323/V1","DOIUrl":"https://doi.org/10.21203/RS.3.RS-231323/V1","url":null,"abstract":"The random forest (RF) model is improved by the optimization of unbalanced geological hazards dataset, differentiation of continuous geological hazards evaluation factors, sample similarity calculation, and iterative method for finding optimal random characteristics by calculating out-of-bagger errors. The geological hazards susceptibility evaluation model based on optimized RF (OPRF) was established and used to assess the susceptibility for Lingyun County. Then, ROC curve and field investigation were performed to verify the efficiency for different geological hazards susceptibility assessment models. The AUC values for five models were estimated as 0.766, 0.814, 0.842, 0.846 and 0.934, respectively, which indicated that the prediction accuracy of the OPRF model can be as high as 93.4%. This result demonstrated that the geological hazards susceptibility assessment model based on OPRF has the highest prediction accuracy. 
Furthermore, the OPRF model could be extended to other regions with similar geological environment backgrounds for geological hazards susceptibility assessment and prediction.","PeriodicalId":72673,"journal":{"name":"Computer science & information technology","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-03-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47968613","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
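The AUC used above to rank the five models can be computed without plotting the ROC curve, via the Mann-Whitney rank statistic. A minimal pure-Python sketch (handles tied scores with average ranks):

```python
def auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic: the
    probability that a random positive is scored above a random negative."""
    pairs = sorted(zip(scores, labels))
    n_pos = sum(labels)
    n_neg = len(labels) - n_pos
    rank_sum, i = 0.0, 0
    while i < len(pairs):
        j = i
        while j < len(pairs) and pairs[j][0] == pairs[i][0]:
            j += 1                      # group tied scores
        avg_rank = (i + 1 + j) / 2      # average 1-based rank of the tie group
        rank_sum += avg_rank * sum(lab for _, lab in pairs[i:j])
        i = j
    return (rank_sum - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

print(auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.75
print(auc([0, 1], [0.2, 0.9]))                   # 1.0
```

An AUC of 0.934, as reported for the OPRF model, means a randomly chosen hazard site is ranked above a randomly chosen safe site 93.4% of the time.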
Pub Date: 2021-01-23, DOI: 10.5121/CSIT.2021.110112
A. Bonti, Akanksha Saini, Thien Pham, M. Abdelrazek, Lorenzo Pinto
The data economy is predicted to boom, becoming a $156 billion business by 2025. In this demo we introduce the use of distributed ledger technologies (DLT) applied to digital surveys, in order to create an ecosystem where data becomes a central piece of a complex economy. Our system offers several key features: ownership, traceability, secure profiles, and anonymity where required. Most importantly, an incentive mechanism rewards all participants, both the users creating surveys and those answering them. DSurvey (decentralized survey) is a novel application framework that aims to move away from the large commercial data-sink paradigm, whose business is restricted to gathering data and reselling it. Our solution ensures that no central data sink exists and that data always belongs to its creator, who is able to know who is using it and to receive royalties.
{"title":"DSurvey: A Blockchain-Enhanced Survey Platform for the Data Economy","authors":"A. Bonti, Akanksha Saini, Thien Pham, M. Abdelrazek, Lorenzo Pinto","doi":"10.5121/CSIT.2021.110112","DOIUrl":"https://doi.org/10.5121/CSIT.2021.110112","url":null,"abstract":"The data economy is predicted to boom and become a 156B dollars business by 2025. In this demo we introduce the use of distributed ledger technologies (DLT) applied to digital surveys in order to create an ecosystem where data becomes a central piece of a complex economy. Our system allows for interesting key features; ownership, traceability, secure profiles, and anonymity where required. Also, the most important feature, is the incentive mechanism that rewards all participants, both users creating surveys and those answering the surveys. DSurvey (decentralized survey) is a novel application framework that aims at moving away from the large commercial data sink paradigm whose business is restricted to gathering data and reselling it. Our solution makes so that no central data sink exists, and it always belongs to the creator, who are able to know who is using it, and receive royalties.","PeriodicalId":72673,"journal":{"name":"Computer science & information technology","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-01-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47264362","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-12-26, DOI: 10.5121/csit.2020.102007
R. Phalnikar, S. Dixit, Harsha V. Talele
The COVID-19 infection, caused by a novel coronavirus, has been declared a pandemic and a public health emergency of international concern. Coronavirus infections have previously been recognized in people and are known to cause Middle East Respiratory Syndrome (MERS) and Severe Acute Respiratory Syndrome (SARS). Unlike the earlier infections, COVID-19 spreads alarmingly, and the body of scientific knowledge on the virus is small and lacks substantiation. To manage this crisis, artificial intelligence (AI) promises to play a key role in understanding and addressing COVID-19. It can be valuable for identifying the infection, analysing it, treating it, and predicting the stages of infection. Artificial intelligence algorithms can be applied to diagnose COVID-19 and to step up research and therapy. The paper presents a detailed flowchart of COVID-19 patient care and discusses the use of AI at its various stages. The preliminary contribution of the paper is in identifying the stages where artificial intelligence and its allied fields can help in managing COVID-19 patients, paving the road for systematic research in the future.
{"title":"Clinical Assessment and Management of Covid-19 Patients using Artificial Intelligence","authors":"R. Phalnikar, S. Dixit, Harsha V. Talele","doi":"10.5121/csit.2020.102007","DOIUrl":"https://doi.org/10.5121/csit.2020.102007","url":null,"abstract":"The COVID-19 infection caused by Novel Corona Virus has been declared a pandemic and a public health emergency of international concern. Infections caused by Corona Virus have been previously recognized in people and is known to cause Middle East Respiratory Syndrome (MERS) and Severe Acute Respiratory Syndrome (SARS). Unlike the earlier infections, COVID19 spreads alarmingly and the experience and volume of the scientific knowledge on the virus is small and lacks substantiation. To manage this crisis, Artificial intelligence (AI) promises to play a key role in understanding and addressing the COVID-19 crisis. It tends to be valuable to identify the infection, analyse it, treat it and also predict the stages of infection. Artificial intelligence algorithms can be applied to make diagnosis of COVID-19 and stepping up research and therapy. The paper explains a detailed flowchart of COVID-19 patient and discusses the use of AI at various stages. 
The preliminary contribution of the paper is in identifying the stages where the use of Artificial Intelligence and its allied fields can help in managing COVID-19 patient and paves a road for systematic research in future.","PeriodicalId":72673,"journal":{"name":"Computer science & information technology","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-12-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44656067","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-12-26, DOI: 10.5121/csit.2020.102002
Rachana Jaiswal, S. Satarkar
In medical imaging, accurate anatomical structure extraction is important for diagnosis and therapeutic interventional planning. Thus, for easier, quicker and more accurate diagnosis, image processing technologies may be employed in the analysis of medical images and in feature extraction from them. In this paper, some modifications are made to the level set algorithm, and the modified algorithm is used to extract the contours of foetal objects in an image. The proposed approach is applied to foetal ultrasound images. In the traditional approach, foetal parameters are extracted manually from ultrasound images; because manual measurements lack consistency and accuracy, an automatic technique for obtaining foetal biometric measurements is highly desirable. The proposed approach is based on global and local region information for foetal contour extraction from ultrasonic images. The primary goal of this research is to provide a new methodology to aid analysis and feature extraction from foetal images.
{"title":"Biometric Foetal Contour Extraction using Hybrid Level Set","authors":"Rachana Jaiswal, S. Satarkar","doi":"10.5121/csit.2020.102002","DOIUrl":"https://doi.org/10.5121/csit.2020.102002","url":null,"abstract":"In medical imaging, accurate anatomical structure extraction is important for diagnosis and therapeutic interventional planning. So, for easier, quicker and accurate diagnosis of medical images, image processing technologies may be employed in analysis and feature extraction of medical images. In this paper, some modifications to level set algorithm are made and modified algorithm is used for extracting contour of foetal objects in an image. The proposed approach is applied on foetal ultrasound images. In traditional approach, foetal parameters are extracted manually from ultrasound images. Due to lack of consistency and accuracy of manual measurements, an automatic technique is highly desirable to obtain foetal biometric measurements. This proposed approach is based on global & local region information for foetal contour extraction from ultrasonic images. The primary goal of this research is to provide a new methodology to aid the analysis and feature extraction from foetal images.","PeriodicalId":72673,"journal":{"name":"Computer science & information technology","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-12-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42089613","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-12-26, DOI: 10.5121/csit.2020.102005
Jagan Raj Jayapandiyan, C. Kavitha, K. Sakthivel
In this research work, an attempt has been made to use multiple image files for steganographic encoding, together with the capability of recovering the secret text in the event of any image corruption during transit. The algorithm strengthens the security of the secret image, since the embedded checksum exposes any attempt by unauthorized users or intruders to corrupt the picture in any way. If any of the stego images undergoes steganalysis or a man-in-the-middle (MitM) attack, the proposed algorithm can effectively regenerate the content of that stego image using the other intact stego images received at the receiving end.
{"title":"Multi Image Steganography using Distributed LSB Algorithm and Secret Text Recovery on Stego Image Corruption","authors":"Jagan Raj Jayapandiyan, C. Kavitha, K. Sakthivel","doi":"10.5121/csit.2020.102005","DOIUrl":"https://doi.org/10.5121/csit.2020.102005","url":null,"abstract":"In this proposed research work, an attempt has been made to use multiple image files for steganography encoding along with the capability of secret text recovery in the event of any image corruption during the transit. This algorithm is effective on the security factor of secret image since the embedded checksum will validate for any unauthorized users or intruders attempt to corrupt the picture in any aspect. If any of the stego image underwent any steganalysis or MiM attack, then this proposed algorithm can effectively regenerate the content of one stego image using other intact stego images received in the receiving end.","PeriodicalId":72673,"journal":{"name":"Computer science & information technology","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-12-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46901870","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}