Efficiency Improvement Approach of Deep Web Data Extraction
Pub Date: 2019-12-01 | DOI: 10.1109/ICCES48960.2019.9068134
Mona M. Nasr, Hanan Fahmy, M. Thabet
The Deep Web is an important research topic. Owing to the complicated structure of deep web pages, extracting their content is a very challenging issue. In this paper, a framework for efficiently discovering deep web data records is proposed. The proposed framework is able to crawl and fetch pages relevant to a user's text query. To retrieve the relevant pages, this paper proposes a similarity method based on an improved weighting function (ITF-IDF). The framework utilizes the web page's visual features to obtain data records rather than analyzing the HTML source code. To retrieve the data records accurately, an approach called the layout tree is exploited. The proposed framework uses a Noise Filter (NSFilter) algorithm to eliminate noise such as headers, footers, ads, and other unnecessary content. Data records are defined as visual blocks with similar layouts. To cluster visual blocks with similar layouts, this paper proposes a method based on appearance similarity and a similar shape and coordinate (SSC) feature. The experimental results show that the proposed framework outperforms previous data extraction approaches.
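The relevance step can be made concrete with a plain retrieval loop. The sketch below (Python, scikit-learn) ranks crawled pages by cosine similarity to the query using standard TF-IDF weights as a stand-in for the paper's improved ITF-IDF function, whose exact formula is not given in the abstract; the function name rank_pages and the top_k parameter are illustrative.

```python
# Sketch: rank crawled pages by cosine similarity between the user's text
# query and each page. Standard TF-IDF weighting is used here as a stand-in
# for the paper's improved ITF-IDF function.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def rank_pages(query, page_texts, top_k=5):
    vectorizer = TfidfVectorizer(stop_words="english")
    page_matrix = vectorizer.fit_transform(page_texts)   # one row per page
    query_vector = vectorizer.transform([query])          # same vocabulary
    scores = cosine_similarity(query_vector, page_matrix)[0]
    ranked = sorted(enumerate(scores), key=lambda p: p[1], reverse=True)
    return ranked[:top_k]                                  # (page index, score) pairs
```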
{"title":"Efficiency Improvement Approach of Deep Web Data Extraction","authors":"Mona M. Nasr, Hanan Fahmy, M. Thabet","doi":"10.1109/ICCES48960.2019.9068134","DOIUrl":"https://doi.org/10.1109/ICCES48960.2019.9068134","url":null,"abstract":"Deep Web is an important topic of research. According to the deep web pages' complicated structure, extracting content is a very challenging issue. In this paper a framework for efficiently discovery deep web data records is proposed. The proposed framework is able to perform crawling and fetching relevant pages related to user's text query. To retrieve the relevant pages this paper proposes a similarity method based on the improved weighting function (ITF-IDF). This framework utilizes the web page's visual features to obtain data records rather than analyze the source code of HTML. To accurately retrieve the data records, an approach called layout tree is exploited. The proposed framework uses Noise Filter (NSFilter) algorithm to eliminate all noise like header, footer, ads and unnecessary content. Data records are defined as a similar layout visual blocks. To cluster the visual blocks with similar layout, this paper proposes a method based on appearance similarity and similar shape and coordinate feature (SSC). The experiment results illustrate that the framework being proposed is better than previous data extraction works.","PeriodicalId":136643,"journal":{"name":"2019 14th International Conference on Computer Engineering and Systems (ICCES)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126736698","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Comparison of Video Face Detection methods Using HSV, HSL and HSI Color Spaces
Pub Date: 2019-12-01 | DOI: 10.1109/ICCES48960.2019.9068182
S. Elaw, W. Abd-Elhafiez, M. Heshmat
In this paper, new face detection methods based on the HSL and HSI color spaces are presented, together with a comparison against a new HSV skin color range. The three color spaces are built on the H, S, V, L, and I components, which represent hue, saturation, value, luminance, and intensity, respectively. The YouTube Celebrities Face Tracking and Recognition Dataset is used; it contains 1910 sequences of 47 subjects, and all videos are encoded in MPEG4 at a 25 fps rate. The proposed methods are based on two main steps: first, skin-like regions are detected using the gradient values of the proposed color space; then, based on the main facial features such as the eyes, mouth, and nose, the desired faces are determined from the candidate regions. According to the experimental results, the HSV color space gives good results for lighten_faces videos, the HSL color space gives good results for multi_faces videos, and the HSI color space gives good results for single_faces and zoomed_faces videos.
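A minimal sketch of the skin-detection step is shown below, assuming an OpenCV pipeline; the HSV threshold values are illustrative placeholders, not the new skin color range proposed in the paper.

```python
# Sketch: skin-like region detection by thresholding in HSV, the first step
# described in the abstract. The bounds below are illustrative only.
import cv2
import numpy as np

def skin_mask_hsv(frame_bgr):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    lower = np.array([0, 40, 60], dtype=np.uint8)      # assumed lower bound (H, S, V)
    upper = np.array([25, 255, 255], dtype=np.uint8)   # assumed upper bound
    mask = cv2.inRange(hsv, lower, upper)               # 255 where the pixel falls in range
    # Candidate regions would then be filtered using facial features
    # (eyes, mouth, nose), as described in the abstract.
    return mask
```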
{"title":"Comparison of Video Face Detection methods Using HSV, HSL and HSI Color Spaces","authors":"S. Elaw, W. Abd-Elhafiez, M. Heshmat","doi":"10.1109/ICCES48960.2019.9068182","DOIUrl":"https://doi.org/10.1109/ICCES48960.2019.9068182","url":null,"abstract":"this paper presents, new face detection methods based on HSL and HSI color spaces are presented. A comparison of the new face detection methods and a new HSV skin color range is presented. The three color spaces are based on: H, S, V, L, and I, whose represent Hue, Saturation, Value, Luminance and Intensity respectively. YouTube Celebrities Face Tracking and Recognition Dataset is used. It contains 1910 sequences of 47 subjects. All dataset videos are encoded in MPEG4 at 25fps rate. The proposed methods based on two main steps, at the beginning, the skin like regions is detected by the gradient values of the proposed color space. According to main facial features, such as eyes, mouth and nose the desired faces are determined from the recommended regions. According to experimental results, HSV color space gives good results in lighten_faces, HSL color space gives good results for multi_faces and HSI color space gives good results for single_faces and zoomed_faces videos.","PeriodicalId":136643,"journal":{"name":"2019 14th International Conference on Computer Engineering and Systems (ICCES)","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126784646","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Compact aggregate short-lived signatures for consortium consensus protocols
Pub Date: 2019-12-01 | DOI: 10.1109/ICCES48960.2019.9068157
Sherif M. Samir, H. K. Mohamed, Hazem Said
Cryptography and cryptographic primitives are widely considered the most important foundation of blockchain, providing secure, anonymity-guaranteed, decentralized solutions. In prior studies of the cryptographic primitives used in blockchain, the focus has mostly been on blockchain use cases in different industry sectors, for example health care, IoT, information security, consensus-building systems, and other fields. To the best of our knowledge, cryptographic techniques that are currently used in blockchain, or that are still theoretical but whose security can be proven under certain assumptions and could therefore be adopted in blockchain, have gathered considerable attention in the last five years. In this paper, we review and analyze cryptographic techniques, already used in blockchain, for designing a distributed consensus protocol that is efficient, decentralized, and flexible as a framework. Deploying a permissioned consensus such as delegated proof-of-stake (DPoS) in decentralized IoT applications is hard, because IoT systems need to handle larger data sizes and wider deployment spans; with this in mind, we point out the particular challenges that blockchain-based IoT applications face. Several modern cryptographic techniques have been adopted to enhance the consensus process with respect to computational overhead, communication overhead, and storage cost. Additionally, we re-examine non-interactive signature and public-key aggregation for digital signatures over different messages and present a change to the aggregation scheme; as far as we know, this is the first time that pairing-based signature verification becomes faster in this setting, which also enables more transactions per block.
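For orientation, a baseline BLS-style non-interactive aggregation over n distinct messages is written out below; the paper's modified aggregation scheme and its short-lived-key handling are not reproduced in the abstract, so this is only the standard starting point it builds on.

```latex
% Baseline BLS-style aggregation over n distinct messages (standard scheme,
% shown for orientation; not the paper's modified construction).
\[
  \sigma_i = H(m_i)^{x_i}, \qquad pk_i = g_2^{\,x_i}, \qquad
  \sigma_{\mathrm{agg}} = \prod_{i=1}^{n} \sigma_i
\]
\[
  \text{accept iff}\quad
  e(\sigma_{\mathrm{agg}},\, g_2) \;=\; \prod_{i=1}^{n} e\bigl(H(m_i),\, pk_i\bigr)
\]
```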
{"title":"Compact aggregate short-lived signatures for consortium consensus protocols","authors":"Sherif M. Samir, H. K. Mohamed, Hazem Said","doi":"10.1109/ICCES48960.2019.9068157","DOIUrl":"https://doi.org/10.1109/ICCES48960.2019.9068157","url":null,"abstract":"Cryptography and cryptographic primitives are widely considered to be the most important fundamental of blockchain that provide secure and anonymity guaranteed decentralized solutions. In the history of papers study cryptographic primitives used in blockchain, the focus has always been in the use cases of blockchain in different aspects of industry, example health care, IoT, information security, consensus building systems and some other fields. To the best of our knowledge, current cryptography techniques used in blockchain, or still theoretical but the security proof can be proven under certain security assumption, so can be used in blockchain has gathered numerous awareness in the last five years. In this paper, we fully review and analysis some cryptographic techniques used in designing a distributed consensus protocol that is efficient, decentralized, and flexible as a framework which are already used in blockchain. Take in consideration deploying a permissioned consensus like delegated proof-of-stack (DPoS) in the decentralized IOT applications is hard. The IOT systems needs to consider a larger data size and a larger span deployment. With this in mind we point to the special challenge for the IOT applications related to blockchain. Several modern cryptography techniques have been adopted to enhance the consensus process with respect to the computational overhead vs communication overhead and storage cost. Additionally, we re-examine non-interactive signature and public key aggregation in digital signature on different messages types and present a change in the aggregation scheme, as far as we know this is the first time that pairing become more faster in verify signatures, and to enable more transaction in the block.","PeriodicalId":136643,"journal":{"name":"2019 14th International Conference on Computer Engineering and Systems (ICCES)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121521770","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Heterogeneous Redundancy for PCB Track Failures: An Automotive Example
Pub Date: 2019-12-01 | DOI: 10.1109/ICCES48960.2019.9068123
M. Labib, Dina G. Mahmoud, G. Alkady, I. Adly, H. Amer, R. Daoud, H. Elsayed
Reliability is a very important aspect of the electronic components used in the automotive industry. Due to the harsh environments in this industry, Printed Circuit Board (PCB) tracks are a very common cause of failure. This paper proposes a scheme that increases the reliability of chip-to-chip interconnections. The scheme relies on the availability of unused microcontroller I/O ports. It is shown how to implement a 1-out-of-2 (or even 1-out-of-3) fault-tolerant communication channel. To illustrate the advantage of the proposed scheme, the communication channel reliability is calculated with and without fault tolerance, and a use case indicates that the increase in reliability can be significant.
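The reliability comparison can be sketched with the textbook parallel-redundancy formula, assuming independent track failures with exponential lifetimes; the failure rate and mission time below are illustrative, not the values from the paper's use case.

```python
# Sketch: reliability of a 1-out-of-N chip-to-chip channel built from N
# redundant PCB tracks, assuming independent failures and a per-track
# reliability r(t) = exp(-lambda * t). Numbers are illustrative.
import math

def channel_reliability(n, failure_rate, t):
    r = math.exp(-failure_rate * t)   # single-track reliability at time t
    return 1.0 - (1.0 - r) ** n       # channel works if at least one track survives

lam = 1e-6    # assumed failures per hour
t = 10_000    # assumed mission time in hours
print(channel_reliability(1, lam, t))   # simplex track
print(channel_reliability(2, lam, t))   # 1-out-of-2
print(channel_reliability(3, lam, t))   # 1-out-of-3
```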
{"title":"Heterogeneous Redundancy for PCB Track Failures: An Automotive Example","authors":"M. Labib, Dina G. Mahmoud, G. Alkady, I. Adly, H. Amer, R. Daoud, H. Elsayed","doi":"10.1109/ICCES48960.2019.9068123","DOIUrl":"https://doi.org/10.1109/ICCES48960.2019.9068123","url":null,"abstract":"Reliability is a very important aspect in electronic components used in the Automotive industry. Due to the harsh environments in this industry, Printed Circuit Board (PCB) tracks are a very common cause of failure. This paper proposes a scheme that will increase the reliability of chip-to-chip interconnections. The scheme relies on the availability of unused microcontroller IO ports. It is shown how to implement a 1-out-of-2 (or even 1-out-of-3) fault-tolerant communication channel. To illustrate the advantage of the proposed scheme, communication channel reliability is calculated with and without fault tolerance and a use case indicates that the increase in reliability can be significant.","PeriodicalId":136643,"journal":{"name":"2019 14th International Conference on Computer Engineering and Systems (ICCES)","volume":"1838 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125169484","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Proposed Framework for Improving Analysis of Big Unstructured Data in Social Media
Pub Date: 2019-12-01 | DOI: 10.1109/ICCES48960.2019.9068154
Mohamed Elsayed, A. Abdelwahab, Hatem Ahdelkader
With the rapid development of Big Data and the necessity of analyzing its huge volumes, the issue of unstructured data analysis in social media has emerged. The data analysis process is very important in all fields, since it enables decisions to be made at the right time and based on established facts. The usage of social media has become the latest trend in today's world, in which users send and read posts known as 'messages' and communicate with various groups. Users share their daily lives and post their views on everything, such as products and locations. This data is extremely unstructured, which makes it hard to analyze. Machine learning offers important data preparation techniques for processing large-scale data to extract knowledge, e.g., by classifying it. Extracting useful information from social media data is essential to success in the big data age. Therefore, fresh strategies are needed for handling huge quantities of unstructured data, finding the hidden information within it, and achieving better data analysis outcomes. In this paper, the proposed framework recommends the construction of a machine-learning model capable of analyzing unstructured text data with higher accuracy than other machine learning algorithms.
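As a concrete illustration, the sketch below builds one plausible instantiation of such a model, a TF-IDF plus linear-classifier pipeline in scikit-learn; the abstract does not name the learning algorithm, so this choice and the helper name build_text_classifier are assumptions.

```python
# Sketch: a simple text-classification pipeline for unstructured social-media
# posts. The algorithm choice (logistic regression over TF-IDF features) is an
# assumption, not the paper's stated model.
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def build_text_classifier():
    return Pipeline([
        ("tfidf", TfidfVectorizer(lowercase=True, stop_words="english")),
        ("clf", LogisticRegression(max_iter=1000)),
    ])

# Usage on hypothetical labelled posts:
# model = build_text_classifier()
# model.fit(train_posts, train_labels)
# predictions = model.predict(test_posts)
```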
{"title":"A Proposed Framework for Improving Analysis of Big Unstructured Data in Social Media","authors":"Mohamed Elsayed, A. Abdelwahab, Hatem Ahdelkader","doi":"10.1109/ICCES48960.2019.9068154","DOIUrl":"https://doi.org/10.1109/ICCES48960.2019.9068154","url":null,"abstract":"With the rapid development of Big Data and the necessity for analyzing their huge volumes, the issue of Unstructured Data analysis in social media was appeared. The Data analysis process is very important in all fields as to make decisions at the right time and over certain facts. The usage of social media has become the latest trend in today's world in which users send, read posts known as ‘message’ and communicate with various groups. Users are sharing their regular life, posting their views on everything like products and locations. This data is extremely unstructured, making it hard to analyze. Machine learning technology offers important data preparation techniques for processing large-scale data to extract knowledge, e.g., classifying data. Extract useful information from social media data is essential to success in the big data age. Therefore, fresh strategies are needed for handling huge quantities of unstructured data and finding the hidden information in these data and achieving better data analysis outcomes, In this paper, the proposed framework recommends the construction of a machine-learning model capable of analyzing unstructured text data with highly accuracy compared to other machine learning algorithms.","PeriodicalId":136643,"journal":{"name":"2019 14th International Conference on Computer Engineering and Systems (ICCES)","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122700067","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Distributed Electronic Health Records Semantic Interoperability Based on a Fuzzy Ontology Architecture
Pub Date: 2019-12-01 | DOI: 10.1109/ICCES48960.2019.9068117
Ebtsam Adel, S. Barakat, Mohammed M Elmogy
Healthcare is one of the main domains where sharing information is an essential requirement. Medical information systems store clinical data in many different kinds of formats. Consequently, there is an urgent need to address the semantic interoperability problem. This paper proposes a fuzzy-ontology framework that can integrate most existing EHR data models. In the proposed framework, each input source is first transformed into an ontology representation; these data sources may be relational databases, XML documents, Excel spreadsheets, CSV files, or EHR standards. Second, all of the resulting ontologies are merged into a single ontology. The DL-Query Protégé plug-in is used for issuing specific queries, and all results are illustrated with screenshots. We expect our framework to be a step towards solving the semantic interoperability problem without loss of data or semantics.
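A minimal sketch of the per-source lifting step is shown below, using rdflib as a stand-in for the OWL/Protégé tooling the paper uses; the class and property names (Patient, hasDiagnosis, hasBloodPressure) are hypothetical.

```python
# Sketch: lifting one relational EHR row into an ontology-style graph, the kind
# of per-source conversion performed before merging. rdflib is a stand-in here;
# all class/property names are hypothetical.
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/ehr#")

def row_to_ontology(row):
    g = Graph()
    patient = EX[f"patient_{row['id']}"]
    g.add((patient, RDF.type, EX.Patient))
    g.add((patient, EX.hasDiagnosis, Literal(row["diagnosis"])))
    g.add((patient, EX.hasBloodPressure, Literal(row["blood_pressure"])))
    return g

g = row_to_ontology({"id": 17, "diagnosis": "hypertension", "blood_pressure": "140/90"})
print(g.serialize(format="turtle"))
```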
{"title":"Distributed Electronic Health Records Semantic Interoperability Based on a Fuzzy Ontology Architecture","authors":"Ebtsam Adel, S. Barakat, Mohammed M Elmogy","doi":"10.1109/ICCES48960.2019.9068117","DOIUrl":"https://doi.org/10.1109/ICCES48960.2019.9068117","url":null,"abstract":"Realthcare is one of the main domains where sharing information is an essential requirement. Medical information systems store all the clinical data in many different kinds of formats. Subsequently, there is an urgent requirement to address the semantic interoperability problem. This paper proposes a fuzzy-ontology framework that could integrate most existing EHR different data models. In the proposed framework, each input source is represented into an ontology representation. Those data sources may be relational databases, XML documents, Excel spreadsheets, CSV files, or EHRs standards. Second, all those output ontologies are merged and combined in only one ontology. DL-Query Protégé plug-in is used for providing specific queries. All the results are explained with the help of screenshots. We expect our framework to be a step towards solving the specific problem without losses of data and semantics.","PeriodicalId":136643,"journal":{"name":"2019 14th International Conference on Computer Engineering and Systems (ICCES)","volume":"122 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131372121","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Taxonomy of Discretization Techniques based on Class Labels and Attributes' Relationship
Pub Date: 2019-12-01 | DOI: 10.1109/ICCES48960.2019.9068185
Hanan Elhilbawi, S. Eldawlatly, Hani M. K. Mahdi
Discretizing continuous attributes is an essential data preprocessing step in data mining, since various data mining techniques are designed to be applied to discrete attributes. There have been tremendous efforts to propose discretization techniques with different characteristics. However, a clear pathway that can guide the choice of discretization technique for different types of datasets is lacking. This paper proposes a taxonomy based on the existence of class information and the relationship between attributes in the analyzed dataset. We review different discretization techniques classified according to the proposed taxonomy. The proposed taxonomy emphasizes the advantages and disadvantages of each discretization technique so that a suitable technique for a particular dataset can, in principle, be identified.
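To ground the class-information axis of such a taxonomy, the sketch below applies the two simplest unsupervised techniques, equal-width and equal-frequency binning, to one continuous attribute; supervised, class-aware methods (e.g., entropy-based splitting) form the contrasting branch and are not shown.

```python
# Sketch: unsupervised discretization of one continuous attribute.
# Equal-width binning splits the range into intervals of equal length;
# equal-frequency binning puts (roughly) the same number of values in each bin.
import pandas as pd

values = pd.Series([2.1, 3.5, 3.9, 7.2, 8.8, 9.0, 12.4, 15.0])

equal_width = pd.cut(values, bins=4)    # equal-length intervals
equal_freq = pd.qcut(values, q=4)       # equal-count intervals

print(equal_width.value_counts().sort_index())
print(equal_freq.value_counts().sort_index())
```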
{"title":"A Taxonomy of Discretization Techniques based on Class Labels and Attributes' Relationship","authors":"Hanan Elhilbawi, S. Eldawlatly, Hani M. K. Mahdi","doi":"10.1109/ICCES48960.2019.9068185","DOIUrl":"https://doi.org/10.1109/ICCES48960.2019.9068185","url":null,"abstract":"Discretizing continuous attributes is one essential and important data preprocessing step in data mining. Various data mining techniques are designed to be applied to discrete attributes. There have been tremendous efforts to propose discretization techniques with different characteristics. However, a clear pathway that can guide the choice of the needed discretization technique for different types of datasets is lacking. This paper proposes a taxonomy based on the existence of class information and relationship between attributes in the analyzed dataset. We review different discretization techniques classified according to the proposed taxonomy. The proposed taxonomy emphasizes the advantages and disadvantages of each discretization technique to be able theoretically to find a suitable discretization technique for a particular dataset.","PeriodicalId":136643,"journal":{"name":"2019 14th International Conference on Computer Engineering and Systems (ICCES)","volume":"122 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115830584","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Microstrip Patch Antenna Linear Arrays for Brain Tumor Detection
Pub Date: 2019-12-01 | DOI: 10.1109/ICCES48960.2019.9068180
A. Elkorany, Rehab M. Helmy, A. Saleeb, N. Areed
An EBG structure is designed at the ground plane of a pentagonal microstrip patch antenna for detecting brain tumors. Two circular EBG types have been introduced. The first type is a rectangular lattice of holes, which produced a 19% increase in S11 at the same resonance frequency of 3.9 GHz with and without a tumor. The second is a square lattice of holes, which provided a 27% increase in S11; it also provides a 2.9% shift in the resonant frequency at −10 dB on a head phantom with a brain tumor compared to one without. The electric field, magnetic field, and current density are calculated for each type of EBG, and a remarkable difference is observed between the with- and without-tumor cases, especially for the square lattice. One-, two-, and four-element linear antenna arrays are designed to be placed at a 10-mm distance from the head phantom; the purpose of the arrays is to provide sufficient energy to penetrate human tissue. The directivity increased to 6.65 dB, 8.5 dB, and 12 dB for the one-, two-, and four-element arrays, respectively. S11 is calculated for each antenna on a head phantom with and without a tumor, and the S11 values increase by 1.05 dB, 2.73 dB, and 4 dB for the three cases, respectively. Finally, the E and H fields, the current density, and the specific absorption rate (SAR) are also calculated.
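For reference, the quantity reported throughout is the input reflection coefficient S11; a standard definition (not specific to this paper) is:

```latex
% S11 is the input reflection coefficient, usually quoted in dB;
% Z_in is the antenna input impedance and Z_0 the reference (feed) impedance.
\[
  \Gamma = \frac{Z_{\mathrm{in}} - Z_0}{Z_{\mathrm{in}} + Z_0},
  \qquad
  S_{11}\,[\mathrm{dB}] = 20 \log_{10} \lvert \Gamma \rvert
\]
```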
{"title":"Microstrip Patch Antenna Linear Arrays for Brain Tumor Detection","authors":"A. Elkorany, Rehab M. Helmy, A. Saleeb, N. Areed","doi":"10.1109/ICCES48960.2019.9068180","DOIUrl":"https://doi.org/10.1109/ICCES48960.2019.9068180","url":null,"abstract":"An EBG is designed at the ground plane of a pentagon microstrip patch antenna for detecting brain tumors. Two circular EBG types have been introduced. The first type is a rectangular lattice of holes which produced an increase in S11 by 19% at the same resonance frequency which is 3.9 GHz with and without tumor. The second one is a squared lattice of holes that presented an increase of 27 % in S11. It also provides a 2.9% shift in the resonant frequency at −10 dB on a head phantom with a brain tumor compared to without a tumor. The electric field, magnetic field, and current density are calculated in each type of EBG. A remarkable difference has been observed between with and without tumor especially on the squared lattice. One-, two-and four- elements linear antenna arrays are designed to be put at a 10-mm distance from the head phantom. The purpose of antenna arrays is to provide sufficient energy to penetrate human tissues. The directivity was increased as 6.65 dB, 8.5 dB, and 12 dB in one element, two elements, and four elements respectively. The S11 is calculated for each antenna on a head phantom with and without tumor. The S11 values are increased by 1.05dB, 2.73dB, and 4dB for the three cases respectively. Finally, the E and H fields, current density and specific absorption rate SAR are also calculated.","PeriodicalId":136643,"journal":{"name":"2019 14th International Conference on Computer Engineering and Systems (ICCES)","volume":"249 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115005531","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fault Detection and Isolation Indices for Large-Scale Systems
Pub Date: 2019-12-01 | DOI: 10.1109/ICCES48960.2019.9068161
Lamiaa M. Elshenawy
Multivariate statistical process monitoring techniques have been developed to detect and isolate abnormal situations in modern industrial processes, which have become more complicated and are classified as large-scale systems. Several fault detection and isolation indices have been proposed for multivariate statistical process monitoring. This paper discusses these indices and compares their performance on an industrial benchmark, the Tennessee Eastman chemical process. The efficiency of these indices is measured by four key performance indicators (KPIs): fault detection time delay, false alarm rate, missed detection rate, and correct fault isolation.
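The two rate-based KPIs are commonly defined as below, where J is the monitored detection index and J_th its control limit; the exact conventions used in the paper may differ slightly.

```latex
% Common definitions of false alarm rate (FAR) and missed detection rate (MDR)
% for a detection index J with threshold J_th.
\[
  \mathrm{FAR} = \frac{\#\{\text{fault-free samples with } J > J_{th}\}}
                      {\#\{\text{fault-free samples}\}},
  \qquad
  \mathrm{MDR} = \frac{\#\{\text{faulty samples with } J \le J_{th}\}}
                      {\#\{\text{faulty samples}\}}
\]
```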
{"title":"Fault Detection and Isolation Indices for Large-Scale Systems","authors":"Lamiaa M. Elshenawy","doi":"10.1109/ICCES48960.2019.9068161","DOIUrl":"https://doi.org/10.1109/ICCES48960.2019.9068161","url":null,"abstract":"Multivariate statistical process monitoring techniques have been developed to detect and isolate abnormal situations of modern industrial processes that became more complicated and are classified as large-scale systems. Several fault detection and isolation indices have been proposed for multivariate statistical process monitoring. This paper discusses these indices and compare their performances by applying for an industrial benchmark, the Tennessee Eastman chemical process. The efficiency of these indices is measured by four key performance indicators (KPIs), i.e., fault detection time delay, false alarm rate, missed detection rate, correct fault isolation.","PeriodicalId":136643,"journal":{"name":"2019 14th International Conference on Computer Engineering and Systems (ICCES)","volume":"82 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123220196","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automatic Recognition of Fish Diseases in Fish Farms
Pub Date: 2019-12-01 | DOI: 10.1109/ICCES48960.2019.9068141
A. Waleed, H. Medhat, Mariam Esmail, Kareem Osama, Radwa Samy, Taraggy M. Ghanim
Fish diseases are the major cause of increased mortality in fish farms. Automatic identification of diseased fish at early stages is a necessary step to prevent the disease from spreading. Fish disease diagnosis suffers from limitations that require a high level of expertise to resolve. Recognition of abnormal fish behavior helps in early prediction of fish diseases; fish behavior is evaluated by analyzing fish trajectories in videos, and abnormalities may be due to environmental changes. This paper introduces a survey of what computer vision techniques offer in this field, including a comprehensive comparison between different automatic recognition systems. Finally, our approach is proposed to automatically recognize and identify three different types of fish diseases: Epizootic ulcerative syndrome (EUS), Ichthyophthirius (Ich), and Columnaris. Our approach shows the effect of different color spaces on the final performance of the Convolutional Neural Network (CNN).
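A minimal sketch of a CNN classifier of the kind described is given below in PyTorch; the three output classes follow the abstract, while the 64x64 input size and layer widths are illustrative assumptions (a color-space conversion, e.g. BGR to HSV, would be applied to the input images before training to study its effect on accuracy).

```python
# Sketch: a small CNN for three-class fish disease recognition (EUS, Ich,
# Columnaris). Architecture sizes are illustrative, not the paper's network.
import torch
import torch.nn as nn

class FishDiseaseCNN(nn.Module):
    def __init__(self, num_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)  # for 64x64 inputs

    def forward(self, x):                  # x: (batch, 3, 64, 64), any color space
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))

model = FishDiseaseCNN()
print(model(torch.randn(4, 3, 64, 64)).shape)   # -> torch.Size([4, 3])
```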
{"title":"Automatic Recognition of Fish Diseases in Fish Farms","authors":"A. Waleed, H. Medhat, Mariam Esmail, Kareem Osama, Radwa Samy, Taraggy M. Ghanim","doi":"10.1109/ICCES48960.2019.9068141","DOIUrl":"https://doi.org/10.1109/ICCES48960.2019.9068141","url":null,"abstract":"Fish diseases are the major cause for increasing mortality in fish farms. Automatic identification of diseased fish at early stages is necessary step to prevent spreading disease. Fish disease diagnosis suffers from some limitations that need high level of expertise to be resolved. Recognition of fish abnormal behaviors helps in early prediction of fish diseases. Fish behavior is evaluated by analyzing fish trajectories in videos. Abnormalities may be due to environmental changes. This paper introduces a survey on what computer vision techniques propose in that field. A comprehensive comparison between different automatic recognition systems is included. Finally, our approach is proposed to automatically recognize and identify three different types of fish diseases. These diseases are Epizootic ulcerative syndrome (EUS), Ichthyophthirius (Ich) and Columnaris. Our approach shows the effect of different color spaces on the Convolutional Neural Networkk CNN final performance.","PeriodicalId":136643,"journal":{"name":"2019 14th International Conference on Computer Engineering and Systems (ICCES)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125173125","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}