Discrete Group Search Optimizer for community detection in multidimensional social network
M. Ahmed, Mohamed M. Elwakil, A. Hassanien, Ehab E. Hassanien
Pub Date: 2016-12-01. DOI: 10.1109/ICENCO.2016.7856444
2016 12th International Computer Engineering Conference (ICENCO)

Multidimensionality is a distinctive aspect of real-world social networks. Multidimensional social networks arise because most social media sites, such as Facebook, Twitter, and YouTube, enable people to interact with each other through different social activities, reflecting different kinds of relationships between them. Recently, studying the community structures hidden in multidimensional social networks has attracted a lot of attention. In these networks, the community detection problem becomes the discovery of the shared group structure across all network dimensions, such that members of the same group are tightly connected with each other but loosely connected with those outside the group. Studies of community detection have traditionally focused on networks that represent a single type of interaction or relationship between network entities. In this paper, we propose a Discrete Group Search Optimizer (DGSO-MDNet) to solve the community detection problem in multidimensional social networks without any prior knowledge of the number of communities. The method aims to find the community structure that maximizes multi-slice modularity as an objective function. The proposed DGSO-MDNet algorithm adopts the locus-based adjacency representation and several discrete operators. Experiments on synthetic and real-life networks show the capability of the proposed algorithm to successfully detect the structure hidden within these networks, compared with other high-performance algorithms in the literature.
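The locus-based adjacency representation named in the abstract can be made concrete with a small sketch (an illustrative reconstruction, not the authors' code): each position of a candidate solution holds one neighbor of that node, and communities are recovered as the connected components of those links.

```python
# Decoding a locus-based adjacency genotype into communities.
# Position i of the genotype holds one neighbor g[i], meaning
# "node i is linked to node g[i]"; communities are the connected
# components of these links. Data here is illustrative.

def decode_communities(genotype):
    n = len(genotype)
    parent = list(range(n))

    def find(x):                      # union-find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for i, j in enumerate(genotype):
        parent[find(i)] = find(j)     # union node i with its gene j

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

# Example: nodes 0-2 link among themselves, nodes 3-5 likewise,
# so this genotype decodes into two communities.
print(decode_communities([1, 2, 0, 4, 5, 3]))
```

A discrete optimizer such as DGSO-MDNet mutates genotypes of this form and scores each decoded partition with multi-slice modularity.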
A hybrid scheme for Automated Essay Grading based on LVQ and NLP techniques
A. Shehab, M. Elhoseny, A. Hassanien
Pub Date: 2016-12-01. DOI: 10.1109/ICENCO.2016.7856447
2016 12th International Computer Engineering Conference (ICENCO)

This paper presents a hybrid approach to an Automated Essay Grading System (AEGS) that provides automated grading and evaluation of student essays. The proposed system has two complementary components: writing-features analysis tools, which rely on natural language processing (NLP) techniques, and a neural network grading engine, which relies on a set of pre-graded essays to judge a student answer and assign a grade. In this way, students' essays can be evaluated with feedback that improves their writing skills. The proposed system is evaluated on datasets of essays written by students of the College of Computer and Information Sciences at Mansoura University as part of mid-term exams in the Introduction to Information Systems and Systems Analysis and Design courses. The obtained results show agreement with teachers' grades of between 70% and nearly 90%. This indicates that the proposed system might be useful as a tool for the automatic assessment of students' essays, leading to a considerable reduction in essay grading costs.
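The LVQ in the title refers to Learning Vector Quantization. A minimal LVQ1 sketch (the feature vectors and grade labels below are illustrative stand-ins, not the paper's data or features) shows the core idea: prototype vectors are pulled toward samples of their own class and pushed away from samples of other classes.

```python
# Minimal LVQ1 sketch: the winning prototype moves toward a sample of
# its own class and away from a sample of another class.
# Features and labels here are illustrative, not the paper's data.

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def lvq1_train(samples, labels, prototypes, proto_labels,
               lr=0.1, epochs=20):
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            # find the nearest prototype (the winner)
            k = min(range(len(prototypes)),
                    key=lambda i: dist2(prototypes[i], x))
            sign = 1.0 if proto_labels[k] == y else -1.0
            prototypes[k] = [p + sign * lr * (xi - p)
                             for p, xi in zip(prototypes[k], x)]
    return prototypes

def lvq1_predict(x, prototypes, proto_labels):
    k = min(range(len(prototypes)), key=lambda i: dist2(prototypes[i], x))
    return proto_labels[k]

# Toy "essay feature" vectors, e.g. (normalized length, vocabulary score)
samples = [[0.1, 0.2], [0.2, 0.1], [0.8, 0.9], [0.9, 0.8]]
labels = ["low", "low", "high", "high"]
protos = lvq1_train(samples, labels, [[0.3, 0.3], [0.7, 0.7]],
                    ["low", "high"])
print(lvq1_predict([0.15, 0.15], protos, ["low", "high"]))
```

In an AEGS of this kind, the trained prototypes would stand in for the pre-graded essays against which a new answer is scored.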
Portable low-cost platform for embedded speech analysis and synthesis
F. Raffaeli, S. Awad
Pub Date: 2016-12-01. DOI: 10.1109/ICENCO.2016.7856455
2016 12th International Computer Engineering Conference (ICENCO)

This paper describes a system for speech analysis and synthesis. It may be used with a PC, or headless with basic audio connections. The purpose is to provide an open, low-cost graphical platform for learning and for embedded applications such as speech recognition, security, compression, communication, and speech user interfaces. The platform is illustrated using readily available, inexpensive hardware and software.
Pre-encrypted user data for secure passive UHF RFID communication
K. ElMahgoub
Pub Date: 2016-12-01. DOI: 10.1109/ICENCO.2016.7856440
2016 12th International Computer Engineering Conference (ICENCO)

A pre-encryption algorithm for passive ultra-high frequency (UHF) radio frequency identification (RFID) systems is described. The algorithm uses the Advanced Encryption Standard (AES) as the core encryption technique with two extra steps, random key generation and data randomization, which increase the immunity of the encryption process against attacks. The algorithm is implemented in the C programming language and is used to encrypt and decrypt the user data of a passive UHF RFID tag. The algorithm is simple, easy to implement, and requires no hardware changes on the reader or tag side. Moreover, it is difficult to break due to its multiple steps and randomness. The algorithm ensures secure communication for the passive UHF RFID system.
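The two extra steps around the core cipher can be sketched as follows. This is a hedged illustration only: the core cipher here is a SHA-256 keystream XOR stand-in so the sketch runs on the standard library alone (the paper uses AES for this step), the data is made up, and key distribution is out of scope.

```python
# Sketch of the two extra steps: a fresh random key per message
# (step 1) and a key-seeded byte permutation of the data (step 2)
# before the core cipher is applied. The keystream XOR below is a
# stand-in for AES; in the paper's scheme AES performs this step.
import hashlib
import random
import secrets

def keystream_xor(data, key):          # stand-in for the AES core
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

def randomize(data, key, reverse=False):
    idx = list(range(len(data)))
    random.Random(key).shuffle(idx)    # key-seeded permutation
    out = bytearray(len(data))
    for src, dst in enumerate(idx):
        if reverse:
            out[src] = data[dst]
        else:
            out[dst] = data[src]
    return bytes(out)

def encrypt(user_data):
    key = secrets.token_bytes(16)          # step 1: random key generation
    shuffled = randomize(user_data, key)   # step 2: data randomization
    return key, keystream_xor(shuffled, key)

def decrypt(key, ciphertext):
    return randomize(keystream_xor(ciphertext, key), key, reverse=True)

key, ct = encrypt(b"EPC user memory payload")
assert decrypt(key, ct) == b"EPC user memory payload"
```

Because the permutation is derived from the per-message key, an attacker must recover both the key and the byte ordering, which is the source of the added immunity the abstract describes.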
HDFSX: Big data Distributed File System with small files support
Passent M. ElKafrawy, Amr M. Sauber, Mohamed M. Hafez
Pub Date: 2016-12-01. DOI: 10.1109/ICENCO.2016.7856457
2016 12th International Computer Engineering Conference (ICENCO)

The Hadoop Distributed File System (HDFS) is a file system designed to handle large files (gigabytes or terabytes in size) with streaming data access patterns, running on clusters of commodity hardware. However, big data may also exist as a huge number of small files: in biology and astronomy, for example, some applications generate 30 million files with an average size of 190 KB. Unfortunately, HDFS cannot handle this kind of fragmented big data well, because the single Namenode becomes a bottleneck when handling a large number of small files. In this paper, we present a new structure for HDFS (HDFSX) that avoids the high memory usage, network flooding, request overhead, and centralized single point of failure (SPOF) of the single Namenode.
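A back-of-envelope calculation shows why the small-files workload above overwhelms a single Namenode. The ~150 bytes of heap per metadata object is the figure commonly cited for HDFS and is an approximation, not a number from this paper.

```python
# Rough Namenode heap estimate for the small-files problem.
# Every file and every block is an object in Namenode memory;
# ~150 bytes per object is the commonly cited HDFS approximation.
BYTES_PER_OBJECT = 150

def namenode_heap_bytes(n_files, blocks_per_file=1):
    # one inode object per file + one object per block
    return n_files * (1 + blocks_per_file) * BYTES_PER_OBJECT

small = namenode_heap_bytes(30_000_000)   # 30M tiny files, 1 block each
merged = namenode_heap_bytes(45_000)      # same data merged: 30M files
                                          # x 190 KB ~= 5.7 TB, i.e.
                                          # ~45,000 blocks of 128 MB
print(f"small files : {small / 1e9:.1f} GB of Namenode heap")
print(f"merged      : {merged / 1e6:.1f} MB of Namenode heap")
```

Holding the same 5.7 TB of data, the metadata footprint drops from gigabytes to megabytes once small files are consolidated, which is the motivation for small-files support in HDFSX.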
Requirements' elicitation needs for eLearning Systems
N. Rizk, M. Gheith, Eman S. Nasr
Pub Date: 2016-12-01. DOI: 10.1109/ICENCO.2016.7856459
2016 12th International Computer Engineering Conference (ICENCO)

Electronic Learning, more popularly known as eLearning, is generally defined as the use of technology in the delivery of education or training. eLearning Systems (eLS) are now integral parts of educational organizations. eLS are diverse in nature and size; nowadays they are also integral parts of some commercial and governmental organizations, as they are a cost-effective means of delivering training to employees. With the diversity of people using eLS, there is a need for continuous improvement, and software development teams need to better understand stakeholders' requirements for faster delivery, enhancement, or personalization of eLS. Requirements elicitation is the activity within requirements engineering concerned with discovering the needs of stakeholders, whether for software development from scratch or for evolution. In this paper we identify the special properties that distinguish eLS from other software systems, to help with better understanding of the domain; discuss the particular requirements elicitation challenges these properties introduce; and survey the main current requirements elicitation approaches used in the domain. Our research so far has revealed that very few approaches are tailored specifically to this domain. Hence, we propose the use of crowdsourcing, i.e., exploiting the power of the crowd to perform tasks, as a new approach for eliciting the requirements of eLS. The paper presents a framework of the elements needed under the umbrella of this new approach to fill the identified research gap in the domain.
Improvements to Android-based real-time treatment of speech-language pathologies
Aaron Ward, S. Awad, R. Merson, M. Rolnick
Pub Date: 2016-12-01. DOI: 10.1109/ICENCO.2016.7856448
2016 12th International Computer Engineering Conference (ICENCO)

This paper presents improvements upon a previous paper [1] detailing an Android application that demonstrates methods for implementing or assisting traditional speech therapy techniques on mobile devices. After development, the software was deployed on a Galaxy Tab, an Android tablet that, at approximately $100 per unit, is quite affordable. The techniques are principally aimed at treating Parkinson's-disease-induced hypophonia and habit-induced hyperphonia, but are applicable to other speech or language disorders, particularly vocal projection issues. The paper presents practical obstacles encountered in developing the application and their solutions, feedback from medical testing, and practical improvements to the device setup itself. In particular, it focuses on improvements to the device's user interface, physical noise reduction on mobile devices, and the implementation of altered audio feedback on an Android device. It was found that using a single type of device with preset parameters for certain tasks was more effective than allowing medical technicians, who may be unfamiliar with the technology, to calibrate the software for different types of devices. Additionally, the use of throat microphones will be tested to reduce noise and enable more effective treatment.
Fault Detection and Identification of spacecraft reaction wheels using Autoregressive Moving Average model and neural networks
Ehab A. Omran, Wael A. Murtada
Pub Date: 2016-12-01. DOI: 10.1109/ICENCO.2016.7856449
2016 12th International Computer Engineering Conference (ICENCO)

The spacecraft Attitude Determination and Control System (ADCS) is considered one of the most critical subsystems of low-Earth-orbit satellites because of the pointing accuracy required during operation. Consequently, fast and reliable Fault Detection and Identification (FDI) techniques have gained increasing importance over years of research. This paper presents a procedure to improve the FDI of a spacecraft reaction wheel, as part of the ADCS, by differentiating the signatures of possible faults that can occur inside the reaction wheel, such as over-voltage, under-voltage, current loss, temperature increase, and hybrid faults. An Autoregressive Moving Average (ARMA) model is fitted to both normal and faulty data, generated from the behavior of a dynamic mathematical model of a 3-axis spacecraft reaction wheel, and a neural network classifier identifies the fault. The results demonstrate that fault detection and identification are successfully accomplished.
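The core of such residual-based FDI can be sketched in simplified form. This sketch fits only an AR(2) model (the AR part of ARMA) by least squares to a synthetic signal, and flags samples whose one-step prediction residual exceeds a threshold learned from normal data; the paper's full pipeline uses the complete ARMA model of a reaction-wheel dynamic model plus a neural classifier, none of which is reproduced here.

```python
# Simplified residual-based fault detection: fit an AR(2) model to
# normal data by least squares, then flag samples whose one-step
# prediction residual exceeds a threshold derived from the normal
# residuals. The signal below is synthetic, not reaction-wheel data.
import math
import random

def fit_ar2(x):
    # Solve the 2x2 normal equations for x[t] ~ a1*x[t-1] + a2*x[t-2]
    s11 = s12 = s22 = b1 = b2 = 0.0
    for t in range(2, len(x)):
        s11 += x[t-1] * x[t-1]; s12 += x[t-1] * x[t-2]
        s22 += x[t-2] * x[t-2]
        b1 += x[t] * x[t-1];    b2 += x[t] * x[t-2]
    det = s11 * s22 - s12 * s12
    return ((s22 * b1 - s12 * b2) / det, (s11 * b2 - s12 * b1) / det)

def residuals(x, a1, a2):
    return [x[t] - a1 * x[t-1] - a2 * x[t-2] for t in range(2, len(x))]

rng = random.Random(0)
normal = [math.sin(0.1 * t) + rng.gauss(0, 0.01) for t in range(500)]
a1, a2 = fit_ar2(normal)
thresh = 5 * max(abs(r) for r in residuals(normal, a1, a2))

faulty = list(normal)
for t in range(300, 320):              # inject a current-loss-like dropout
    faulty[t] = 0.0
alarms = [t for t, r in enumerate(residuals(faulty, a1, a2), start=2)
          if abs(r) > thresh]
print("first alarm near t =", alarms[0] if alarms else None)
```

Identification (which fault occurred) would then classify the residual signatures, the role played by the neural network in the paper.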
Least Significant Bit (LSB) and Random Right Circular Shift (RRCF) in digital watermarking
M. Kurdi, I. Elzein, A. Zeki
Pub Date: 2016-12-01. DOI: 10.1109/ICENCO.2016.7856454
2016 12th International Computer Engineering Conference (ICENCO)

Digital watermarking is an authentication technique for the distribution of content over the Internet. Such a method is highly desired due to the proliferation of high-capacity digital recording devices, which has fueled increased concern over copyright protection of content [1]. Digital watermarks are valuable mechanisms for protecting images, audio, video, and data, and they are also becoming an important tool in facilitating e-commerce. Any company that is serious about safely protecting and distributing its content and products should use digital watermarks. Digital watermarking and steganography constitute a field of great research and development interest for many authors. In this article, we use the Least Significant Bit (LSB) and Random Right Circular Shift (RRCF) techniques in digital watermarking. LSB is used because its minor distortion of the image is not noticeable to the human eye. We simulate our algorithm in MATLAB using LSB and RRCF.
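The LSB embedding idea can be shown in a few lines. This is a generic illustration in Python rather than the authors' MATLAB code, it omits the RRCF step, and the "image" is just a list of 8-bit values standing in for grayscale pixels.

```python
# Minimal LSB watermarking sketch: hide the bits of a message in the
# least significant bit of each pixel value. Distortion is at most one
# gray level per pixel, which is why it is imperceptible to the eye.

def embed_lsb(pixels, message):
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    assert len(bits) <= len(pixels), "image too small for message"
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit   # overwrite only the LSB
    return out

def extract_lsb(pixels, n_bytes):
    out = bytearray()
    for b in range(n_bytes):
        byte = 0
        for i in range(8):
            byte |= (pixels[8 * b + i] & 1) << i
        out.append(byte)
    return bytes(out)

cover = list(range(200, 56, -1))        # 144 fake 8-bit pixel values
marked = embed_lsb(cover, b"ICENCO16")
assert extract_lsb(marked, 8) == b"ICENCO16"
assert max(abs(a - b) for a, b in zip(cover, marked)) <= 1
```

A circular shift of the watermark bits before embedding, as in the RRCF step, would scramble which pixel carries which bit without changing this basic embed/extract structure.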
Microseismic location method using Ω penalty function based ant colony optimization
Linqi Huang, Xibing Li, Dong Liu
Pub Date: 2016-12-01. DOI: 10.1109/ICENCO.2016.7856470
2016 12th International Computer Engineering Conference (ICENCO)

The development of seismic monitoring brings hope for the prediction of rockbursts and other seismic hazards, where determining the potential hypocenter is the key point. Equations are formed from the measured sensor data, and the seismic location is obtained by solving them. The accuracy of the location is closely related to the solution method. In this paper, a microseismic location method using an Ω penalty function based on ant colony optimization is proposed. Experimental results show that the presented method predicts the microseismic source location more accurately.
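The "equations formed from the measured sensor data" are arrival-time relations t_i = t0 + dist(s_i, h) / v for each sensor s_i and candidate hypocenter h. The sketch below builds that residual objective from synthetic picks and minimizes it by brute-force grid search purely for illustration; the paper instead minimizes an Ω-penalized objective with ant colony optimization, which scales far better than a dense grid. Sensor layout, wave speed, and picks are all invented.

```python
# Hypocenter location from arrival times: build the residual objective
# implied by t_i = t0 + dist(s_i, h) / v and minimize it. Grid search
# stands in for the paper's ant colony optimizer; data is synthetic.
import math

V = 3000.0                              # assumed wave speed, m/s
sensors = [(0, 0), (1000, 0), (0, 1000), (1000, 1000)]
true_src, t0 = (400, 700), 0.05
picks = [t0 + math.dist(s, true_src) / V for s in sensors]

def misfit(x, y):
    # the unknown origin time is eliminated by fitting it per candidate
    tt = [math.dist(s, (x, y)) / V for s in sensors]
    t0_hat = sum(p - t for p, t in zip(picks, tt)) / len(picks)
    return sum((p - (t0_hat + t)) ** 2 for p, t in zip(picks, tt))

best = min(((x, y) for x in range(0, 1001, 10) for y in range(0, 1001, 10)),
           key=lambda xy: misfit(*xy))
print("estimated hypocenter:", best)    # true source is (400, 700)
```

With noisy picks, the misfit surface develops local minima, which is where a metaheuristic such as ant colony optimization, guided by a penalty function, earns its keep over local solvers.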