Towards 76-81 GHz Scalable Phase Shifting by Folded Dual-strip Shielded Coplanar Waveguide with Liquid Crystals
Pub Date: 2021-10-01 | DOI: 10.33166/aetic.2021.04.002
Jinfeng Li
Unconventional folded shielded coplanar waveguide (FS-CPW) has yet to be fully investigated for tunable-dielectrics-based applications. This work formulates FS-CPW designs based on liquid crystals (LC) for electrically controlled 0-360˚ phase shifters, featuring a minimally redundant approach for reducing the LC volume and hence the cost of mass production. The design exhibits a few conceptual features that set it apart from others, most notably a dual-strip structure with a simplified engraved enclosure that enables LC volume sharing between adjacent core lines. An insertion loss reduction of 0.77 dB and an LC volume reduction of 1.62% per device are reported at 77 GHz, compared with the conventional single-strip configuration. Based on the proof-of-concept results obtained for the proposed novel dual-strip FS-CPW, this work provides a springboard for follow-up investigations that will underpin the development of a phased-array demonstrator.
{"title":"Towards 76-81 GHz Scalable Phase Shifting by Folded Dual-strip Shielded Coplanar Waveguide with Liquid Crystals","authors":"Jinfeng Li","doi":"10.33166/aetic.2021.04.002","DOIUrl":"https://doi.org/10.33166/aetic.2021.04.002","url":null,"abstract":"Unconventional folded shielded coplanar waveguide (FS-CPW) has yet to be fully investigated for tunable dielectrics-based applications. This work formulates designs of FS-CPW based on liquid crystals (LC) for electrically controlled 0-360˚ phase shifters, featuring a minimally redundant approach for reducing the LC volume and hence the costs for mass production. The design exhibits a few conceptual features that make it stand apart from others, noteworthy, the dual-strip structure with a simplified enclosure engraved that enables LC volume sharing between adjacent core lines. Insertion loss reduction by 0.77 dB and LC volume reduction by 1.62% per device are reported at 77 GHz, as compared with those of the conventional single-strip configuration. Based on the proof-of-concept results obtained for the novel dual-strip FS-CPW proposed, this work provides a springboard for follow-up investible propositions that will underpin the development of a phased array demonstrator.","PeriodicalId":36440,"journal":{"name":"Annals of Emerging Technologies in Computing","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42471475","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Process Discovery Enhancement with Trace Clustering and Profiling
Pub Date: 2021-10-01 | DOI: 10.33166/aetic.2021.04.001
M. Faizan, M. Zuhairi, S. Ismail
The potential of process mining is growing steadily due to the increasing amount of event data. Process mining strategies use event logs to automatically discover process models, recommend improvements, predict processing times, check conformance, and recognize anomalies, deviations, and bottlenecks. However, proper handling of event logs when evaluating and using them as input is crucial to any process mining technique. When process mining techniques are applied to flexible systems with a large number of decisions to be taken at runtime, the outcome is often an unstructured or semi-structured process model that is hard to comprehend. Existing approaches are good at discovering and visualizing structured processes but often struggle with less structured ones. Surprisingly, process mining is most useful precisely in domains where flexibility is desired. A good illustration is the "patient treatment" process in a hospital, where the ability to deviate in order to deal with changing conditions is crucial and insight into actual operations is valuable. However, such processes exhibit a significant amount of diversity, which leads to complicated, difficult-to-understand models. Trace clustering is a method for decreasing the complexity of process models in this context while increasing their comprehensibility and accuracy. This paper discusses process mining and event logs, and presents a clustering approach that pre-processes an event log into homogeneous subsets. A process model is generated for each subset, and these subsets are evaluated independently of each other, which significantly improves the quality of mining results in flexible environments. The presented approach improves the fitness and precision of a discovered model while reducing its complexity, resulting in well-structured and easily understandable process discovery results.
{"title":"Process Discovery Enhancement with Trace Clustering and Profiling","authors":"M. Faizan, M. Zuhairi, S. Ismail","doi":"10.33166/aetic.2021.04.001","DOIUrl":"https://doi.org/10.33166/aetic.2021.04.001","url":null,"abstract":"The potential in process mining is progressively growing due to the increasing amount of event-data. Process mining strategies use event-logs to automatically classify process models, recommend improvements, predict processing times, check conformance, and recognize anomalies/deviations and bottlenecks. However, proper handling of event-logs while evaluating and using them as input is crucial to any process mining technique. When process mining techniques are applied to flexible systems with a large number of decisions to take at runtime, the outcome is often unstructured or semi-structured process models that are hard to comprehend. Existing approaches are good at discovering and visualizing structured processes but often struggle with less structured ones. Surprisingly, process mining is most useful in domains where flexibility is desired. A good illustration is the \"patient treatment\" process in a hospital, where the ability to deviate from dealing with changing conditions is crucial. It is useful to have insights into actual operations. However, there is a significant amount of diversity, which contributes to complicated, difficult-to-understand models. Trace clustering is a method for decreasing the complexity of process models in this context while also increasing their comprehensibility and accuracy. This paper discusses process mining, event-logs, and presenting a clustering approach to pre-process event-logs, i.e., a homogeneous subset of the event-log is created. A process model is generated for each subset. These homogeneous subsets are then evaluated independently from each other, which significantly improving the quality of mining results in flexible environments. The presented approach improves the fitness and precision of a discovered model while reducing its complexity, resulting in well-structured and easily understandable process discovery results.","PeriodicalId":36440,"journal":{"name":"Annals of Emerging Technologies in Computing","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42389461","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Intelligent License Plate Detection and Recognition Model Using Deep Neural Networks
Pub Date: 2021-10-01 | DOI: 10.33166/aetic.2021.04.003
J. A. Onesimu, Robin D Sebastian, Y. Sei, Lenny Christopher
India has one of the largest automotive sectors in the world, and the number of vehicles travelling by road has increased in recent times. In malls and other crowded places, many vehicles enter and exit the parking area, and it is difficult to manually note down the license plate numbers of all of them. Hence, it is necessary to develop an Automatic License Plate Detection and Recognition (ALPDR) model that recognizes vehicles' license plate numbers automatically. To automate this process, we propose a three-step pipeline that detects the license plate, segments the characters, and recognizes them. Detection is done by converting the input image to a bi-level image. Using region properties (regionprops), the characters are segmented from the detected license plate. A two-layer CNN model is developed to recognize the segmented characters. The proposed model automatically updates the details of cars entering and exiting the parking area to a database. The proposed ALPDR model has been tested under several conditions, such as blurred images, different distances from the camera, and day and night conditions, on stationary vehicles. Experimental results show that the proposed system achieves 91.1%, 96.7%, and 98.8% accuracy on license plate detection, segmentation, and recognition respectively, which is superior to state-of-the-art models in the literature.
{"title":"An Intelligent License Plate Detection and Recognition Model Using Deep Neural Networks","authors":"J. A. Onesimu, Robin D Sebastian, Y. Sei, Lenny Christopher","doi":"10.33166/aetic.2021.04.003","DOIUrl":"https://doi.org/10.33166/aetic.2021.04.003","url":null,"abstract":"One of the largest automotive sectors in the world is India. The number of vehicles traveling by road has increased in recent times. In malls or other crowded places, many vehicles enter and exit the parking area. Due to the increase in vehicles, it is difficult to manually note down the license plate number of all the vehicles passing in and out of the parking area. Hence, it is necessary to develop an Automatic License Plate Detection and Recognition (ALPDR) model that recognize the license plate number of vehicles automatically. To automate this process, we propose a three-step process that will detect the license plate, segment the characters and recognize the characters present in it. Detection is done by converting the input image to a bi-level image. Using region props the characters are segmented from the detected license plate. A two-layer CNN model is developed to recognize the segmented characters. The proposed model automatically updates the details of the car entering and exiting the parking area to the database. The proposed ALPDR model has been tested in several conditions such as blurred images, different distances from the cameras, day and night conditions on the stationary vehicles. Experimental result shows that the proposed system achieves 91.1%, 96.7%, and 98.8% accuracy on license plate detection, segmentation, and recognition respectively which is superior to state-of-the-art literature models.","PeriodicalId":36440,"journal":{"name":"Annals of Emerging Technologies in Computing","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41857513","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Novel Hybrid Signal Decomposition Technique for Transfer Learning Based Industrial Fault Diagnosis
Pub Date: 2021-10-01 | DOI: 10.33166/aetic.2021.04.004
Zurana Mehrin Ruhi, Sigma Jahan, J. Uddin
In the fourth industrial revolution, data-driven intelligent fault diagnosis plays a crucial role in industry. Although deep learning is currently a popular approach for fault diagnosis, it requires massive amounts of labelled training samples, which are hard to come by in the real world. Our contribution, a novel comprehensive intelligent fault detection model evaluated on the Case Western Reserve University dataset, is divided into two steps. First, a new hybrid signal decomposition methodology is developed, combining Empirical Mode Decomposition and Variational Mode Decomposition to leverage signal information from both processes for effective feature extraction. Second, transfer learning with DenseNet121 is employed to alleviate the data constraints of deep learning models. The proposed technique not only surpassed previous results but also achieved state-of-the-art performance as measured by the F1 score.
{"title":"A Novel Hybrid Signal Decomposition Technique for Transfer Learning Based Industrial Fault Diagnosis","authors":"Zurana Mehrin Ruhi, Sigma Jahan, J. Uddin","doi":"10.33166/aetic.2021.04.004","DOIUrl":"https://doi.org/10.33166/aetic.2021.04.004","url":null,"abstract":"In the fourth industrial revolution, data-driven intelligent fault diagnosis for industrial purposes serves a crucial role. In contemporary times, although deep learning is a popular approach for fault diagnosis, it requires massive amounts of labelled samples for training, which is arduous to come by in the real world. Our contribution to introduce a novel comprehensive intelligent fault detection model using the Case Western Reserve University dataset is divided into two steps. Firstly, a new hybrid signal decomposition methodology is developed comprising Empirical Mode Decomposition and Variational Mode Decomposition to leverage signal information from both processes for effective feature extraction. Secondly, transfer learning with DenseNet121 is employed to alleviate the constraints of deep learning models. Finally, our proposed novel technique surpassed not only previous outcomes but also generated state-of-the-art outcomes represented via the F1 score.","PeriodicalId":36440,"journal":{"name":"Annals of Emerging Technologies in Computing","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44663534","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Monte Carlo Computational Software and Methods in Radiation Dosimetry
Pub Date: 2021-07-01 | DOI: 10.33166/AETIC.2021.03.004
N. Chatzisavvas, G. Priniotakis, M. Papoutsidakis, D. Nikolopoulos, I. Valais, Georgios Karpetas
The fast developments and ongoing demands in radiation dosimetry have piqued the attention of many software developers and physicists, who have created powerful tools to make experiments more exact, less expensive, more focused, and wider in scope. Many software toolkits, packages, and programs have been produced in recent years, most of them available as open source, open access, or closed source. This study focuses on the Monte Carlo software developed over the years and its application in radiation treatment, radiation dosimetry, nuclear detector design for diagnostic imaging, radiation shielding design, and radiation protection. Ten software toolkits are introduced, and a table of their main characteristics is presented to help someone entering the field of computational physics with Monte Carlo decide which software to use for their experimental needs. These tools allow us to design anything from an X-ray tube to a complete and costly LINAC system with readily changeable features, and from basic X-ray and pair detectors to whole PET, SPECT, and CT systems that can be evaluated, validated, and configured in order to test new ideas. Dose calculation in patients ranges from quick dosimetry estimates with various sources and isotopes in various materials to actual radiation therapies such as brachytherapy and proton therapy. We can also model and simulate treatment planning systems with a variety of characteristics and develop highly exact approaches that real patients will find useful. Shielding is an important feature, not only to protect people from radiation in places like nuclear power plants, nuclear medical imaging facilities, and CT and X-ray examination rooms, but also to prepare and safeguard humanity for interstellar travel and space station missions. This research surveys the computational software available for many applications to date, with an emphasis on radiation dosimetry and its relevance in today's environment.
{"title":"Monte Carlo Computational Software and Methods in Radiation Dosimetry","authors":"N. Chatzisavvas, G. Priniotakis, M. Papoutsidakis, D. Nikolopoulos, I. Valais, Georgios Karpetas","doi":"10.33166/AETIC.2021.03.004","DOIUrl":"https://doi.org/10.33166/AETIC.2021.03.004","url":null,"abstract":"The fast developments and ongoing demands in radiation dosimetry have piqued the attention of many software developers and physicists to create powerful tools to make their experiments more exact, less expensive, more focused, and with a wider range of possibilities. Many software toolkits, packages, and programs have been produced in recent years, with the majority of them available as open source, open access, or closed source. This study is mostly focused to present what are the Monte Carlo software developed over the years, their implementation in radiation treatment, radiation dosimetry, nuclear detector design for diagnostic imaging, radiation shielding design and radiation protection. Ten software toolkits are introduced, a table with main characteristics and information is presented in order to make someone entering the field of computational Physics with Monte Carlo, make a decision of which software to use for their experimental needs. The possibilities that this software can provide us with allow us to design anything from an X-Ray Tube to whole LINAC costly systems with readily changeable features. From basic x-ray and pair detectors to whole PET, SPECT, CT systems which can be evaluated, validated and configured in order to test new ideas. Calculating doses in patients allows us to quickly acquire, from dosimetry estimates with various sources and isotopes, in various materials, to actual radiation therapies such as Brachytherapy and Proton therapy. We can also manage and simulate Treatment Planning Systems with a variety of characteristics and develop a highly exact approach that actual patients will find useful and enlightening. Shielding is an important feature not only to protect people from radiation in places like nuclear power plants, nuclear medical imaging, and CT and X-Ray examination rooms, but also to prepare and safeguard humanity for interstellar travel and space station missions. This research looks at the computational software that has been available in many applications up to now, with an emphasis on Radiation Dosimetry and its relevance in today's environment.","PeriodicalId":36440,"journal":{"name":"Annals of Emerging Technologies in Computing","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43320251","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Building Dictionaries for Low Resource Languages: Challenges of Unsupervised Learning
Pub Date: 2021-07-01 | DOI: 10.33166/AETIC.2021.03.005
D. Mati, Mentor Hamiti, Arsim Susuri, B. Selimi, Jaumin Ajdari
The development of natural language processing resources for Albanian has grown steadily in recent years. This paper presents research on unsupervised learning: the challenges associated with building a dictionary for the Albanian language and creating part-of-speech tagging models. Most languages have their own dictionary, but low-resource languages suffer from a lack of such resources, and natural language processing facilitates the sharing of information and services for their users and communities. The experimental corpus for the Albanian language includes 250K sentences from different disciplines, together with a proposed part-of-speech tag set that can adequately represent the underlying linguistic phenomena. The purpose of this paper is to contribute to the development of Albanian language resources. Experiments with the Albanian corpus revealed that its use of articles and pronouns resembles that of higher-resource languages. According to this study, the total expected frequency has proven effective as a means of correctly tagging words when populating the Albanian language dictionary.
{"title":"Building Dictionaries for Low Resource Languages: Challenges of Unsupervised Learning","authors":"D. Mati, Mentor Hamiti, Arsim Susuri, B. Selimi, Jaumin Ajdari","doi":"10.33166/AETIC.2021.03.005","DOIUrl":"https://doi.org/10.33166/AETIC.2021.03.005","url":null,"abstract":"The development of natural language processing resources for Albanian has grown steadily in recent years. This paper presents research conducted on unsupervised learning-the challenges associated with building a dictionary for the Albanian language and creating part-of-speech tagging models. The majority of languages have their own dictionary, but languages with low resources suffer from a lack of resources. It facilitates the sharing of information and services for users and whole communities through natural language processing. The experimentation corpora for the Albanian language includes 250K sentences from different disciplines, with a proposal for a part-of-speech tagging tag set that can adequately represent the underlying linguistic phenomena. Contributing to the development of Albanian is the purpose of this paper. The results of experiments with the Albanian language corpus revealed that its use of articles and pronouns resembles that of more high-resource languages. According to this study, the total expected frequency as a means for correctly tagging words has been proven effective for populating the Albanian language dictionary.","PeriodicalId":36440,"journal":{"name":"Annals of Emerging Technologies in Computing","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48607303","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Deep Learning-based Dengue Mosquito Detection Method Using Faster R-CNN and Image Processing Techniques
Pub Date: 2021-07-01 | DOI: 10.33166/AETIC.2021.03.002
Rumali Siddiqua, S. Rahman, J. Uddin
Dengue fever, a mosquito-borne disease caused by dengue viruses, is a significant public health concern in many countries, especially in tropical and subtropical regions. In this paper, we introduce a deep learning-based model using Faster R-CNN with InceptionV2, accompanied by image processing techniques, to identify dengue mosquitoes. Performance of the proposed model is evaluated on a custom mosquito dataset collected from the internet and spanning varying environments. The proposed Faster R-CNN with InceptionV2 model is compared with two other state-of-the-art models: R-FCN with ResNet-101 and SSD with MobileNetV2. False positives (FP), false negatives (FN), precision, and recall are used as performance metrics to evaluate the detection accuracy of the proposed model. The experimental results demonstrate that, as a classifier, the Faster R-CNN model achieves 95.19% accuracy and outperforms the other state-of-the-art models, with the R-FCN and SSD models showing 94.20% and 92.55% detection accuracy respectively on the test dataset.
{"title":"A Deep Learning-based Dengue Mosquito Detection Method Using Faster R-CNN and Image Processing Techniques","authors":"Rumali Siddiqua, S. Rahman, J. Uddin","doi":"10.33166/AETIC.2021.03.002","DOIUrl":"https://doi.org/10.33166/AETIC.2021.03.002","url":null,"abstract":"Dengue fever, a mosquito-borne disease caused by dengue viruses, is a significant public health concern in many countries especially in the tropical and subtropical regions. In this paper, we introduce a deep learning-based model using Faster R-CNN with InceptionV2 accompanied by image processing techniques to identify the dengue mosquitoes. Performance of the proposed model is evaluated using a custom mosquito dataset built upon varying environments which are collected from the internet. The proposed Faster R-CNN with InceptionV2 model is compared with other two state-of-art models, R-FCN with ResNet 101 and SSD with MobilenetV2. The False positive (FP), False negative (FN), precision and recall are used as performance measurement tools to evaluate the detection accuracy of the proposed model. The experimental results demonstrate that as a classifier the Faster- RCNN model shows 95.19% of accuracy and outperforms other state-of-the-art models as R-FCN and SSD model show 94.20% and 92.55% detection accuracy, respectively for the test dataset.","PeriodicalId":36440,"journal":{"name":"Annals of Emerging Technologies in Computing","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46504716","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Review on Physiological Signal Based Emotion Detection
Pub Date: 2021-07-01 | DOI: 10.33166/AETIC.2021.03.003
H. Shahzad, Adil Ali Saleem, Amna Ahmed, Kiran Shehzadi, H. Siddiqui
Emotions are feelings resulting from biochemical processes in the body, influenced by a variety of factors such as one's state of mind, situation, experience, and surrounding environment. Emotions have an impact on one's ability to think and act, people interact with each other to share their thoughts and feelings, and emotions play a vital role in the field of medicine and can also strengthen human-computer interaction. Different techniques are used to detect emotions based on facial features, text, speech, and physiological signals. Breathing, one such physiological signal, is a parameter that reflects emotion: the long-held belief that different breathing patterns correlate with different emotions has been reinforced by growing evidence of a connection between breathing and emotion. This manuscript reviews recent investigations of emotion recognition using respiration patterns. The aim of the survey is to summarize the latest technologies and techniques, to help researchers develop a general solution for emotion detection. Various researchers use benchmark datasets, and a few created their own datasets for emotion recognition. It is observed that many investigators used invasive sensors to acquire respiration signals, which makes subjects uncomfortable and self-conscious and thereby affects the results. The subjects involved in the reviewed studies are of similar age and race, which is why the results obtained in those studies cannot be applied to a diverse population. No single global solution exists yet.
{"title":"A Review on Physiological Signal Based Emotion Detection","authors":"H. Shahzad, Adil Ali Saleem, Amna Ahmed, Kiran Shehzadi, H. Siddiqui","doi":"10.33166/AETIC.2021.03.003","DOIUrl":"https://doi.org/10.33166/AETIC.2021.03.003","url":null,"abstract":"Emotions are feelings that are the result of biochemical processes in the body that are influenced by a variety of factors such as one's state of mind, situations, experiences, and surrounding environment. Emotions have an impact on one's ability to think and act. People interact with each other to share their thoughts and feelings. Emotions play a vital role in the field of medicine and can also strengthen the human computer interaction. There are different techniques being used to detect emotions based on facial features, texts, speech, and physiological signals. One of the physiological signal breathing is a parameter which represents an emotion. The rational belief that different breathing habits are correlated with different emotions has expanded the evidence for a connection between breathing and emotion. In this manuscript different recent investigations about the emotion recognition using respiration patterns have been reviewed. The aim of the survey is to sum up the latest technologies and techniques to help researchers develop a global solution for emotional detection system. Various researchers use benchmark datasets and few of them created their own dataset for emotion recognition. It is observed that many investigators used invasive sensors to acquire respiration signals that makes subject uncomfortable and conscious that affects the results. The numbers of subjects involved in the studies reviewed are of the same age and race which is the reason why the results obtained in those studies cannot be applied to diverse population. There is no single global solution exist.","PeriodicalId":36440,"journal":{"name":"Annals of Emerging Technologies in Computing","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44576768","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Use of Synthetic Data to Facilitate Eye Segmentation Using Deeplabv3+
Pub Date: 2021-07-01 | DOI: 10.33166/AETIC.2021.03.001
Melih Öz, T. Danisman, Melih Gunay, Esra Zekiye Şanal, Özgür Duman, J. Ledet
The human eye contains valuable information about an individual's identity and health. Therefore, segmenting the eye into distinct regions is an essential step towards gathering this information precisely. The main challenges in segmenting the human eye include low-light conditions, reflections on the eye, variations in the eyelids, and head positions that make an eye image hard to segment. For this reason, deep neural networks are preferred due to their success on segmentation problems. However, deep neural networks need a large amount of manually annotated training data, and manual annotation is a labor-intensive task; to tackle this problem, we used data augmentation methods on synthetic data. In this paper, we explore whether, with limited data, performance can be enhanced using similar-context data together with image augmentation methods. Our training set consists of 3D synthetic eye images generated with the UnityEyes application, and our test set of manually annotated real-life eye images. We examined the effect of using synthetic eye images with the Deeplabv3+ network under different conditions, applying image augmentation methods to the synthetic data. According to our experiments, the network trained with processed synthetic images alongside real-life images produced better mIoU results than the network trained only with the real-life images of the Base dataset. We also observed an mIoU increase on the test set we created from MICHE II competition images.
{"title":"The Use of Synthetic Data to Facilitate Eye Segmentation Using Deeplabv3+","authors":"Melih Öz, T. Danisman, Melih Gunay, Esra Zekiye Şanal, Özgür Duman, J. Ledet","doi":"10.33166/AETIC.2021.03.001","DOIUrl":"https://doi.org/10.33166/AETIC.2021.03.001","url":null,"abstract":"The human eye contains valuable information about an individual’s identity and health. Therefore, segmenting the eye into distinct regions is an essential step towards gathering this useful information precisely. The main challenges in segmenting the human eye include low light conditions, reflections on the eye, variations in the eyelid, and head positions that make an eye image hard to segment. For this reason, there is a need for deep neural networks, which are preferred due to their success in segmentation problems. However, deep neural networks need a large amount of manually annotated data to be trained. Manual annotation is a labor-intensive task, and to tackle this problem, we used data augmentation methods to improve synthetic data. In this paper, we detail the exploration of the scenario, which, with limited data, whether performance can be enhanced using similar context data with image augmentation methods. Our training and test set consists of 3D synthetic eye images generated from the UnityEyes application and manually annotated real-life eye images, respectively. We examined the effect of using synthetic eye images with the Deeplabv3+ network in different conditions using image augmentation methods on the synthetic data. According to our experiments, the network trained with processed synthetic images beside real-life images produced better mIoU results than the network, which only trained with real-life images in the Base dataset. We also observed mIoU increase in the test set we created from MICHE II competition images.","PeriodicalId":36440,"journal":{"name":"Annals of Emerging Technologies in Computing","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43810856","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Codeword Detection, Focusing on Differences in Similar Words Between Two Corpora of Microblogs
Pub Date: 2021-04-01 | DOI: 10.33166/AETIC.2021.02.008
Takuro Hada, Y. Sei, Yasuyuki Tahara, Akihiko Ohsuga
Recently, the use of microblogs in drug trafficking has surged and become a social problem. A common method applied by cyber patrols to repress crimes such as drug trafficking involves searching for crime-related keywords. However, criminals who post crime-inducing messages exploit "codewords" rather than plain keywords such as enjo kosai, marijuana, and methamphetamine to camouflage their criminal intentions. Research suggests that these codewords change once they gain popularity; thus, effective codeword detection requires significant effort to keep track of the latest codewords. In this study, we focused on how codewords appear and on words likely to be included in incriminating posts, in order to detect codewords with a high likelihood of inclusion in such posts. We proposed new methods for detecting codewords based on differences in word usage and conducted concealed-word detection experiments to evaluate their effectiveness. The results showed that the proposed method could detect concealed words beyond those in the initial list, and to a better degree than the baseline methods. These findings demonstrate the ability of the proposed method to rapidly and automatically detect codewords that change over time, as well as blog posts that instigate crimes, thereby potentially reducing the burden of continuous codeword surveillance.
{"title":"Codeword Detection, Focusing on Differences in Similar Words Between Two Corpora of Microblogs","authors":"Takuro Hada, Y. Sei, Yasuyuki Tahara, Akihiko Ohsuga","doi":"10.33166/AETIC.2021.02.008","DOIUrl":"https://doi.org/10.33166/AETIC.2021.02.008","url":null,"abstract":"Recently, the use of microblogs in drug trafficking has surged and become a social problem. A common method applied by cyber patrols to repress crimes, such as drug trafficking, involves searching for crime-related keywords. However, criminals who post crime-inducing messages maximally exploit “codewords” rather than keywords, such as enjo kosai, marijuana, and methamphetamine, to camouflage their criminal intentions. Research suggests that these codewords change once they gain popularity; thus, effective codeword detection requires significant effort to keep track of the latest codewords. In this study, we focused on the appearance of codewords and those likely to be included in incriminating posts to detect codewords with a high likelihood of inclusion in incriminating posts. We proposed new methods for detecting codewords based on differences in word usage and conducted experiments on concealed-word detection to evaluate the effectiveness of the method. The results showed that the proposed method could detect concealed words other than those in the initial list and to a better degree than the baseline methods. These findings demonstrated the ability of the proposed method to rapidly and automatically detect codewords that change over time and blog posts that instigate crimes, thereby potentially reducing the burden of continuous codeword surveillance.","PeriodicalId":36440,"journal":{"name":"Annals of Emerging Technologies in Computing","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44102068","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}