Pub Date: 2020-12-26 | DOI: 10.5121/csit.2020.102006
Kshitiz Badola, Ajay J. Joshi, Deepesh Sengar
In today’s world, with the ever-growing range of products on offer, customers sometimes struggle to decide whether they are interested in buying a particular product. The authors therefore propose a framework that surfaces only items of interest to a given consumer. Whenever a consumer watches a YouTube video, the framework breaks it down into individual frames (at the video’s frame rate) and applies object detection to identify every object in each frame. To determine whether the consumer is interested in a detected object, a facial-emotion detector checks whether the viewer is happy, surprised, neutral, or showing another emotion while the products in that frame are on screen. Merging only the items that coincide with the consumer’s positive emotions, an Amazon online-marketing technique is then used to recommend the products selected by the framework.
Title: Product Recommendation using Object Detection from Video, Based on Facial Emotions
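The abstract's selection step (keep only objects seen while the viewer's emotion was positive) can be sketched in a few lines. The per-frame detections and emotion labels below are hypothetical stand-ins for the outputs of an object detector and a facial-emotion classifier, not the paper's actual pipeline:

```python
# Keep only objects that were on screen while the detected emotion
# was positive. Frame data here is a made-up example.
POSITIVE_EMOTIONS = {"happy", "surprised"}

def items_of_interest(frames):
    """frames: iterable of (detected_objects, viewer_emotion) pairs."""
    selected = set()
    for objects, emotion in frames:
        if emotion in POSITIVE_EMOTIONS:
            selected.update(objects)
    return selected

frames = [
    (["watch", "sunglasses"], "neutral"),
    (["watch", "backpack"], "happy"),
    (["laptop"], "surprised"),
]
print(sorted(items_of_interest(frames)))  # ['backpack', 'laptop', 'watch']
```

The selected set would then be handed to the recommendation back end.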
Pub Date: 2020-12-26 | DOI: 10.5121/csit.2020.102004
Geeta Kocher, G. Kumar
With the advancement of internet technology, the number of threats is also rising exponentially. To reduce the impact of these threats, researchers have proposed many solutions for intrusion detection. In the literature, various machine learning classifiers are trained on older datasets for intrusion detection, which limits their detection accuracy, so there is a need to train machine learning classifiers on more recent data. In this paper, the recent UNSW-NB15 dataset is used to train machine learning classifiers. On the basis of theoretical analysis, a taxonomy is proposed in terms of lazy and eager learners. From this taxonomy, K-Nearest Neighbors (KNN), Stochastic Gradient Descent (SGD), Decision Tree (DT), Random Forest (RF), Logistic Regression (LR) and Naïve Bayes (NB) classifiers are selected for training. The performance of these classifiers is tested in terms of Accuracy, Mean Squared Error (MSE), Precision, Recall, F1-Score, True Positive Rate (TPR) and False Positive Rate (FPR) on the UNSW-NB15 dataset, and a comparative analysis of these machine learning classifiers is carried out. The experimental results show that the RF classifier outperforms the other classifiers.
Title: Performance Analysis of Machine Learning Classifiers for Intrusion Detection using UNSW-NB15 Dataset
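For reference, the evaluation metrics listed in the abstract above (except MSE) all fall out of a binary confusion matrix. A minimal sketch with made-up counts, using the standard definitions rather than the paper's code:

```python
# Standard binary-classification metrics from confusion-matrix counts:
# tp = true positives, fp = false positives, tn = true negatives,
# fn = false negatives. Recall and TPR are the same quantity.
def binary_metrics(tp, fp, tn, fn):
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)            # identical to TPR
    f1 = 2 * precision * recall / (precision + recall)
    fpr = fp / (fp + tn)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1, "tpr": recall, "fpr": fpr}

m = binary_metrics(tp=90, fp=10, tn=80, fn=20)  # hypothetical counts
print(round(m["accuracy"], 3), round(m["f1"], 3))  # 0.85 0.857
```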
Pub Date: 2020-12-19 | DOI: 10.5121/csit.2020.101911
Björn Friedrich, Enno-Edzard Steen, Sebastian J. F. Fudickar, A. Hein
Continuous monitoring of the physical strength and mobility of elderly people is important for maintaining their health and treating diseases at an early stage. However, frequent screenings by physicians exceed available logistic capacities. An alternative approach is the automatic and unobtrusive collection of functional measures by ambient sensors. In this publication, we show the correlation between data from ambient motion sensors and the well-established Short Physical Performance Battery and Tinetti mobility assessments. We use the average number of motion-sensor events for correlation with the assessment scores. The evaluation on a real-world dataset shows a moderate to strong correlation with the scores of standardised geriatric physical assessments.
Title: Assessing the Mobility of Elderly People in Domestic Smart Home Environments
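The correlation step described above reduces to a Pearson coefficient between per-person sensor-event counts and assessment scores. A plain-Python sketch on made-up numbers (the study's real dataset is not reproduced here):

```python
import math

# Pearson correlation between two equal-length sequences.
def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

events_per_day = [120, 180, 150, 90, 200]   # hypothetical sensor counts
sppb_scores    = [7, 11, 9, 5, 12]          # hypothetical SPPB scores (0-12)
print(round(pearson(events_per_day, sppb_scores), 2))  # close to 1 for this toy data
```

A coefficient around 0.5–0.7 on real data would match the paper's "moderate to strong" finding.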
Pub Date: 2020-12-19 | DOI: 10.5121/csit.2020.101905
M. Mamman, Z. Hanapi
The goal of improved wireless communication between interconnected objects in a network has long been anticipated. The present fourth-generation (4G) Long Term Evolution (LTE) network cannot support the variety of services the future will need, whereas the fifth-generation (5G) network is faster, more efficient, more reliable, and more flexible. The 5G network and call admission control (CAC) are key building blocks of the smart cities envisioned for upcoming 5G network technology. It is predicted that, with substantial CAC in smart-city environments where millions of wireless devices are connected, communication will be granted based on latency, speed, and cost. Furthermore, the present CAC algorithm suffers performance deterioration under the 4G network because of the adaptive threshold value used to determine the strength of the network. In this paper, a novel CAC algorithm that uses a dynamic threshold value for smart cities in the 5G network is proposed to address this performance deterioration. Simulation is used to evaluate the efficacy of the proposed algorithm, and the results show that it performs significantly better than other algorithms on the metrics measured.
Title: An Efficient Dynamic Call Admission Control for 4G and 5G Networks
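The abstract does not give the dynamic-threshold formula, but the idea it describes can be sketched as follows: instead of a fixed admission threshold, the threshold on admitted load tightens as priority traffic grows. All parameter names and the scaling rule here are hypothetical illustrations, not the paper's algorithm:

```python
# Hypothetical dynamic CAC: the admission threshold shrinks as
# priority (e.g. handoff) traffic consumes more of the capacity.
def dynamic_threshold(base, priority_load, capacity):
    """Scale the admission threshold down as priority traffic grows."""
    return base * (1 - priority_load / capacity)

def admit_call(current_load, call_bw, priority_load, capacity, base=0.8):
    threshold = dynamic_threshold(base, priority_load, capacity)
    return (current_load + call_bw) / capacity <= threshold

# With light priority traffic the call fits; with heavy priority
# traffic the same call is rejected.
print(admit_call(current_load=60, call_bw=5, priority_load=10, capacity=100))  # True
print(admit_call(current_load=60, call_bw=5, priority_load=40, capacity=100))  # False
```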
Pub Date: 2020-12-19 | DOI: 10.5121/csit.2020.101904
Mahabaleshwar S. Kabbur, V. Kumar
Vehicular ad-hoc networks (VANETs) have gained considerable attention from the research community because they enable autonomous vehicular communication. Efficient communication is a prime concern in these networks, and several techniques have been introduced to improve it. Security and privacy are likewise prime aspects of VANETs, and maintaining data security and privacy in such highly dynamic networks is a challenging task. Several techniques based on cryptography and key exchange have been introduced recently; however, they address only a limited range of security threats. Hence, this work introduces a novel approach to key management and distribution in VANETs to secure the network and its components. The approach is then combined with a cryptographic mechanism to secure data packets and is accordingly named Secure Group Key Management and Cryptography (SGKC). The experimental study shows significant improvements in network performance. This SGKC approach will help the VANET user community perform secure data transmission.
Title: Mar_Security: A Joint Scheme for Improving the Security in VANET using Secure Group Key Management and Cryptography (SGKC)
Pub Date: 2020-12-19 | DOI: 10.5121/csit.2020.101909
Ruben Ventura
This paper presents new and evolved methods for performing Blind SQL Injection attacks. These are much faster than currently available public tools and techniques, owing to optimization and redesign ideas that attack databases more efficiently using cleverer injection payloads; this is the result of years of private research. Implementing these methods in carefully crafted code has resulted in the development of the fastest tools in the world for extracting information from a database through Blind SQL Injection vulnerabilities, around 1600% faster than the currently most popular tools. The nature of these attack vectors is explained in this paper, including their intrinsic details.
Title: Blind SQL Injection Attacks Optimization
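The paper's specific payload optimizations are not spelled out in the abstract, but the classic speed-up this line of work builds on is extracting each character by binary search over its code point (at most 7 boolean requests) instead of testing up to 127 values linearly. A self-contained simulation, with a local function standing in for the remote true/false oracle:

```python
# Simulated blind extraction: `oracle` stands in for a request whose
# injected condition is  AND ASCII(SUBSTRING(secret, pos+1, 1)) > value
SECRET = "admin_pw"  # hypothetical value the attacker cannot read directly

def oracle(pos, value):
    return pos < len(SECRET) and ord(SECRET[pos]) > value

def extract_char(pos):
    lo, hi = 0, 127
    while lo < hi:                 # binary search: at most 7 queries
        mid = (lo + hi) // 2
        if oracle(pos, mid):
            lo = mid + 1
        else:
            hi = mid
    return chr(lo) if lo else None  # 0 means no character at this position

recovered = ""
pos = 0
while (c := extract_char(pos)):
    recovered += c
    pos += 1
print(recovered)  # admin_pw
```

Over a network, the request count per character dominates extraction time, which is why payload-level optimizations of this kind yield large constant-factor speed-ups.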
Pub Date: 2020-12-19 | DOI: 10.5121/csit.2020.101902
Xiang-Song Zhang, Wei-Xin Gao, Shihuan Zhu
To eliminate mixed salt-and-pepper and Gaussian noise in X-ray weld images, the extreme-value characteristics of salt-and-pepper noise are first used to separate the mixed noise, and a non-local means filtering algorithm is used for denoising. Because the exponential weighted kernel function is overly smooth and tends to blur image details, a weighted Gaussian kernel function with a cosine coefficient is adopted, yielding an improved non-local means denoising algorithm. Experimental results show that the new algorithm reduces noise while retaining the details of the original image, and the peak signal-to-noise ratio increases by 1.5 dB. An adaptive salt-and-pepper noise elimination algorithm is also proposed, which automatically adjusts the filtering window and estimates the noise probability. First, a median filter is applied to the image, and the result is compared with the unfiltered image to identify noise points. The weighted average of the middle three groups of data under each filtering window is then used to estimate the image noise probability. Before filtering, obvious noise points are removed by thresholding, and the central pixel is estimated using inverse-square distance weighting from the centre of the window. Finally, according to Takagi-Sugeno (T-S) fuzzy rules, the output estimates of the different models are fused using the noise probability. Experimental results show that the algorithm estimates noise automatically and adapts its window; after filtering, the standard mean square deviation is reduced by more than 20%, and the processing speed is more than doubled. For enhancement, a nonlinear image enhancement method is proposed that adjusts its parameters adaptively and enhances the weld area automatically rather than the background, achieving good subjective visual quality. Compared with the traditional method, the enhancement effect is better and more in line with the needs of the industrial field.
Title: Research on Noise Reduction and Enhancement of Weld Image
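The separation step in the abstract above relies on salt-and-pepper pixels sitting at the grey-level extremes (0 or 255), so they can be replaced by a local median while leaving other pixels (and thus the Gaussian-noise component) for the non-local means stage. A plain-Python sketch on a toy image, with border pixels skipped for brevity:

```python
# Replace only extreme-valued pixels (likely salt or pepper) with the
# median of their 3x3 neighbourhood; all other pixels pass through.
def median_filter_extremes(img):
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            if img[i][j] in (0, 255):           # likely salt or pepper
                window = [img[i + di][j + dj]
                          for di in (-1, 0, 1) for dj in (-1, 0, 1)]
                window.sort()
                out[i][j] = window[4]           # median of the 3x3 window
    return out

img = [[100, 102, 101, 99],
       [101, 255, 100, 98],     # 255 = salt pixel
       [100, 101,   0, 97],     # 0 = pepper pixel
       [ 99, 100,  98, 96]]
cleaned = median_filter_extremes(img)
print(cleaned[1][1], cleaned[2][2])  # 101 98
```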
Pub Date: 2020-12-19 | DOI: 10.5121/csit.2020.101901
Una M. Kelly, L. Spreeuwers, R. Veldhuis
State-of-the-art face recognition systems (FRS) are vulnerable to morphing attacks, in which two photos of different people are merged in such a way that the resulting photo resembles both people. Such a photo could be used to apply for a passport, allowing both people to travel with the same identity document. Research has so far focussed on developing morphing detection methods. We suggest that it might instead be worthwhile to make face recognition systems themselves more robust to morphing attacks. We show that deep-learning-based face recognition can be improved simply by treating morphed images just like real images during training but also that, for significant improvements, more work is needed. Furthermore, we test the performance of our FRS on morphs of a type not seen during training. This addresses the problem of overfitting to the type of morphs used during training, which is often overlooked in current research.
Title: Improving Deep-Learning-based Face Recognition to Increase Robustness against Morphing Attacks
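"Treating morphed images just like real images during training" can be read as letting each morph contribute a training sample for both identities it was blended from. A data-preparation sketch under that reading; the file names and the pairing convention are hypothetical, not the paper's actual setup:

```python
# Build (image, identity) training pairs where each morph is listed
# once per contributing identity. Purely illustrative data layout.
def training_pairs(real_images, morphs):
    """real_images: {identity: [paths]}; morphs: [(path, id_a, id_b)]."""
    pairs = [(path, ident)
             for ident, paths in real_images.items() for path in paths]
    for path, id_a, id_b in morphs:
        pairs.append((path, id_a))   # morph counts as a sample of id_a
        pairs.append((path, id_b))   # ...and as a sample of id_b
    return pairs

real = {"alice": ["alice_1.png"], "bob": ["bob_1.png"]}
morphs = [("alice_bob_morph.png", "alice", "bob")]
print(len(training_pairs(real, morphs)))  # 4
```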
Pub Date: 2020-12-19 | DOI: 10.5121/csit.2020.101906
Cao Xiaopeng, Qu Hongyan
Massive network traffic and high-dimensional features degrade detection performance. To improve the efficiency and performance of detection, a whale-optimization sparse autoencoder model (WO-SAE) is proposed. First, a sparse autoencoder performs unsupervised training on high-dimensional raw data and extracts low-dimensional features of network traffic. Second, the key parameters of the sparse autoencoder are optimized automatically by the whale optimization algorithm to achieve better feature extraction. Finally, a gated recurrent unit is used to classify the time-series data. The experimental results show that the proposed model is superior to existing detection algorithms in accuracy, precision, and recall, reaching an accuracy of 98.69%. The WO-SAE model is a novel approach that reduces the user's reliance on deep learning expertise.
Title: Deep Feature Extraction via Sparse Autoencoder for Intrusion Detection System
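The whale optimization algorithm used above for hyperparameter tuning can be illustrated on a toy problem. This is a minimal WOA minimising a 2-D sphere function, standing in for the paper's objective (where `objective` would instead train the sparse autoencoder and return a validation loss); population size, iteration count and bounds are arbitrary illustrative choices:

```python
import math, random

def objective(x):
    return sum(v * v for v in x)          # toy objective: minimum 0 at the origin

def woa(dim=2, n_whales=20, iters=200, lb=-10.0, ub=10.0, seed=1):
    rng = random.Random(seed)
    whales = [[rng.uniform(lb, ub) for _ in range(dim)] for _ in range(n_whales)]
    best = min(whales, key=objective)[:]
    for t in range(iters):
        a = 2 - 2 * t / iters             # control parameter decreases 2 -> 0
        for w in whales:
            p = rng.random()
            A = 2 * a * rng.random() - a
            C = 2 * rng.random()
            for d in range(dim):
                if p < 0.5:
                    # encircle the best whale, or explore via a random whale
                    ref = best[d] if abs(A) < 1 else rng.choice(whales)[d]
                    w[d] = ref - A * abs(C * ref - w[d])
                else:
                    # logarithmic-spiral update around the best whale
                    l = rng.uniform(-1, 1)
                    w[d] = (abs(best[d] - w[d]) * math.exp(l)
                            * math.cos(2 * math.pi * l) + best[d])
                w[d] = min(max(w[d], lb), ub)
            if objective(w) < objective(best):
                best = w[:]
    return best

best = woa()
print(objective(best))  # a small value close to 0
```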
A serious video game is an easy and practical way to get a player to learn about a complex subject, such as computing integrals, applying first aid, or even teaching children to read and write in their native language or another language. To develop a serious video game, therefore, one needs a guide containing the basic elements its software components should include. This research presents a quality model for evaluating playability, taking the attributes of usability and understandability at the level of software components. The model can serve as a set of parameters for measuring the quality of a serious video game as a software product before and during its development, providing a margin with the essential elements a serious video game must have so that players reach the desired objective of learning while playing. The experimental results show that the serious video game used in the test case scores 88.045% against the proposed quality model, a margin that can vary according to the needs of the implemented video game.
Title: Quality Model based on Playability for the Understandability and Usability Components in Serious Video Games
Authors: Iván Humberto Fuentes Chab, Damián Uriel Rosado Castellanos, Olivia Graciela Fragoso Diaz, Ivette Stephany Pacheco Farfán
Pub Date: 2020-12-19 | DOI: 10.5121/csit.2020.101912
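A percentage score such as the 88.045% above typically comes from a weighted aggregation of attribute-level scores. A generic sketch of such a computation; the sub-attributes and weights below are hypothetical stand-ins, not the paper's actual model:

```python
# Weighted aggregation of attribute scores (0-100) into one
# playability percentage. Attributes and weights are made up.
def playability_score(scores, weights):
    """scores, weights: dicts keyed by attribute; weights sum to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(scores[k] * weights[k] for k in scores)

scores  = {"understandability": 90.0, "usability": 85.0, "learnability": 88.0}
weights = {"understandability": 0.4, "usability": 0.4, "learnability": 0.2}
print(round(playability_score(scores, weights), 3))  # 87.6
```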