The requirements elicitation process faces a major challenge: how can stakeholders easily verify requirements? A requirements document allows developers to visualize requirements in a modeling language so that stakeholders share the developers' perspective, and it is also effective for presenting to stakeholders how business processes will be carried out once the requirements are implemented. However, building requirements models is problematic, because business users generally lack the knowledge to construct models in specific notations, and manually transforming natural-language requirements into a semi-formal notation such as BPMN leads to inconsistent element structures. Automatically generating the requirements model therefore becomes crucial, as it forms the basis of the programming process. Existing studies mostly concern auto-completion of modeling languages using a domain ontology as background knowledge, leaving stakeholders to build the initial requirements model with limited knowledge. This paper proposes a methodology for building a business process model in a semi-formal language (BPMN) that represents future business processes using an ontology approach. The research continues a previous study that transforms a requirements list into a requirements ontology to formalize elements such as problem, actor, and process. Using the requirements ontology as input, a rule-based mapping method is proposed to map ontology instances to BPMN elements.
Amarilis Putri Yanuarifiani, Fang-Fang Chua, and Gaik-Yee Chan. "Automating Business Process Model Generation from Ontology-based Requirements." In Proceedings of the 2019 8th International Conference on Software and Computer Applications, February 2019. DOI: https://doi.org/10.1145/3316615.3316683
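The core of such a rule-based mapping can be sketched as a lookup from ontology classes to BPMN element types. This is a minimal illustration only: the class names, the BPMN targets, and the unmapped-instance handling are assumptions for the sketch, not the paper's actual rule set.

```python
# Hypothetical rule table: requirements-ontology class -> BPMN element.
# Class and element names are illustrative, not taken from the paper.
ONTOLOGY_TO_BPMN = {
    "Actor":    "Lane",              # each actor becomes a swimlane
    "Process":  "Task",              # each process step becomes a task
    "Event":    "StartEvent",        # triggering conditions become events
    "Decision": "ExclusiveGateway",  # choices become gateways
}

def map_instances(instances):
    """Map (instance_name, ontology_class) pairs to BPMN elements.

    Instances whose class has no mapping rule are collected separately
    so a modeler can review them instead of silently dropping them."""
    mapped, unmapped = [], []
    for name, onto_class in instances:
        element = ONTOLOGY_TO_BPMN.get(onto_class)
        if element is None:
            unmapped.append(name)
        else:
            mapped.append({"name": name, "bpmn": element})
    return mapped, unmapped

mapped, unmapped = map_instances([
    ("Customer", "Actor"),
    ("Submit order", "Process"),
    ("Order received", "Event"),
])
```

A real implementation would additionally order the generated elements into sequence flows; the table above only covers the element-typing step.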
Research on the understanding of sounds has generally concentrated on speech and music; by contrast, work on environmental and non-speech sound recognition is scarce. This paper carries out a meta-analysis of acoustic transformation and feature-set extraction, which convert the raw environmental sound signal into a parametric representation for the analysis, perception, and labeling stages of sound identification systems. We evaluated and analyzed contemporary methods and feature algorithms for the identification and perception of surrounding sounds: Gammatone spectral coefficients (GSTC) and Mel filterbank energies (FBEs) were extracted, and a convolutional neural network (ConvNet) was applied for acoustic signal classification. The results show that GSTC performs better as a standalone feature than FBEs, but FBEs tend to improve performance when merged with other features. The analysis indicates that combining feature sets is a promising way to achieve higher accuracy than any single feature in classifying environmental sounds, which is useful for advancing intelligent machine listening frameworks.
Ricardo A. Catanghal, T. Palaoag, and C. Dayagdag. "Meta-Analysis of Acoustic Feature Extraction for Machine Listening Systems." In Proceedings of the 2019 8th International Conference on Software and Computer Applications, February 2019. DOI: https://doi.org/10.1145/3316615.3316664
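The FBE features mentioned above rest on the mel-scale frequency warping; as a small self-contained illustration, the snippet below computes the center frequencies of a triangular mel filterbank. The constants are the common HTK mel formula, and the filter count and frequency range are illustrative defaults, not parameters reported in the paper.

```python
import math

def hz_to_mel(f_hz):
    """HTK mel-scale warping of a frequency in Hz."""
    return 2595.0 * math.log10(1.0 + f_hz / 700.0)

def mel_to_hz(m):
    """Inverse of hz_to_mel."""
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filter_centers(f_min, f_max, n_filters):
    """Center frequencies (Hz) of n_filters triangular filters spaced
    uniformly on the mel scale between f_min and f_max."""
    lo, hi = hz_to_mel(f_min), hz_to_mel(f_max)
    step = (hi - lo) / (n_filters + 1)
    return [mel_to_hz(lo + step * (i + 1)) for i in range(n_filters)]

# 26 filters over 0-8 kHz: a typical configuration for 16 kHz audio.
centers = mel_filter_centers(0.0, 8000.0, 26)
```

Filterbank energies are then the log of each filter's dot product with the power spectrum; Gammatone filterbanks (GSTC) replace the triangular shapes with gammatone responses on the ERB scale.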
Muhammad Asyraf Asbullah, M. Ariffin, Z. Mahad, Muhamad Azlan Daud
The AAβ cryptosystem is a well-designed encryption scheme that can secure a message (or plaintext) larger than its key size. Nevertheless, transmitting data larger than the specified bound is a bad idea, and this work explains why. We show that some of the most significant bits of the data can be recovered. Although the complete parameter cannot be recovered in full, leaking even a small number of its most significant bits can render any cryptosystem insecure.
Muhammad Asyraf Asbullah, M. Ariffin, Z. Mahad, and Muhamad Azlan Daud. "(In)Security of the AAβ Cryptosystem for Transmitting Large Data." In Proceedings of the 2019 8th International Conference on Software and Computer Applications, February 2019. DOI: https://doi.org/10.1145/3316615.3316661
Book recommender systems play an important role in book search engines, digital libraries, and book shopping sites. In the field of recommender systems, data processing, the selection of suitable features, and the choice of classification method largely determine a system's performance. This paper presents solutions for data processing and for feature and classifier selection in order to build an efficient book recommender system. The Book-Crossing dataset, which has been studied in many book recommender systems, is taken as a case study. The attributes of books are analyzed and processed to increase classification accuracy. Well-known classification algorithms, such as Naïve Bayes and decision trees, are used to predict user interest in books and are evaluated in several experiments. Naïve Bayes proves to be the best choice for book recommendation, with acceptable run time and accuracy.
Thi Thanh Sang Nguyen. "Model-Based Book Recommender Systems using Naïve Bayes enhanced with Optimal Feature Selection." In Proceedings of the 2019 8th International Conference on Software and Computer Applications, February 2019. DOI: https://doi.org/10.1145/3316615.3316727
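To make the Naïve Bayes approach concrete, here is a toy classifier over categorical book attributes. The feature names and training rows are invented for illustration and are not drawn from the Book-Crossing dataset; the smoothing scheme (add-one over observed values plus one unseen bucket) is likewise a sketch choice.

```python
import math
from collections import Counter, defaultdict

def train(rows):
    """rows: list of (feature_dict, label) pairs.
    Returns label priors and per-(label, feature) value counts."""
    priors = Counter(label for _, label in rows)
    counts = defaultdict(Counter)  # (label, feature) -> Counter of values
    for feats, label in rows:
        for f, v in feats.items():
            counts[(label, f)][v] += 1
    return priors, counts

def predict(priors, counts, feats):
    """Pick the label maximizing log P(label) + sum log P(value|label),
    with add-one smoothing so unseen values get nonzero probability."""
    total = sum(priors.values())
    best, best_lp = None, -math.inf
    for label, n in priors.items():
        lp = math.log(n / total)
        for f, v in feats.items():
            c = counts[(label, f)]
            lp += math.log((c[v] + 1) / (sum(c.values()) + len(c) + 1))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

priors, counts = train([
    ({"publisher": "A", "decade": "1990s"}, "like"),
    ({"publisher": "A", "decade": "2000s"}, "like"),
    ({"publisher": "B", "decade": "1990s"}, "dislike"),
])
guess = predict(priors, counts, {"publisher": "A", "decade": "2000s"})
# → "like": publisher A strongly favors the "like" class.
```

The paper's contribution is in how the attributes are processed and selected before a step like this; the classifier itself is standard.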
Campus networks offer many websites but no targeted recommendation, which makes it difficult for users to find information resources of high interest and quality. To address this, this paper proposes an implicit-feedback recommendation algorithm for campus networks based on the user's changing interests and user influence. Building on the traditional collaborative filtering algorithm, it introduces a time function that adapts to the user's changing interests, together with user-influence factors. The time-weighted score matrix is combined with the influence matrix, so that user similarity is no longer computed from a single signal, improving both the accuracy and the explainability of the recommendation results. Experimental results show that the algorithm effectively mitigates data sparsity and the cold-start problem, and yields better recommendation quality than the traditional collaborative filtering algorithm.
Qiaoqiao Tan, Fang'ai Liu, and Shuning Xing. "Implicit Recommendation with Interest Change and User Influence." In Proceedings of the 2019 8th International Conference on Software and Computer Applications, February 2019. DOI: https://doi.org/10.1145/3316615.3316680
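The time-weighting idea above can be sketched as follows: older ratings contribute less to user similarity. The exponential decay and the 30-day half-life are illustrative choices, not the paper's actual time function, and the influence-matrix part is omitted here.

```python
import math

def decayed(rating, age_days, half_life=30.0):
    """Exponentially down-weight a rating by its age in days."""
    return rating * math.exp(-math.log(2) * age_days / half_life)

def similarity(u, v):
    """Cosine similarity between two users over co-rated items, with
    each rating weighted by its age. u, v: item -> (rating, age_days)."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    num = sum(decayed(*u[i]) * decayed(*v[i]) for i in common)
    du = math.sqrt(sum(decayed(*u[i]) ** 2 for i in common))
    dv = math.sqrt(sum(decayed(*v[i]) ** 2 for i in common))
    return num / (du * dv)

# Bob's ratings are three months old, so they carry little weight,
# but the similarity direction is still recovered from the co-ratings.
alice = {"item1": (5, 1),  "item2": (3, 2)}
bob   = {"item1": (4, 90), "item2": (2, 95)}
sim = similarity(alice, bob)
```

In the full algorithm this similarity would be blended with a user-influence score before selecting neighbors for prediction.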
With the widespread adoption of electronic voting, various voting protocols have been proposed. Voting protocols need to satisfy security requirements, including privacy protection and the prevention of illegal voting (e.g., double voting). Our research focuses on the most important property of voting protocols, namely whether all votes are reflected in the voting results accurately. We formalized and verified this for one voting protocol using strand space analysis. We can also consider multiple security requirements depending on the extent to which the voting result is reflected accurately. These properties are discussed.
Shigeki Hagihara, Masaya Shimakawa, and N. Yonezaki. "Verification of Verifiability of Voting Protocols by Strand Space Analysis." In Proceedings of the 2019 8th International Conference on Software and Computer Applications, February 2019. DOI: https://doi.org/10.1145/3316615.3316629
Reusing software components from third-party vendors is a key technique for achieving shorter time-to-market and better software quality. These components, known as OTS (Off-the-Shelf) components, come in two types: COTS (Commercial Off-The-Shelf) and OSS (Open-Source Software). To use OSS components effectively, it is necessary to figure out how development processes and methods need to be adapted. Most current studies are either theoretical proposals without empirical assessment or case studies in similar project contexts. More empirical studies are therefore needed on how process improvement and risk management can be performed, and on what results they produce in various project contexts.
Nguyen Duc Linh, P. D. Hung, V. Diep, and Ta Duc Tung. "Risk Management in Projects Based on Open-Source Software." In Proceedings of the 2019 8th International Conference on Software and Computer Applications, February 2019. DOI: https://doi.org/10.1145/3316615.3316648
People today are increasingly prone to spinal curvature disorders due to poor habits, especially among those who spend long hours at a desk. The disorder is diagnosed with radiography and other conventional methods; conventional approaches such as goniometry require human skill, can be time consuming, and eventually exhaust logistical resources. These problems can be addressed with 3D photogrammetry. This research uses a Kinect to obtain a 3D model of the human body and finds the optimum parameters for capturing that model for body posture screening. The optimum capture parameters are a 1.3 m distance between subject and camera, 80 lux illumination, and a chest-level camera position. The 3D model reconstructed with these parameters yields 100% accuracy on the points to be assessed. This paper highlights the validation of the optimum parameters affecting the capture of reconstructed 3D human models for measuring spinal curvature.
Chua Shanyu, L. C. Chin, S. Basah, and A. F. Azizan. "Development of Assessment System for Spine Curvature Angle Measurement." In Proceedings of the 2019 8th International Conference on Software and Computer Applications, February 2019. DOI: https://doi.org/10.1145/3316615.3316647
Saurabh Shukla, M. Hassan, L. T. Jung, A. Awang, Muhammad Khalid Khan
The healthcare Internet-of-Things comprises a huge number of wearable sensors and interconnected computers. The high volume of IoT data transacted over servers overloads them with traffic and causes network congestion. Cloud servers typically analyze, retrieve, and store the large volumes of data generated by IoT devices, but sending real-time healthcare data from cloud servers to end users faces several challenges: high computational latency, high communication latency, and high network latency. Because of these challenges, IoT systems may fail to deliver data to end users in real time. Fog nodes can play a major role in reducing this delay and traffic, and can thereby increase system performance. In this paper, we propose a 3-tier architecture and an analytical model for healthcare IoT using a hybrid approach that combines fuzzy logic and reinforcement learning in a fog computing environment, with the aim of minimizing network latency. The proposed model and 3-tier architecture are simulated using the iFogSim simulator.
Saurabh Shukla, M. Hassan, L. T. Jung, A. Awang, and Muhammad Khalid Khan. "A 3-Tier Architecture for Network Latency Reduction in Healthcare Internet-of-Things Using Fog Computing and Machine Learning." In Proceedings of the 2019 8th International Conference on Software and Computer Applications, February 2019. DOI: https://doi.org/10.1145/3316615.3318222
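To give the fuzzy-logic half of such a hybrid a concrete shape, the sketch below scores whether a request should be served at a fog node rather than forwarded to the cloud. The membership shapes, thresholds, and the single max-rule are invented for illustration; the paper's actual fuzzy sets and its reinforcement-learning component are not reproduced here.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def offload_score(fog_load, latency_need):
    """Score in [0, 1]; higher means prefer serving at the fog node.

    fog_load: current fog-node utilisation in [0, 1].
    latency_need: request urgency in [0, 1] (1 = hard real-time)."""
    load_low = tri(fog_load, -0.5, 0.0, 0.6)   # "fog node is lightly loaded"
    urgent = tri(latency_need, 0.4, 1.0, 1.5)  # "request is latency-critical"
    # Rule: serve at the fog if the load is low OR the request is urgent.
    return max(load_low, urgent)

# A lightly loaded fog node receiving an urgent request scores high.
score = offload_score(fog_load=0.2, latency_need=0.9)
```

A learning component would then tune these memberships (or the final routing policy) from observed latencies instead of keeping them fixed.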
Identifying the source of malicious information in social networks is significant, since the diffusion of such information is already a problem that can seriously affect social stability. In this paper, we develop a propagation-path-based approach in which the estimated information source is the root node of the propagation path that most likely leads to the monitored network state. When the information diffusion process follows the Susceptible-Infected (SI) model and satisfies the instant-forwarding hypothesis, we prove that the proposed source estimator is the root node of the network's shortest arborescence. Finally, multiple simulations on networks with different structures show that our method outperforms existing algorithms.
Zhong Li, Chunhe Xia, Tianbo Wang, and Xiaochen Liu. "An Information Source Identification Algorithm Based on Shortest Arborescence of Network." In Proceedings of the 2019 8th International Conference on Software and Computer Applications, February 2019. DOI: https://doi.org/10.1145/3316615.3316686
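The intuition behind shortest-arborescence source estimation can be illustrated with a simplified undirected version: under SI spreading with prompt forwarding, a good source candidate reaches every infected node along short paths. The sketch below scores each candidate by the sum of its BFS distances to the infected set; the paper's directed-arborescence construction is more involved, and the graph here is invented for the example.

```python
from collections import deque

def bfs_dist(adj, src):
    """Hop distances from src to every reachable node (BFS)."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def estimate_source(adj, infected):
    """Return the infected node minimizing total distance to the
    infected set -- a proxy for 'most likely propagation root'."""
    best, best_cost = None, float("inf")
    for cand in infected:
        d = bfs_dist(adj, cand)
        cost = sum(d[v] for v in infected if v in d)
        if cost < best_cost:
            best, best_cost = cand, cost
    return best

# Star-like graph: node 0 is the hub, so it is the natural source guess.
adj = {0: [1, 2, 3], 1: [0, 4], 2: [0], 3: [0], 4: [1]}
src = estimate_source(adj, infected={0, 1, 2, 3, 4})
# → 0
```

Replacing the sum with the propagation-likelihood of the rooted shortest-path tree, on a directed network, recovers the flavor of the paper's estimator.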