"Data sharing platform and security mechanism based on cloud computing under the Internet of Things" by Jiejian Cai and J. Wang. Open Computer Science 12(1): 403–415 (2022). DOI: 10.1515/comp-2022-0256

Abstract: With the rapid development of information technology, data and information security problems are becoming increasingly serious: data can be leaked during everyday Internet access or communication. When data are shared, the security mechanism of the sharing platform should therefore be analyzed. This article studies the security mechanisms of cloud-computing-based data sharing platforms in the Internet of Things era. It presents an attribute-based encryption (ABE) algorithm, interprets the algorithm in detail, and analyzes security problems in cloud-based data sharing. In the experiments, the ABE algorithm took an average of 11 s over five trials, while the other two methods took 51.8 and 31.6 s. For different numbers of encryptions on the same data, ABE consistently took less time than the other two methods and was therefore more efficient. Attribute-based encryption thus offers clear advantages.
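The abstract describes ABE only at a high level, and the paper's concrete scheme is not reproduced in this record. As a rough, non-cryptographic illustration of the core idea (a ciphertext is bound to an attribute policy, and only a key whose attribute set satisfies that policy can decrypt), here is a minimal sketch of the policy-satisfaction check; the policy encoding and attribute names are illustrative assumptions, and a real ABE scheme enforces this check with pairing-based cryptography rather than a boolean test:

```python
# Sketch: attribute-policy satisfaction, the access gate that an ABE
# scheme enforces cryptographically. A policy is either an attribute
# string or a nested ('AND'|'OR', [children]) tuple.

def satisfies(policy, attrs):
    """Return True if the attribute set satisfies the access policy."""
    if isinstance(policy, str):
        return policy in attrs
    op, children = policy
    results = (satisfies(c, attrs) for c in children)
    return all(results) if op == "AND" else any(results)

# Example policy: doctor AND (cardiology OR oncology).
policy = ("AND", ["doctor", ("OR", ["cardiology", "oncology"])])
print(satisfies(policy, {"doctor", "cardiology"}))  # True
print(satisfies(policy, {"nurse", "cardiology"}))   # False
```

In a real ABE deployment, a data key would only become recoverable when this condition holds, so the access decision cannot be bypassed by the storage provider.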
"Rough set-based entropy measure with weighted density outlier detection method" by T. Sangeetha and Geetha Mary Amalanathan. Open Computer Science 12(1): 123–133 (2022). DOI: 10.1515/comp-2020-0228

Abstract: Rough set theory is a powerful mathematical model for handling imprecise and ambiguous data. Many existing multigranulation rough set models derive from the multigranulation decision-theoretic rough set framework, and multigranulation rough set theory is desirable in many practical applications such as high-dimensional knowledge discovery, distributional information systems, and multisource data processing. So far, research on multigranulation rough sets has addressed feature extraction and selection, data reduction, decision rules, and pattern extraction. The proposed approach instead focuses on anomaly detection in qualitative data with multiple granules. Approximations of the dataset are derived through a multiequivalence relation, and a rough set-based entropy measure with a weighted density method is then applied to every object and attribute. To detect outliers, a threshold value is fixed based on the estimated weight. The performance of the algorithm is evaluated and compared with existing outlier detection algorithms on datasets from the UCI repository, such as breast cancer, chess, and car evaluation.
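The record above does not include the authors' formulas, but the basic rough set machinery it builds on (equivalence classes, lower/upper approximations, and a partition entropy) can be sketched as follows; the toy data and the plain Shannon entropy are illustrative assumptions, not the paper's weighted density measure:

```python
from collections import defaultdict
import math

def equivalence_classes(objects, attrs):
    """Partition object ids by their values on the given attributes."""
    classes = defaultdict(set)
    for oid, row in objects.items():
        classes[tuple(row[a] for a in attrs)].add(oid)
    return list(classes.values())

def approximations(objects, attrs, target):
    """Rough-set lower/upper approximation of a target set of object ids."""
    lower, upper = set(), set()
    for eq in equivalence_classes(objects, attrs):
        if eq <= target:        # class entirely inside the target
            lower |= eq
        if eq & target:         # class overlapping the target
            upper |= eq
    return lower, upper

def partition_entropy(objects, attrs):
    """Shannon entropy (bits) of the partition induced by attrs."""
    n = len(objects)
    return -sum((len(eq) / n) * math.log2(len(eq) / n)
                for eq in equivalence_classes(objects, attrs))

data = {1: {"color": "red", "size": "s"},
        2: {"color": "red", "size": "s"},
        3: {"color": "blue", "size": "l"},
        4: {"color": "red", "size": "l"}}
lo, up = approximations(data, ["color"], {1, 2})
print(lo, up)  # the target {1, 2} is rough w.r.t. color: lower ⊆ {1,2} ⊆ upper
```

An entropy-based outlier method would score objects by how much the partition entropy shifts when they are removed, then threshold those scores.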
"Mass data processing and multidimensional database management based on deep learning" by Haijie Shen, Y. Li, Xinzhi Tian, Xiaofan Chen, Caihong Li, Qian Bian, Zhenduo Wang, and Weihua Wang. Open Computer Science 12(1): 300–313 (2022). DOI: 10.1515/comp-2022-0251

Abstract: With the rapid development of the Internet of Things (IoT), the requirements on massive data processing technology keep rising. Because IoT data are real-time, massive, polymorphic, and heterogeneous, traditional computer data processing can no longer deliver fast, simple, and efficient analysis for today's workloads. Massive heterogeneous data from the different IoT subsystems must be processed and stored uniformly, so a mass data processing method must be able to integrate multiple networks, multiple data sources, and heterogeneous massive data, and to process all of them. This article therefore proposes massive data processing and multidimensional database management based on deep learning. It studies the basic techniques of massive data processing, including MapReduce, parallel data technology, distributed in-memory databases, and distributed real-time databases based on cloud computing, and constructs a deep-learning-based massive data fusion model together with a multidimensional online analytical processing (OLAP) model for the multidimensional database. The performance, scalability, load balancing, and data querying of the deep-learning-based multidimensional database are then analyzed. The results show that the accuracy of multidimensional database queries is as high as 100%, while the average query time is only 0.0053 s, far below that of a general database.
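Among the techniques the abstract lists, MapReduce is the most self-contained. A minimal single-process sketch of its map, shuffle, reduce pipeline (illustrative only, not the paper's system) is:

```python
from collections import defaultdict
from itertools import chain

def map_reduce(records, mapper, reducer):
    """Minimal MapReduce: map each record to (key, value) pairs,
    shuffle by key, then reduce each key's values."""
    groups = defaultdict(list)
    for key, value in chain.from_iterable(mapper(r) for r in records):
        groups[key].append(value)          # the "shuffle" step
    return {k: reducer(k, vs) for k, vs in groups.items()}

# Word count, the canonical MapReduce example.
docs = ["iot data data", "sensor data"]
counts = map_reduce(docs,
                    mapper=lambda d: [(w, 1) for w in d.split()],
                    reducer=lambda k, vs: sum(vs))
print(counts)  # {'iot': 1, 'data': 3, 'sensor': 1}
```

A production framework distributes the map and reduce phases across machines, but the key-grouping contract is exactly this one.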
"A method for detecting objects in dense scenes" by Chuanyun Xu, Yueping Zheng, Yang Zhang, Gang Li, and Ying Wang. Open Computer Science 12(1): 75–82 (2022). DOI: 10.1515/comp-2022-0231

Abstract: Recent object detectors have achieved excellent accuracy and speed. Even so, the most advanced detectors still struggle in dense scenes. In this article, we analyze the reasons for the drop in detection accuracy in dense scenes, starting from region proposals and localization loss. We find that low-quality proposal regions during training are the main factor affecting detection accuracy. To validate this, we built and trained a dense detection model based on Cascade R-CNN, which achieves a mAP of 0.413 on the SKU-110K sub-dataset. Our results show that improving the quality of proposal regions effectively improves detection accuracy in dense scenes.
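The central claim above is that low-quality proposal regions hurt training in dense scenes. Proposal quality is typically measured by intersection-over-union (IoU) against ground truth boxes; the sketch below filters out low-quality proposals with that measure. The threshold and boxes are illustrative, and this is not the authors' Cascade R-CNN pipeline:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def filter_proposals(proposals, gt_boxes, thresh=0.5):
    """Keep only proposals whose best IoU with any ground-truth box
    reaches thresh, discarding the low-quality regions that degrade
    training in dense scenes."""
    return [p for p in proposals
            if max(iou(p, g) for g in gt_boxes) >= thresh]

gts = [(0, 0, 10, 10)]
props = [(1, 1, 10, 10), (20, 20, 30, 30)]
print(filter_proposals(props, gts))  # [(1, 1, 10, 10)]
```

Cascade R-CNN applies this idea in stages, retraining heads at progressively stricter IoU thresholds.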
"An ROI-based robust video steganography technique using SVD in wavelet domain" by Urmila Pilania, Rohit Tanwar, and Prinima Gupta. Open Computer Science 12(1): 1–16 (2022). DOI: 10.1515/comp-2020-0229

Abstract: Steganography embeds secret information in a suitable cover file, such as text, image, audio, or video, so that the secret remains invisible to the outside world. The literature on video steganography reveals a tradeoff among the evaluation parameters: higher capacity, for example, usually results in lower robustness or imperceptibility. In this article, we propose a technique that achieves high capacity along with the required robustness. Embedding capacity is increased using singular value decomposition (SVD) compression. To achieve the desired robustness, we constrain embedding of the secret message to the region of interest (ROI) of the cover video, which also maintains the required imperceptibility. We use the Haar-based lifting scheme in the wavelet domain for embedding because of its intrinsic benefits. The technique is implemented in MATLAB, and analysis of the results on the prespecified steganography parameters confirms its effectiveness.
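The technique relies on the Haar-based lifting scheme. A minimal integer-lifting sketch of one decomposition level (predict, then update) follows, showing the lossless invertibility that makes lifting attractive for embedding; this is a textbook Haar lifting step, not the authors' full ROI/SVD pipeline:

```python
def haar_lift(signal):
    """One level of the integer Haar lifting transform:
    predict (detail = odd - even), then update (approx = even + detail//2)."""
    even, odd = signal[0::2], signal[1::2]
    detail = [o - e for o, e in zip(odd, even)]
    approx = [e + d // 2 for e, d in zip(even, detail)]
    return approx, detail

def haar_unlift(approx, detail):
    """Exact inverse of haar_lift; integer lifting is lossless."""
    even = [a - d // 2 for a, d in zip(approx, detail)]
    odd = [d + e for d, e in zip(detail, even)]
    out = []
    for e, o in zip(even, odd):
        out += [e, o]
    return out

x = [5, 7, 3, 4]
approx, detail = haar_lift(x)
assert haar_unlift(approx, detail) == x  # perfect reconstruction
```

In a wavelet-domain embedder, secret bits would be written into the detail coefficients before inverting the transform, so the cover is recovered with only small, controlled perturbations.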
"Modelling the interdependent relationships among epidemic antecedents using fuzzy multiple attribute decision making (F-MADM) approaches" by Dharyll Prince M. Abellana. Open Computer Science 11(1): 305–329 (2021). DOI: 10.1515/comp-2020-0213

Abstract: With the high incidence of dengue epidemics in developing countries, it is crucial to understand their dynamics from a holistic perspective. This paper analyzes different types of epidemic antecedents from a cybernetics perspective using a structural modelling approach. Its novelty is twofold. First, it analyzes antecedents that may be social, institutional, environmental, or economic in nature; since this type of study has not been done in the context of dengue epidemic modelling, the paper offers a fresh perspective on the topic. Second, it pioneers the use of fuzzy multiple attribute decision making (F-MADM) approaches for modelling epidemic antecedents. As such, the paper provides an avenue for cross-fertilization between scholars working in soft computing and epidemiological modelling.
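F-MADM methods commonly represent linguistic judgments as triangular fuzzy numbers, aggregate them, and defuzzify to a crisp score. The sketch below shows only that generic building block; the specific F-MADM technique used in the paper is not reproduced here, and the numbers are illustrative:

```python
# A triangular fuzzy number is a (low, mode, high) triple.

def fuzzy_weighted_sum(ratings, weights):
    """Aggregate triangular fuzzy ratings with crisp weights, component-wise."""
    l = sum(w * r[0] for w, r in zip(weights, ratings))
    m = sum(w * r[1] for w, r in zip(weights, ratings))
    u = sum(w * r[2] for w, r in zip(weights, ratings))
    return (l, m, u)

def defuzzify(tfn):
    """Centroid defuzzification of a triangular fuzzy number."""
    return sum(tfn) / 3

# Two criteria scored on a fuzzy linguistic scale, with crisp weights.
ratings = [(2, 3, 4), (6, 7, 8)]
weights = [0.4, 0.6]
score = defuzzify(fuzzy_weighted_sum(ratings, weights))
print(round(score, 2))  # 5.4
```

Ranking alternatives by such crisp scores is the usual final step of an F-MADM analysis, after the fuzzy arithmetic has captured the vagueness of the original judgments.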
"The look at the various uses of VR" by P. Voštinár, D. Horváthová, Martin Mitter, and M. Bako. Open Computer Science 11(1): 241–250 (2021). DOI: 10.1515/comp-2020-0123

Abstract: Virtual, augmented, and mixed reality (VR, AR, and MR) have infiltrated not only gaming, industry, engineering, live events, entertainment, real estate, retail, and the military but, as surveys indicate, also healthcare and education. In all these areas there is a shortage of VR, AR, and MR software development experts. Our intention at the Department of Computer Science, Faculty of Natural Sciences, Matej Bel University in Banská Bystrica, Slovakia, is therefore to focus on education and outreach in these areas. The aim of this article is to show the role of interactivity in different VR applications and its impact on users in three areas: gaming, healthcare, and education. For one application, Arachnophobia, we also present the results of questionnaire-based research.
"The effect of hyperparameter search on artificial neural network in human activity recognition" by J. Suto. Open Computer Science 11(1): 411–422 (2021). DOI: 10.1515/comp-2020-0227

Abstract: In the last decade, many researchers have applied shallow and deep networks to human activity recognition (HAR). The current trend in HAR research is to apply deep learning to extract features and classify activities from raw data. However, we observed that the authors of previous studies did not perform an efficient hyperparameter search on their artificial neural network (shallow or deep) classifiers. In this article, we therefore demonstrate the effect of random and Bayesian parameter search on a shallow neural network using five HAR databases. The results show that a shallow neural network with proper parameter optimization can achieve similar or even better recognition accuracy than the previous best deep classifiers on all databases. In addition, we draw conclusions about the advantages and disadvantages of the two hyperparameter search techniques.
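Of the two search techniques compared, random search is easy to sketch. In the snippet below the toy objective merely stands in for the validation accuracy of a shallow network, and the search space bounds are illustrative assumptions:

```python
import random

def random_search(objective, space, trials, seed=0):
    """Sample hyperparameters uniformly from the space and keep the best."""
    rng = random.Random(seed)
    best_params, best_score = None, float("-inf")
    for _ in range(trials):
        params = {name: rng.uniform(lo, hi) for name, (lo, hi) in space.items()}
        score = objective(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Toy objective: peaks at lr = 0.01, momentum = 0.9 (stand-in for accuracy).
def objective(p):
    return -(p["lr"] - 0.01) ** 2 - (p["momentum"] - 0.9) ** 2

space = {"lr": (0.0001, 0.1), "momentum": (0.5, 0.99)}
params, score = random_search(objective, space, trials=200)
```

Bayesian search differs only in how the next trial is chosen: instead of uniform sampling, it fits a surrogate model to past (params, score) pairs and samples where improvement is most likely.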
"Fuzzy Rank Based Parallel Online Feature Selection Method using Multiple Sliding Windows" by B. Venkatesh and J. Anuradha. Open Computer Science 11(1): 275–287 (2021). DOI: 10.1515/comp-2020-0169

Abstract: In real-world applications, the dimensions of data are often generated dynamically, and traditional batch feature selection methods are not suitable for streaming data. Online streaming feature selection methods have therefore gained attention, but existing methods suffer from low classification accuracy, failure to avoid redundant and irrelevant features, and a high number of selected features. In this paper, we propose a parallel online feature selection method using multiple sliding windows and fuzzy fast-mRMR analysis, which selects minimally redundant and maximally relevant features and thereby overcomes these drawbacks. Parallel processing is used to increase the speed of the method. To evaluate it, k-NN, SVM, and decision tree classifiers are used and compared against state-of-the-art online feature selection methods, with accuracy, precision, recall, and F1-score measured on benchmark datasets. The experimental analysis shows that the proposed method achieves more than 95% accuracy on most of the datasets and outperforms existing online streaming feature selection methods.
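The fast-mRMR component builds on the classic mRMR criterion: greedily pick the feature with maximum relevance to the labels and minimum redundancy with the features already selected, both measured by mutual information. A plain (non-fuzzy, non-streaming) sketch of that criterion on discrete data follows; the toy dataset is an illustrative assumption, and this is not the authors' parallel sliding-window method:

```python
from collections import Counter
import math

def mutual_info(xs, ys):
    """Mutual information (bits) between two discrete sequences."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def mrmr(features, labels, k):
    """Greedy mRMR: maximize relevance to labels minus mean redundancy
    with the already-selected features."""
    selected = []
    candidates = list(features)
    while len(selected) < k and candidates:
        def score(f):
            rel = mutual_info(features[f], labels)
            red = (sum(mutual_info(features[f], features[s]) for s in selected)
                   / len(selected)) if selected else 0.0
            return rel - red
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected

labels = [0, 0, 0, 0, 1, 1, 1, 1]
features = {"f1": [0, 0, 0, 1, 1, 1, 1, 1],  # strongly relevant
            "f2": [0, 0, 0, 1, 1, 1, 1, 1],  # redundant duplicate of f1
            "f3": [0, 0, 0, 1, 1, 1, 1, 0]}  # relevant, less redundant
print(mrmr(features, labels, 2))  # ['f1', 'f3']: the duplicate is skipped
```

The streaming variant applies the same relevance/redundancy trade-off incrementally, over features arriving within sliding windows, rather than to a fixed feature set.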
"Evaluation of the Benefits of Implementing a Smart Pedestrian Network System" by George Papageorgiou, A. Ioannou, Athanasios Maimaris, and Alexander N. Ness. Open Computer Science 11(1): 224–231 (2021). DOI: 10.1515/comp-2020-0127

Abstract: Information and Communication Technology (ICT) and recent advances in computer science can serve as a catalyst for promoting sustainable means of transport. Through ICT applications, active mobility can be promoted and established as a viable transport mode by providing relevant information that fosters social capital and encourages physical activity, contributing to a higher quality of life. Active mobility can also greatly reduce air pollution and improve health. For this purpose, the implementation of a Smart Pedestrian Network (SPN) information system is proposed. Such an implementation requires the collaboration of various stakeholders, including the public, local authorities, and local businesses. To convince stakeholders of the viability of implementing the SPN, the benefits of active mobility should be made clear. This paper proposes a framework that quantifies the benefits of active mobility under various market conditions so that stakeholders can assess the return on investing in the SPN. The benefits are shown to be significant and strongly in favor of investing in the technology and implementing the envisioned SPN system.