
Latest publications in Intelligent Decision Technologies-Netherlands

Analysis and comparison of various deep learning models to implement suspicious activity recognition in CCTV surveillance
Q4 Computer Science Pub Date : 2023-10-21 DOI: 10.3233/idt-230469
Dhruv Saluja, Harsh Kukreja, Akash Saini, Devanshi Tegwal, Preeti Nagrath, Jude Hemanth
The paper aims to analyze and compare various deep learning (DL) algorithms in order to develop a Suspicious Activity Recognition (SAR) system for closed-circuit television (CCTV) surveillance. Automated systems for detecting and classifying suspicious activities are crucial as technology’s role in safety and security expands. This paper addresses these challenges by creating a robust SAR system using machine learning techniques. It analyzes and compares evaluation metrics such as Precision, Recall, F1 Score, and Accuracy using various deep learning methods (convolutional neural network (CNN), Long short-term memory (LSTM) – Visual Geometry Group 16 (VGG16), LSTM – ResNet50, LSTM – EfficientNetB0, LSTM – InceptionNetV3, LSTM – DenseNet121, and Long-term Recurrent Convolutional Network (LRCN)). The proposed system improves threat identification, vandalism deterrence, fight prevention, and video surveillance. It aids emergency response by accurately classifying suspicious activities from CCTV footage, reducing reliance on human security personnel and addressing limitations in manual monitoring. The objectives of the paper include analyzing existing works, extracting features from CCTV videos, training robust deep learning models, evaluating algorithms, and improving accuracy. The conclusion highlights the superior performance of the LSTM-DenseNet121 algorithm, achieving an overall accuracy of 91.17% in detecting suspicious activities. This enhances security monitoring capabilities and reduces response time. Limitations of the system include subjectivity, contextual understanding, occlusion, false alarms, and privacy concerns. Future improvements involve real-time object tracking, collaboration with law enforcement agencies, and performance optimization. Ongoing research is necessary to overcome limitations and enhance the effectiveness of CCTV surveillance.
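The paper's comparison rests on Precision, Recall, F1 Score, and Accuracy. As a reference for how these four metrics relate, here is a minimal, generic sketch (not the paper's code) computing them from binary ground-truth and predicted labels (1 = suspicious activity):

```python
def classification_metrics(y_true, y_pred):
    """Compute Precision, Recall, F1 and Accuracy for binary labels (1 = suspicious)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    accuracy = (tp + tn) / len(y_true)
    return precision, recall, f1, accuracy
```

The per-model comparison in the paper amounts to running such a computation on each model's predictions over the same test clips.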
Citations: 0
Severity prediction in COVID-19 patients using clinical markers and explainable artificial intelligence: A stacked ensemble machine learning approach
Q4 Computer Science Pub Date : 2023-10-21 DOI: 10.3233/idt-230320
Krishnaraj Chadaga, Srikanth Prabhu, Niranjana Sampathila, Rajagopala Chadaga
The recent COVID-19 pandemic wreaked havoc worldwide, causing a massive strain on already-struggling healthcare infrastructure. Vaccines have been rolled out and seem effective in preventing a poor prognosis. However, a small part of the population (the elderly and people with comorbidities) continues to succumb to this deadly virus. Due to a lack of available resources, appropriate triaging and treatment planning are vital to improving outcomes for patients with COVID-19. Assessing whether a patient requires the hospital's Intensive Care Unit (ICU) is very important, since these units are not available for every patient. In this research, we automate this assessment with stacked ensemble machine learning models that predict ICU admission from general patient laboratory data. We have built an explainable decision support model which automatically scores COVID-19 severity for individual patients. Data from 1925 COVID-19 positive patients, sourced from three top-tier Brazilian hospitals, were used to design the model. Pearson's correlation and mutual information were utilized for feature selection, and the top 24 features were chosen as input for the model. The final stacked model could provide decision support on whether an admitted COVID-19 patient would require the ICU or not, with an accuracy of 88%. Explainable Artificial Intelligence (EAI) was used to undertake system-level insight discovery and investigate the impact of various clinical variables on decision-making. The most critical factors were found to be respiratory rate, temperature, blood pressure, lactate dehydrogenase, hemoglobin, and age. Healthcare facilities can use the proposed approach to categorize COVID-19 patients and prevent COVID-19 fatalities.
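In a stacked ensemble, the outputs of several base classifiers become input features for a meta-model that issues the final decision. The sketch below illustrates only that structure; the threshold rules, marker indices, cut-offs, and averaging meta-model are made-up stand-ins for the paper's trained learners, not values from the study:

```python
def stack_predict(base_models, meta_model, x):
    """Stacked ensemble: feed each base model's output into the meta-model."""
    meta_features = [m(x) for m in base_models]
    return meta_model(meta_features)

# Toy base learners: each thresholds one clinical marker (indices and cut-offs illustrative).
base_models = [
    lambda x: 1.0 if x[0] > 24 else 0.0,    # e.g. respiratory rate, breaths/min
    lambda x: 1.0 if x[1] > 38.0 else 0.0,  # e.g. temperature, degrees C
    lambda x: 1.0 if x[2] > 250 else 0.0,   # e.g. lactate dehydrogenase, U/L
]

# Toy meta-model: ICU flagged when the average base-model score exceeds 0.5.
def meta_model(probs):
    return int(sum(probs) / len(probs) > 0.5)
```

In the actual study both tiers would be fitted models, with the meta-model trained on out-of-fold predictions of the base learners.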
Citations: 0
Data analytics methods to measure service quality: A systematic review
Q4 Computer Science Pub Date : 2023-10-11 DOI: 10.3233/idt-230363
Georgia Gkioka, Thimios Bothos, Babis Magoutas, Gregoris Mentzas
The volume of user-generated content (UGC) regarding the quality of provided services has increased exponentially. Meanwhile, research on how to leverage this data using data-driven methods to systematically measure service quality is rather limited. Several works have employed Data Analytics (DA) techniques on UGC and shown that using such data to measure service quality is promising and efficient. The purpose of this study is to provide insights into the studies that use Data Analytics techniques to measure service quality in different sectors, identify gaps in the literature, and propose future directions. This study performs a systematic literature review (SLR) of Data Analytics techniques for measuring service quality in various sectors, focusing on the type of data, the approaches used, and the evaluation techniques found in these studies. The study derives a new categorization of the Data Analytics methods used in measuring service quality, identifies the most commonly used data sources, and provides insights regarding the methods and data sources used per industry. Finally, the paper concludes by identifying gaps in the literature and proposes future research directions, aiming to provide practitioners and academia with guidance on implementing DA for service quality assessment, complementary to traditional survey-based methods.
Citations: 0
Data storage query and traceability method of electronic certificate based on cloud computing and blockchain
Q4 Computer Science Pub Date : 2023-10-03 DOI: 10.3233/idt-230152
Huanying Chen, Bo Wei, Zhaoji Huang
In the age of big data, electronic data has developed rapidly and is gradually replacing traditional paper documents: in daily life, all kinds of records are saved as electronic files, and the development of electronic depository systems has accelerated accordingly. Electronic storage refers to recording actual events as electronic data through information technology in order to prove the time and content of those events; its application scenarios are extensive, including electronic contracts, online transactions, and intellectual property rights. However, because electronic data is vulnerable, existing electronic data depository systems carry security risks: their content is easily tampered with or destroyed, leading to the loss of depository information. Moreover, because existing systems are complex to operate, non-standard operation by some users can reduce the authenticity of the deposited information. To solve these problems, this paper designs an electronic data storage system based on cloud computing and blockchain technology. Data storage on cloud computing and blockchain is decentralized and its content cannot be tampered with, which effectively ensures the integrity and security of electronic information and better suits electronic storage scenarios. The paper first reviews the development of electronic data depository systems and cloud computing, then optimizes the depository system through a cloud-computing task scheduling model. Finally, the feasibility of the system was verified through experiments. The data showed that the functional efficiency of the system reached 0.843 for electronic-data sampling-point storage, 0.821 for uploading documents to be stored, 0.798 for downloading stored documents, 0.862 for viewing stored information, and 0.812 for file storage and certificate comparison verification, while the corresponding indexes of the traditional electronic data depository system were 0.619, 0.594, 0.618, 0.597, and 0.622. These results show that an electronic data storage system based on cloud computing and blockchain modeling can effectively manage electronic data and help relevant personnel verify it.
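The tamper-evidence the abstract attributes to blockchain comes from hash chaining: each stored record's hash covers both its payload and the previous record's hash, so altering any deposited item invalidates every later link. A minimal, illustrative sketch (not the paper's system):

```python
import hashlib
import json

def add_block(chain, record):
    """Append a record whose hash covers the payload and the previous block's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps(record, sort_keys=True)
    block_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    chain.append({"prev": prev_hash, "record": record, "hash": block_hash})
    return chain

def verify(chain):
    """Recompute every hash in order; any tampered record breaks the chain."""
    prev = "0" * 64
    for block in chain:
        payload = json.dumps(block["record"], sort_keys=True)
        expected = hashlib.sha256((prev + payload).encode()).hexdigest()
        if block["prev"] != prev or block["hash"] != expected:
            return False
        prev = block["hash"]
    return True
```

A depository built this way can detect modification of stored certificates, though a production system would also distribute the chain across nodes so no single party can rewrite it.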
Citations: 0
Stock market prediction based on sentiment analysis using deep long short-term memory optimized with namib beetle henry optimization
Q4 Computer Science Pub Date : 2023-09-12 DOI: 10.3233/idt-230191
Nital Adikane, V. Nirmalrani
Stock price prediction is a hot subject with enormous promise and difficulties. Stock prices are volatile and exceedingly challenging to predict accurately due to factors such as investor sentiment and market rumors. The development of effective models for accurate prediction is extremely tricky due to the complexity of stock data. Long Short-Term Memory (LSTM) networks discover patterns and insights that were not previously visible, and they can be leveraged to make highly accurate predictions. Therefore, to perform an accurate prediction of the next-day trend, this manuscript proposes a novel method called Updated Deep LSTM (UDLSTM) with Namib Beetle Henry optimization (BH-UDLSTM), applied to historical stock market data and sentiment analysis data. The UDLSTM model has improved prediction performance, is more stable during training, and increases data accuracy. Hybridizing the Namib beetle and Henry gas algorithms with the UDLSTM further enhances prediction accuracy with minimum error through an excellent balance of exploration and exploitation. BH-UDLSTM is then evaluated against several existing methods, and it is shown that the introduced approach predicts the stock price more accurately (92.45%) than the state-of-the-art.
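The abstract credits BH-UDLSTM's gains to balancing exploration and exploitation. As a generic illustration of that idea only (a simple stand-in, not the tunicate/whale hybrid itself), the sketch below mixes occasional random restarts (exploration) with Gaussian perturbation of the current best (exploitation), as one might do when tuning model hyperparameters against a validation-error objective:

```python
import random

def metaheuristic_search(objective, bounds, pop_size=20, iters=50, seed=0):
    """Generic population-based minimization mixing exploration (random restarts)
    with exploitation (perturbing the current best within the given bounds)."""
    rng = random.Random(seed)
    sample = lambda: [rng.uniform(lo, hi) for lo, hi in bounds]
    best = min((sample() for _ in range(pop_size)), key=objective)
    for _ in range(iters):
        for _ in range(pop_size):
            if rng.random() < 0.2:
                cand = sample()  # exploration: jump anywhere in the search space
            else:
                # exploitation: small Gaussian step around the best, clamped to bounds
                cand = [min(hi, max(lo, b + rng.gauss(0, 0.1 * (hi - lo))))
                        for b, (lo, hi) in zip(best, bounds)]
            if objective(cand) < objective(best):
                best = cand
    return best
```

Real metaheuristics such as TSA and WOA replace the perturbation and restart rules with their specific update equations, but the exploration/exploitation trade-off they balance is the same.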
Citations: 0
Design of laser image recognition system based on high performance computing of spatiotemporal data
Q4 Computer Science Pub Date : 2023-09-11 DOI: 10.3233/idt-230161
Zongfu Wu, Fazhong Hou
Due to the large scale and spatiotemporal dispersion of 3D (three-dimensional) point cloud data, current object recognition and semantic annotation methods still face high computational complexity and slow data processing, so that processing a dataset takes much longer than collecting it. This article studies the FPFH (Fast Point Feature Histograms) description method for the local spatial features of point cloud data, achieving efficient extraction of those features, and investigates the robustness of point cloud data under different sample densities and noise environments. The time delay between laser emission and reception is used to measure distance; on this basis, the measured object is continuously scanned to obtain the distance between the object and the measurement point. An existing three-dimensional coordinate conversion method is then used to obtain a two-dimensional lattice after the position conversion. Based on the basic requirements of point cloud data processing, a modular design is adopted, with core functional modules for point cloud input and output, visualization, filtering, key-point extraction, feature extraction, registration, and data acquisition, enabling efficient and convenient human-computer interaction with point clouds. The laser image recognition system was used to screen potential objects, with a success rate of 85% and an accuracy of 82%. The laser image recognition system based on spatiotemporal data used in this article has high accuracy.
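The ranging step relies on the round-trip time of flight of the laser pulse: range = speed of light x delay / 2, halved because the measured delay covers the path to the target and back. A minimal sketch of that conversion:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_distance(delay_s):
    """Convert a measured round-trip laser delay (seconds) to range (meters).
    The pulse travels to the target and back, hence the division by two."""
    return C * delay_s / 2.0
```

For example, a 1-microsecond round trip corresponds to a target roughly 150 m away; scanning repeats this measurement across the scene to build the point cloud.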
Citations: 0
HT-WSO: A hybrid meta-heuristic approach-aided multi-objective constraints for energy efficient routing in WBANs
Q4 Computer Science Pub Date : 2023-09-11 DOI: 10.3233/idt-220295
Bhagya Lakshmi A, Sasirekha K, Nagendiran S, Ani Minisha R, Mary Shiba C, Varun C.M, Sajitha L.P, Vimala Josphine C
Generally, Wireless Body Area Networks (WBANs) are regarded as collections of small sensor devices implanted in or attached to the human body. The nodes in a WBAN operate under severe resource constraints, so reliable and energy-efficient data transmission plays a significant role in implementing most emerging applications. The complicated channel environment, limited power supply, and varying link connectivity make constructing a WBAN routing protocol difficult. To provide a highly energy-efficient routing protocol, a new approach based on hybrid meta-heuristic development is suggested. Initially, all the sensor nodes in the WBAN are considered for experimentation; in general, a WBAN comprises both mobile and fixed sensor nodes. Since existing models are ineffective at achieving high energy efficiency, a new routing protocol is developed by proposing the Hybrid Tunicate-Whale Swarm Optimization (HT-WSO) algorithm. The proposed work considers multiple constraints in deriving the objective function: network efficiency is analyzed using an objective function formulated from distance, hop count, energy, path loss, load, and packet loss ratio. To attain the optimum value, the HT-WSO, derived from the Tunicate Swarm Algorithm (TSA) and the Whale Optimization Algorithm (WOA), is employed. In the end, the ability of the working model is estimated on diverse parameters and compared with existing traditional approaches.
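The abstract formulates its objective function from distance, hop count, energy, path loss, load, and packet loss ratio. A hedged sketch of such a multi-objective route cost is below; the weights and the convention that every metric is pre-normalized to [0, 1] are assumptions for illustration, not values from the paper:

```python
def route_fitness(route, weights=None):
    """Weighted multi-objective cost for a candidate route; lower is better.
    Metric names follow the abstract; weights are illustrative, and all
    metric values are assumed normalized to [0, 1]."""
    w = weights or {"distance": 0.20, "hops": 0.15, "energy": 0.25,
                    "path_loss": 0.15, "load": 0.15, "packet_loss": 0.10}
    # 'energy' is residual energy on the route, so more energy should lower the cost.
    return (w["distance"] * route["distance"]
            + w["hops"] * route["hops"]
            + w["energy"] * (1.0 - route["energy"])
            + w["path_loss"] * route["path_loss"]
            + w["load"] * route["load"]
            + w["packet_loss"] * route["packet_loss"])
```

An optimizer such as HT-WSO would search the space of candidate routes for the one minimizing this kind of fitness value.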
一般来说,无线体域网络(wban)被认为是有效植入或嵌入人体的小型传感器设备的集合。此外,无线宽带网络所包含的节点具有较大的资源约束。因此,可靠和节能的数据传输在大多数合并应用的实现和构建中起着重要的作用。由于信道环境复杂,供电有限,链路连通性多变,使得wban路由协议的构建变得困难。为了高效节能地提供路由协议,提出了一种混合元启发式开发方法。首先,考虑WBAN中所有传感器节点进行实验。一般来说,无线宽带网络由移动节点和固定传感器节点组成。针对现有模型无法实现高能效的问题,提出了一种新的路由协议,即混合被毛鲸群优化算法(HT-WSO)。随后,提出的工作考虑了多个约束来推导目标函数。使用目标函数分析网络效率,目标函数由距离、跳数、能量、路径损耗、负载和丢包率组成。为了获得最优值,采用了由被囊虫群算法(TSA)和鲸鱼优化算法(WOA)衍生的HT-WSO。最后,用不同的参数对工作模型的能力进行了估计,并与现有的传统方法进行了比较。仿真结果表明,该方法的性能比DHOA、Jaya、TSA和WOA分别提高了13.3%、23.5%、25.7%和27.7%。因此,结果表明所推荐的协议比wban具有更好的能源效率。
{"title":"HT-WSO: A hybrid meta-heuristic approach-aided multi-objective constraints for energy efficient routing in WBANs","authors":"Bhagya Lakshmi A, Sasirekha K, Nagendiran S, Ani Minisha R, Mary Shiba C, Varun C.M, Sajitha L.P, Vimala Josphine C","doi":"10.3233/idt-220295","DOIUrl":"https://doi.org/10.3233/idt-220295","url":null,"abstract":"Generally, Wireless Body Area Networks (WBANs) are regarded as the collection of small sensor devices that are effectively implanted or embedded into the human body. Moreover, the nodes included in the WBAN have large resource constraints. Hence, reliable and energy-efficient data transmission plays a significant role in the implementation and in constructing of most of the merging applications. Regarded to complicated channel environment, limited power supply, as well as varying link connectivity has made the construction of WBANs routing protocol become difficult. In order to provide the routing protocol in a high energy-efficient manner, a new approach is suggested using hybrid meta-heuristic development. Initially, all the sensor nodes in WBAN are considered for experimentation. In general, the WBAN is comprised of mobile nodes as well as fixed sensor nodes. Since the existing models are ineffective to achieve high energy efficiency, the new routing protocol is developed by proposing the Hybrid Tunicate-Whale Swarm Optimization (HT-WSO) algorithm. Subsequently, the proposed work considers the multiple constraints for deriving the objective function. The network efficiency is analyzed using the objective function that is formulated by distance, hop count, energy, path loss, and load and packet loss ratio. To attain the optimum value, the HT-WSO derived from Tunicate Swarm Algorithm (TSA) and Whale Optimization Algorithm (WOA) is employed. In the end, the ability of the working model is estimated by diverse parameters and compared with existing traditional approaches. 
The simulation outcome of the designed method achieves 13.3%, 23.5%, 25.7%, and 27.7% improved performance than DHOA, Jaya, TSA, and WOA. Thus, the results illustrate that the recommended protocol attains better energy efficiency over WBANs.","PeriodicalId":43932,"journal":{"name":"Intelligent Decision Technologies-Netherlands","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136025566","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
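As a rough illustration of the multi-constraint objective described in the abstract above, a weighted-sum fitness over the six listed metrics might look like the following sketch. The weights, the normalization into [0, 1], and the candidate-scan search are all hypothetical; the paper's exact formulation (and the HT-WSO search itself) is not reproduced here.

```python
# Hypothetical weights over the six metrics named in the abstract; the paper
# does not publish its exact coefficients, so these are illustrative only.
WEIGHTS = {"distance": 0.20, "hop_count": 0.15, "energy": 0.25,
           "path_loss": 0.15, "load": 0.15, "packet_loss": 0.10}

def route_fitness(route):
    """Lower is better: combine normalized per-route metrics into one score.

    `route` maps each metric name to a value normalized into [0, 1], with
    residual energy inverted so that every term is a cost to minimize."""
    return sum(WEIGHTS[m] * route[m] for m in WEIGHTS)

def best_route(candidates):
    # A meta-heuristic such as HT-WSO would search this space stochastically;
    # here we simply scan a candidate list to show how the objective ranks routes.
    return min(candidates, key=route_fitness)
```

In a real optimizer, `route_fitness` would be the function the tunicate/whale population evaluates at each iteration; the weighted sum is only one common way to collapse multiple constraints into a single objective.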
Citations: 0
Taobao transaction data mining based on time series evaluation under the background of big data
Q4 Computer Science Pub Date : 2023-09-11 DOI: 10.3233/idt-230111
Yanmin Zhang
With the emergence of e-commerce, more and more people conduct transactions through the Internet, producing a large volume of transaction data. Data mining decomposes large amounts of data according to data rules and analyzes network transaction data, providing the digital links companies need to analyze the market and develop their business. Although time series data mining is a smaller field than other branches of data mining, it remains an important problem: in the real world, correlation between data and time is very common, and the study of time series models plays an important role in data mining. Taobao data analysis differs with purpose; beyond simple statistics, in-depth research and analysis of Taobao data is currently rather insufficient, and time-series-based analysis of Taobao transaction data is rare. To improve the accuracy of Taobao transaction data mining and better formulate Taobao marketing strategy, this paper applies time series data mining technology to Taobao transaction data. The paper first introduces the role of Taobao transaction data mining, then describes the computational methods of time series data mining, including the re-description of time series and similarity measurement between time series. Finally, through data collection, data processing, and feature extraction, a data mining model for Taobao transactions is established, and two prediction evaluation indicators, prediction accuracy and entropy, are proposed. The experimental part verifies the effect of Taobao transaction data mining: the model achieves good prediction accuracy and entropy, with an average prediction accuracy of 94.26% and strong data mining ability.
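The two evaluation indicators named in the abstract, prediction accuracy and entropy, could be computed along these lines. The relative-error tolerance and the histogram binning are assumptions for illustration; the paper's exact definitions are not given in this listing.

```python
import math
from collections import Counter

def prediction_accuracy(actual, predicted, tolerance=0.05):
    """Fraction of predictions within a relative tolerance of the true value.

    One common choice for continuous transaction volumes; the paper's own
    accuracy formula may differ."""
    hits = sum(1 for a, p in zip(actual, predicted)
               if a != 0 and abs(a - p) / abs(a) <= tolerance)
    return hits / len(actual)

def shannon_entropy(series, bins=10):
    """Shannon entropy (in bits) of the series' value distribution after binning."""
    lo, hi = min(series), max(series)
    width = (hi - lo) / bins or 1.0  # avoid zero width for constant series
    counts = Counter(min(int((x - lo) / width), bins - 1) for x in series)
    n = len(series)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

A constant series has zero entropy, while a series spread evenly over the bins approaches `log2(bins)`; lower entropy in the forecast residuals would indicate a more predictable signal.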
Citations: 0
Machine learning and financial big data control using IoT
IF 1 Q4 Computer Science Pub Date : 2023-08-28 DOI: 10.3233/idt-230156
Jian Xiao
Machine learning algorithms have been widely used in risk prediction management systems for financial data. Early warning and control of financial risks are important areas of corporate investment decision-making; they can effectively reduce investment risk and ensure a company's stable development. With the development of the Internet of Things, an enterprise's financial information is obtained through various intelligent devices in its financial system, and big data provides high-quality services for the economy and society in the information era. However, financial data is large in volume, complex, and variable, which makes its analysis very difficult, and as machine learning algorithms are applied more deeply, their shortcomings are gradually exposed. To this end, this paper collects the financial data of a listed group from 2005 to 2020 and performs data preprocessing and feature selection, including removing missing values, outliers, and unrelated items. The data is then divided into a training set, used for model training, and a testing set, used to evaluate model performance. Three data control models are built and compared: one based on a machine learning algorithm, one based on a deep learning network, and the model based on artificial intelligence and big data technology proposed in this paper. For comparing risk event prediction, two indicators measure model performance: accuracy and mean squared error (MSE). Accuracy reflects the predictive ability of the model, namely the proportion of correctly predicted samples among all samples. Mean squared error evaluates the accuracy and error of the model, namely the mean of the squared deviations between predicted and true values. The prediction results of the three methods are compared with the actual values, and their accuracy and mean squared error are computed and compared. The experimental results show that the proposed model has higher accuracy and smaller mean squared error than the other two models and achieves 90% accuracy in risk event prediction, demonstrating a stronger ability to control financial data risk.
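The abstract's two indicators and its train/test protocol can be sketched as follows. The chronological split is an assumption suggested by the 2005-2020 yearly data range, not a detail stated in the abstract.

```python
def accuracy(y_true, y_pred):
    """Proportion of correctly predicted samples (e.g. binary risk-event labels)."""
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    return correct / len(y_true)

def mean_squared_error(y_true, y_pred):
    """Mean of the squared deviations between predicted and true values."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def train_test_split(rows, test_ratio=0.2):
    """Chronological split: the most recent rows form the test set, which is
    the usual choice for time-ordered financial records."""
    cut = int(len(rows) * (1 - test_ratio))
    return rows[:cut], rows[cut:]
```

With labels ordered by year, the model is fitted on the earlier portion and its `accuracy` and `mean_squared_error` are then measured on the held-out recent years.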
Citations: 0
Modeling dynamic social networks using concept of neighborhood theory
IF 1 Q4 Computer Science Pub Date : 2023-08-28 DOI: 10.3233/idt-220138
Subrata Paul, C. Koner, Anirban Mitra
Dynamic social network analysis deals with the study of how the nodes, edges, and associations among them within a network alter with time, forming a special category of social network. Geometrical analysis has been done on various occasions, but the approximate distances between nodes differ. Snapshots of the social network are taken at each time slot and then used in these studies. This paper discusses an efficient way of modeling dynamic social networks using the neighborhood theory of cellular automata. To the best of our knowledge and based on the literature survey, no model using the concept of neighborhood has been proposed so far; moreover, cellular automata, which have been an important tool in various applications, remain unexplored in this area of modeling. To that extent, this paper is the first attempt at modeling a social network that evolves in nature. A link prediction algorithm based on basic graph theory concepts is additionally proposed to handle the emergence of new nodes within the network. Theoretical and programming simulations are presented in support of the model. Finally, the paper discusses the model in a real-life scenario.
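A minimal sketch of neighborhood-based link prediction on an adjacency-dict graph is shown below. The common-neighbors score is only an illustrative stand-in for the "basic graph theory concepts" the abstract mentions; the paper's actual algorithm is not reproduced in this listing.

```python
def neighbors(graph, node):
    """`graph` is an adjacency dict: node -> set of adjacent nodes."""
    return graph.get(node, set())

def common_neighbor_score(graph, u, v):
    """Score a candidate edge (u, v) by the overlap of the two neighborhoods;
    a larger overlap suggests the link is more likely to emerge."""
    return len(neighbors(graph, u) & neighbors(graph, v))

def predict_links(graph, top_k=3):
    """Rank all currently absent edges by their common-neighbor score."""
    nodes = sorted(graph)
    candidates = [(u, v) for i, u in enumerate(nodes) for v in nodes[i + 1:]
                  if v not in graph[u]]
    candidates.sort(key=lambda e: common_neighbor_score(graph, *e), reverse=True)
    return candidates[:top_k]
```

In a dynamic setting, `predict_links` would be re-run on each time-slot snapshot, so newly arrived nodes are scored as soon as they acquire neighbors.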
Citations: 0
Intelligent Decision Technologies-Netherlands