Big Data Analytics (BDA) is an indispensable technique in today's digital world for dealing with the massive amounts of digital data generated by online and internet sources. These data are kept in repositories and processed by cluster nodes distributed across a wide network. Because of its volume and real-time generation, big data processing faces latency and throughput challenges. Modern frameworks such as Hadoop and Spark manage large amounts of data through HDFS, MapReduce, and in-memory analytics, but their migration cost is higher than usual. Genetic Algorithm-Based Optimization (GABO), MapReduce Scheduling (MRS), and data replication have provided answers to this challenge: with the multi-objective solutions a genetic algorithm provides, improved resource utilization and node availability raise processing performance in big data environments. This work develops a novel strategy for enhancing data processing performance in big data analytics, called the MapReduce Scheduling Based Non-Dominated Sorting Genetic Algorithm (MRSNSGA). The Hadoop-MapReduce paradigm places data in distributed blocks as chunks and schedules them among the cluster nodes of a wider network. Best-fit solutions offering low latency and short access times are extracted from the set of multi-objective solutions. Experiments were carried out in simulation with several inputs of varied location-node data and cluster racks. The results show that the data processing speed of big data analytics was enhanced by 30–35% over previous methodologies, and the optimization approach locates the best solutions among the multi-objective solutions at a 24–30% improved rate across cluster nodes.
{"title":"Improving big data analytics data processing speed through map reduce scheduling and replica placement with HDFS using genetic optimization techniques","authors":"M.R. Sundara Kumar, H.S. Mohan","doi":"10.3233/jifs-240069","DOIUrl":"https://doi.org/10.3233/jifs-240069","url":null,"abstract":"Big Data Analytics (BDA) is an unavoidable technique in today’s digital world for dealing with massive amounts of digital data generated by online and internet sources. It is kept in repositories for data processing via cluster nodes that are distributed throughout the wider network. Because of its magnitude and real-time creation, big data processing faces challenges with latency and throughput. Modern systems such as Hadoop and SPARK manage large amounts of data with their HDFS, Map Reduce, and In-Memory analytics approaches, but the migration cost is higher than usual. With Genetic Algorithm-based Optimization (GABO), Map Reduce Scheduling (MRS) and Data Replication have provided answers to this challenge. With multi objective solutions provided by Genetic Algorithm, resource utilization and node availability improve processing performance in large data environments. This work develops a novel creative strategy for enhancing data processing performance in big data analytics called Map Reduce Scheduling Based Non-Dominated Sorting Genetic Algorithm (MRSNSGA). The Hadoop-Map Reduce paradigm handles the placement of data in distributed blocks as a chunk and their scheduling among the cluster nodes in a wider network. Best fit solutions with high latency and low accessing time are extracted from the findings of various objective solutions. Experiments were carried out as a simulation with several inputs of varied location node data and cluster racks. Finally, the results show that the speed of data processing in big data analytics was enhanced by 30–35% over previous methodologies. Optimization approaches developed to locate the best solutions from multi-objective solutions at a rate of 24–30% among cluster nodes.","PeriodicalId":509313,"journal":{"name":"Journal of Intelligent & Fuzzy Systems","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140252096","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Efficient feature representation is the key to improving crowd counting performance. CNNs and Transformers are the two feature extraction frameworks most commonly used in crowd counting. A CNN excels at hierarchically extracting local features to obtain a multi-scale representation of the image, but it struggles to capture global features. A Transformer, on the other hand, can capture global feature representations by using cascaded self-attention to model long-range dependencies, but it often overlooks local detail. Relying solely on either CNN or Transformer for crowd counting therefore has limitations. In this paper, we propose TCHNet, a crowd counting model that combines the CNN and Transformer frameworks. The model employs the CMT (CNNs Meet Vision Transformers) backbone as its Feature Extraction Module (FEM), hierarchically extracting local and global crowd features with a combination of convolution and self-attention. To obtain more comprehensive local spatial information, an improved Progressive Multi-scale Learning Process (PMLP) is introduced into the FEM, guiding the network to learn at three granularity levels. The features from these three granularity levels are then fed into a Multi-scale Feature Aggregation Module (MFAM) for fusion. Finally, a Multi-Scale Regression Module (MSRM) handles the fused multi-scale features, yielding crowd features rich in both high-level semantics and low-level detail. Experimental results on five benchmark datasets demonstrate that TCHNet achieves highly competitive performance compared with popular crowd counting methods.
{"title":"Transformer-CNN hybrid network for crowd counting","authors":"Jiamao Yu, Ying Yu, Jin Qian, Xing Han, Feng Zhu, Zhiliang Zhu","doi":"10.3233/jifs-236370","DOIUrl":"https://doi.org/10.3233/jifs-236370","url":null,"abstract":"Efficient feature representation is the key to improving crowd counting performance. CNN and Transformer are the two commonly used feature extraction frameworks in the field of crowd counting. CNN excels at hierarchically extracting local features to obtain a multi-scale feature representation of the image, but it struggles with capturing global features. Transformer, on the other hand, could capture global feature representation by utilizing cascaded self-attention to capture remote dependency relationships, but it often overlooks local detail information. Therefore, relying solely on CNN or Transformer for crowd counting has certain limitations. In this paper, we propose the TCHNet crowd counting model by combining the CNN and Transformer frameworks. The model employs the CMT (CNNs Meet Vision Transformers) backbone network as the Feature Extraction Module (FEM) to hierarchically extract local and global features of the crowd using a combination of convolution and self-attention mechanisms. To obtain more comprehensive spatial local information, an improved Progressive Multi-scale Learning Process (PMLP) is introduced into the FEM, guiding the network to learn at different granularity levels. The features from these three different granularity levels are then fed into the Multi-scale Feature Aggregation Module (MFAM) for fusion. Finally, a Multi-Scale Regression Module (MSRM) is designed to handle the multi-scale fused features, resulting in crowd features rich in high-level semantics and low-level detail. Experimental results on five benchmark datasets demonstrate that TCHNet achieves highly competitive performance compared to some popular crowd counting methods.","PeriodicalId":509313,"journal":{"name":"Journal of Intelligent & Fuzzy Systems","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140253708","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Temporal sentence grounding in videos (TSGV) aims to retrieve, from an untrimmed video, the segment that semantically matches a given query. Most previous methods focus on learning either local or global query features before performing cross-modal interaction, ignoring the complementarity between the two. In this paper, we propose a novel Multi-Level Interaction Network for temporal sentence grounding in videos. The network explores query semantics at both the phrase and sentence levels: phrase-level features interact with video features to highlight video segments relevant to the query phrase, while sentence-level features interact with video features to learn global localization information. A stacked fusion gate module is designed to effectively capture the temporal relationships and semantic information among video segments. This module also introduces a gating mechanism that lets the model adaptively regulate the degree of fusion between video and query features, further improving the accuracy of target segment prediction. Extensive experiments on the ActivityNet Captions and Charades-STA benchmark datasets demonstrate that the proposed method outperforms state-of-the-art methods.
{"title":"Multi-Level interaction network for temporal sentence grounding in videos","authors":"Guangli Wu, Zhijun Yang, Jing Zhang","doi":"10.3233/jifs-234800","DOIUrl":"https://doi.org/10.3233/jifs-234800","url":null,"abstract":"Temporal sentence grounding in videos (TSGV), which aims to retrieve video segments from an untrimmed videos that semantically match a given query. Most previous methods focused on learning either local or global query features and then performed cross-modal interaction, but ignore the complementarity between local and global features. In this paper, we propose a novel Multi-Level Interaction Network for Temporal Sentence Grounding in Videos. This network explores the semantics of queries at both phrase and sentence levels, interacting phrase-level features with video features to highlight video segments relevant to the query phrase and sentence-level features with video features to learn more about global localization information. A stacked fusion gate module is designed, which effectively captures the temporal relationships and semantic information among video segments. This module also introduces a gating mechanism to enable the model to adaptively regulate the fusion degree of video features and query features, further improving the accuracy of predicting the target segments. Extensive experiments on the ActivityNet Captions and Charades-STA benchmark datasets demonstrate that the proposed method outperforms the state-of-the-art methods.","PeriodicalId":509313,"journal":{"name":"Journal of Intelligent & Fuzzy Systems","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140254673","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
To address the issue of unknown expert and attribute weights in the comprehensive assessment of hospitals, as well as the potential challenges posed by distance measures, this paper presents a probabilistic linguistic multi-attribute group decision-making (MAGDM) approach that utilizes correlation coefficients and an improved entropy. First, a correlation function, the probabilistic linguistic correlation coefficient, is introduced into the probabilistic linguistic term set (PLTS) to measure consistency among experts and thereby obtain the expert weights. Next, based on Shannon entropy, an improved probabilistic linguistic entropy is proposed to measure the uncertainty of a PLTS, accounting for the number of alternatives and the quantity of information. The attribute weights are then obtained from the correlation coefficient and the improved entropy. In addition, to overcome the counter-intuitive behavior of existing distance measures, this paper proposes a probabilistic linguistic distance measure based on the Bray-Curtis distance to quantify the differences between PLTSs. On this basis, the technique for order preference by similarity to ideal solution (TOPSIS) is applied with PLTSs to construct the MAGDM method and generate a ranking of the alternatives. Finally, the improved MAGDM method is applied to an example: the comprehensive evaluation of smart medical hospitals. The results show that, compared with existing methods, this method determines the weight information more reasonably and its decisions are not counter-intuitive, so it can evaluate hospitals more objectively.
{"title":"Research on probabilistic language multi-attribute group decision-making method based on correlation coefficient and improved entropy","authors":"Junwei Li, Mengmeng Lian, Yong Jin, Miaomiao Xia, Huaibin Hou","doi":"10.3233/jifs-235593","DOIUrl":"https://doi.org/10.3233/jifs-235593","url":null,"abstract":"To address the issue of unknown expert and attribute weights in the comprehensive assessment of hospitals, as well as the potential challenges posed by distance measures, this paper presents a probabilistic language multi-attribute group decision-making (MAGDM) approach that utilizes correlation coefficients and improved entropy. First, the correlation function, called the probabilistic linguistic correlation coefficient, is introduced into the probabilistic linguistic term set(PLTS) to measure the consistency among experts, so as to obtain the weights of experts. Next, based on Shannon entropy, an improved probabilistic linguistic entropy is proposed to measure the uncertainty of PLTS considering the number of alternatives and information quantity. Then, based on the correlation coefficient and improved entropy, the attribute weights are obtained. In addition, in order to overcome the counter-intuitive problem of existing distance measurement, this paper proposes a probabilistic language distance measurement method based on the Bray-Curtis distance to measure the differences between PLTSs. On this basis, by applying the technique for order preference by similarity to ideal solution (TOPSIS) method and using PLTSs to construct the MAGDM method, the ranking of alternative schemes is generated. Finally, the improved MAGDM method is applied to an example of the comprehensive evaluation of the smart medical hospitals. The results show that compared with the existing methods, this method can determine the weight information more reasonably, and the decision-making results are not counter-intuitive, so it can evaluate the hospital more objectively.","PeriodicalId":509313,"journal":{"name":"Journal of Intelligent & Fuzzy Systems","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140255334","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Density-based methods are significant approaches to outlier detection in high-dimensional datasets, and the local correlation integral (LOCI) is one of the best of them. To extend LOCI to fuzzy datasets, we must employ suitable metrics for the distance between two fuzzy numbers. The Euclidean distance is the classic choice in metric learning, but to overcome the curse of dimensionality we also apply a fractional distance metric. After introducing the FLOCI outlier detection algorithm for identifying fuzzy outliers, we study the efficiency of the proposed method in a series of numerical experiments, in which the obtained results were completely successful. We also compare the results with fuzzy versions of the distance-based ABOD and SOD methods to demonstrate the robustness of the approach. Beyond this, one of the main advantages of the new approach is that it determines an outlierness factor for each data point, which the classical LOCI method does not provide.
{"title":"An improved loci method for outlier detection in fuzzy datasets based on fractional distance metric and outlierness degree","authors":"Mehdi Hajiloei, A. F. Jahromi, Somayeh Zolmani","doi":"10.3233/jifs-234448","DOIUrl":"https://doi.org/10.3233/jifs-234448","url":null,"abstract":"Density based methods are significant approaches in outlier detection for high dimensional datasets and Local correlation integral (LOCI) is one of the best of them. To extend LOCI for fuzzy datasets, we should employ suitable metrics to measure the distance between two fuzzy numbers. Euclidean distance measure is a classic one in metric learning, but to overcome curse of dimensionality, we apply fractional distance metric too. Then, after introducing the FLOCI outlier detection algorithm for identifying the fuzzy outliers, we study the efficiency of the proposed method by doing some numerical experiments, in which the obtained results were completely successfull. We also compared the results with Fuzzy versions of Distance based ABOD and SOD methods to prove robustness of this approache. More than the above, one of the main advantages of the new approach is the determination of outlierness factor for each data which is not presented in classical LOCI method.","PeriodicalId":509313,"journal":{"name":"Journal of Intelligent & Fuzzy Systems","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140255179","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In recent years, uncertain fractional differential equations have been proposed to describe complex uncertain dynamic systems with historical characteristics. For wider application of uncertain fractional differential equations, research on their parameter estimation is of great importance. In this paper, based on least squares estimation and the uncertain hypothesis test, an algorithm for parameter estimation in uncertain fractional differential equations is discussed. Finally, we apply a model based on uncertain fractional differential equations to forecast the stock prices of three major U.S. stock indexes, and compare uncertain fractional differential equations with uncertain differential equations and stochastic differential equations.
{"title":"Least squares parameter estimation for uncertain fractional differential equations and application to stock model","authors":"Liu He, Yuanguo Zhu, Tingqing Ye","doi":"10.3233/jifs-237977","DOIUrl":"https://doi.org/10.3233/jifs-237977","url":null,"abstract":"In recent years, uncertain fractional differential equations was proposed for the description of complex uncertain dynamic systems with historical characteristics. For wider applications of uncertain fractional differential equations, researches on parameter estimation for uncertain fractional differential equations are of great importance. In this paper, based on the thought of least squares estimation and uncertain hypothesis test, an algorithm of parameter estimation for uncertain fractional differential equations is discussed. Finally, we consider the application of uncertain fractional differential equations based model to predict the forecasting stock price of three major indexes of U.S. stocks and make a comparison between uncertain fractional differential equations, uncertain differential equations and stochastic differential equations.","PeriodicalId":509313,"journal":{"name":"Journal of Intelligent & Fuzzy Systems","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140255257","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A human resources management plan is now recognised as one of the most important components of corporate strategy, because its major purpose is to engage with people, the most precious asset an organisation has. An organisation cannot achieve its objectives without the participation of individuals. With well-prepared HR techniques and an action plan for their execution, an organisation can effectively plan and manage individual processes that support its objectives and adapt nimbly to change. This investigation puts forward a fresh way for the board of directors of a private firm to increase its assets and advance its growth by using network-centred cloud programming. Small-company resources are improved by strengthening human resource management techniques, and the cloud SDN network is used for job scheduling with Q-convolutional reinforcement recurrent learning. The proposed technique attained a quadratic normalized square error of 60% (existing SDN 55%, HRM 58%) on the synthetic dataset; 62% (SDN 56%, HRM 59%) on the human resources dataset; and 64% (SDN 58%, HRM 59%) on the SyriaTel dataset.
{"title":"Synergizing enterprise resource management with technology through driving innovation and growth in business models","authors":"Mengtong Xie, Huaqi Chai","doi":"10.3233/jifs-235379","DOIUrl":"https://doi.org/10.3233/jifs-235379","url":null,"abstract":"A human resources management plan is presently recognised as one of the most important components of a corporate technique. This is due to the fact that its major purpose is to interact with people, who are the most precious asset that an organisation has. It is impossible for an organisation to achieve its objectives without the participation of individuals. An organisation may effectively plan as well as manage individual processes to support the organization’s objectives and adapt nimbly to any change if it has well-prepared HR techniques and an action plan for its execution. This investigation puts up a fresh way for the board of directors of a private firm to increase their assets and advance their growth by using cloud programming that is characterised by networks. The small company resource has been improved by strengthening human resource management techniques, and the cloud SDN network is used for job scheduling using Q-convolutional reinforcement recurrent learning. The proposed technique attained Quadratic normalized square error of 60%, existing SDN attained 55%, HRM attained 58% for Synthetic dataset; for Human resources dataset propsed technique attained Quadratic normalized square error of 62%, existing SDN attained 56%, HRM attained 59% ; proposed technique attained Quadratic normalized square error of 64%, existing SDN attained 58%, HRM attained 59% for SyriaTel dataset.","PeriodicalId":509313,"journal":{"name":"Journal of Intelligent & Fuzzy Systems","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140255271","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Many factors, such as population growth, the development of industry and technology, and increases in production and consumption, disrupt the ecological balance and cause climate change, a global problem. Determining the criteria that cause climate change is very important for finding effective solutions. In this study, the criteria were determined, weighted with Step-wise Weight Assessment Ratio Analysis (SWARA), and ranked according to their priorities with a two-layer fuzzy logic model. The fuzzy SWARA method allows the evaluation process, complicated by the difficulties and factors involved in decision-making, to be carried out more effectively and realistically. The risk and effect of climate change in Turkiye were evaluated regionally, though the developed model also has a wide area of application. The findings reveal that the Marmara and Central Anatolia regions face the highest climate change risk/effect, while the Eastern Anatolia region faces the lowest. Air pollution, population growth, and deforestation carry the highest weights. Important suggestions are presented, especially for the priority criteria. In this way, the factors that should be prioritized in addressing the environmental problem of climate change are revealed, making it easier for researchers and managers to provide more effective management.
{"title":"Assessing the risk and effect of climate change with two-layer fuzzy logic-SWARA: A comparative practice in Turkiye","authors":"Elanur Adar-Yazar, Buket Karatop, Selim Gökcan Karatop","doi":"10.3233/jifs-236298","DOIUrl":"https://doi.org/10.3233/jifs-236298","url":null,"abstract":"Many factors such as population growth, development of industry/technology, and increase in production-consumption disrupt the ecological balance and cause climate change, which is a global problem. Determining the criteria that cause climate change is very important in finding effective solutions to the problem. In the study, the criteria were determined, weighted with a new method, Step-wise Weight Assessment Ratio Analysis (SWARA), and ranked according to their priorities with two-layer fuzzy logic model. The Fuzzy SWARA method allows the evaluation process, which becomes complicated due to the difficulties and factors experienced in decision-making, to be carried out more effectively and realistically. The risk and effect of climate change in Turkiye were evaluated regionally. However, the developed model also has a wide application area. Research findings revealed that the highest risk/effect of climate change have the Marmara and Central Anatolia regions. The lowest risk region is the Eastern Anatolia. Air pollution, population growth and deforestation have the highest weights. Important suggestions have presented especially for priority criteria. In this way, the factors that should be prioritized in climate change environmental problem solutions have been revealed and will make it easier for researchers and managers to provide more effective management.","PeriodicalId":509313,"journal":{"name":"Journal of Intelligent & Fuzzy Systems","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140254934","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
One of the most critical tasks in analyzing a social network is to identify its central nodes. Numerous methods for determining centrality are documented in the literature; centrality contributes to online commerce by disseminating news, advertisements, and other content through central nodes. Existing methods capture a node's direct reachability. This study introduces a novel method for quantifying centrality in a fuzzy environment that takes into account both the reachability of nodes and their direct connections. Several critical properties are demonstrated, and a small Facebook network is used to illustrate the method. Appropriate tables and graphs present a comparative study with existing centrality measures.
{"title":"New concept of centrality measurement in fuzzy social networks","authors":"Rupkumar Mahapatra, Sovan Samanta, M. Pal","doi":"10.3233/jifs-232602","DOIUrl":"https://doi.org/10.3233/jifs-232602","url":null,"abstract":"The most critical task of a social network is to identify a central node. Numerous methods for determining centrality are documented in the literature. It contributes to online commerce by disseminating news, advertisements and other content via central nodes. Existing methods capture the node’s direct reachability. This study introduces a novel method for quantifying centrality in a fuzzy environment. This measurement takes into account the reachability of nodes and their direct connections. Several critical properties have been demonstrated. A small Facebook network is used to illustrate the issue. Additionally, appropriate tables and graphs present a comparative study with existing methods for centrality measurement.","PeriodicalId":509313,"journal":{"name":"Journal of Intelligent & Fuzzy Systems","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-03-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140256851","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Optimizing spectrum utilization gives the primary users of modern wireless communication technologies a higher probability of detection. This research studies how the NI-USRP hardware platform can be used to set up greedy cooperative spectrum sensing for cognitive radio networks. It focuses on energy detection and eigenvalue-based detection, both widely recognized for their ability to sense the spectrum without prior knowledge of the primary user signals. The hardware arrangement comprises one transmitter and two cognitive radio receivers. LabVIEW makes deployment simple and maximizes the detection probability across a large sample. Cooperative spectrum sensing is shown to be superior to non-cooperative sensing, reducing the risk of detection errors. The research also found that the OR combining rule yields a higher detection probability than the AND rule. These results emphasize the significance of extending cooperative spectrum sensing to improve overall detection capability. The energy detector operates at SNRs above 10 dB, while the eigenvalue detector continues to work down to an SNR of –9 dB.
{"title":"Enhancing spectrum utilization and primary user detection in cognitive radio networks through greedy cooperative spectrum sensing","authors":"S. Dhivya, A. Rajeswari","doi":"10.3233/jifs-239871","DOIUrl":"https://doi.org/10.3233/jifs-239871","url":null,"abstract":"The utilization of the spectrum is optimized through which primary users of modern wireless communication technologies might obtain a higher chance of detection. The research aims to study how the NI-USRP hardware platform can be used to set up greedy cooperative spectrum sensing for cognitive radio networks. Research primarily deals with energy detection and eigenvalue-based detection approaches, both of which are highly recognized for their capacity to sense the spectrum without having prior knowledge of the primary user signals. In the hardware arrangement, there is one transmitter and two cognitive radio receivers. LABVIEW makes it simple to deploy and maximizes the detection probability across a large sample. Here, it was demonstrated that cooperative spectrum sensing is superior to non-cooperative spectrum sensing, which results in a reduction in the risk of errors occurring during detection. The research discovered that the OR combination rule has a higher detection probability than the AND rule at the same time. The research emphasizes the significance of expanding cooperative spectrum sensing to improve overall detection capabilities. SNRs that are more than 10 dB allow the energy detector to operate, and the eigenvalue detector continues to work when the SNR drops to –9 dB.","PeriodicalId":509313,"journal":{"name":"Journal of Intelligent & Fuzzy Systems","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-03-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140256230","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}